Compare commits

152 Commits
11.0 ... 5.3

Author SHA1 Message Date
Alex Vandiver
3c7fdf8a82 Release Zulip Server 5.3. 2022-06-21 20:25:50 +00:00
Anders Kaseorg
b031537fe9 CVE-2022-31017: Fix edit event exposure in protected-history streams.
When editing an old message in a private stream with protected
history, the server would incorrectly send an API event including the
edited message to all of the stream’s current subscribers, including
those who should not have access to the old message. This API event is
ignored by official clients, so it could only be observed by a user
using a modified client or their browser’s developer tools.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
2022-06-21 18:23:30 +00:00
Alex Vandiver
9d3fb85897 install/upgrade: Allow new packages during apt-get upgrade.
`postgresql-14.4` is a notable upgrade in the PostgreSQL series, as it
fixes potential database corruption from `CREATE INDEX CONCURRENTLY`
statements which are run while rows are modified[1].  However, it also
requires an upgrade from `libllvm9` to `libllvm10`, which means it is
not installed by a mere `apt-get upgrade`.

Add the `--with-new-pkgs` flag to all of the potentially relevant
`apt-get upgrade` calls, so that this (and similar) packages are
upgraded successfully.

[1]: https://www.postgresql.org/docs/release/14.4/

(cherry picked from commit a35af3f38b)
2022-06-21 11:22:39 -07:00
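
As a rough illustration (not the actual script change), a Python wrapper in the style of Zulip's upgrade tooling might invoke apt with the new flag like this; the helper name and environment handling are assumptions:

```python
import os
import subprocess

def apt_get_upgrade() -> None:
    # Plain "apt-get upgrade" holds back any package whose upgrade would
    # install *new* packages (e.g. postgresql-14.4 pulling in libllvm10);
    # --with-new-pkgs allows those installs to proceed.
    env = dict(os.environ, DEBIAN_FRONTEND="noninteractive")
    subprocess.check_call(["apt-get", "-y", "--with-new-pkgs", "upgrade"], env=env)
```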
Mateusz Mandera
b5e64dd1ef get_old_unclaimed_attachments: Add docstring explaining the logic.
(cherry picked from commit a671ae9749)
2022-06-20 11:13:24 -07:00
Mateusz Mandera
b1156e6d67 do_delete_old_unclaimed_attachments: Consider ArchivedAttachment rows.
This function is oblivious to the existence of ArchivedAttachment, which
is incorrect. A file can be removed if and only if it is not referenced
by any Messages or ArchivedMessages.

(cherry picked from commit 09dc166b45)
2022-06-20 11:13:24 -07:00
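
A sketch of the invariant described above, in Django ORM idioms; the model and field names mirror Zulip's, but treat the exact signature as hypothetical:

```python
def is_attachment_deletable(attachment, archived_attachment) -> bool:
    # A file may be removed if and only if it is referenced by no
    # Message *and* no ArchivedMessage; checking only the live rows
    # would delete files still reachable from the archive.
    if attachment is not None and attachment.messages.exists():
        return False
    if archived_attachment is not None and archived_attachment.messages.exists():
        return False
    return True
```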
Mateusz Mandera
d918a09db8 test_upload: Fix some URLs to uploaded files.
Using http://localhost:9991 is incorrect; e.g., messages sent with
file URLs constructed this way cause do_claim_attachments to be called
with an empty list in potential_path_ids.

realm.host should be used in all these places, like in the other tests
in the file.

(cherry picked from commit 5ff4754090)
2022-06-20 11:13:24 -07:00
Alex Vandiver
70aed5e26c upgrade-zulip-from-git: init, then add remote.
30457ecd02 removed the `--mirror` from
initial clones, but did not add back `--bare`, which `--mirror`
implies.  This leads to `/srv/zulip.git` having a working tree in it,
with a `/srv/zulip.git/.git` directory.

This is mostly harmless, and since the bug was recent, not worth
introducing additional complexity into the upgrade process to handle.

Calling `git clone --bare`, however, would clone the refs into
`refs/heads/`, not the `refs/remotes/origin/` we want.  Instead, use
`git init --bare`, followed by `git remote add origin`.  The remote
will be fetched by the usual `git fetch --all --prune` which is below.

(cherry picked from commit 5bdc4b3562)
2022-06-20 11:01:27 -07:00
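
A minimal sketch of the sequence the commit describes, as subprocess calls (the paths and function name are illustrative):

```python
import subprocess

def create_cache_repo(repo_dir: str, remote_url: str) -> None:
    # "git clone --bare" would fetch the remote's branches into
    # refs/heads/; init + remote add keeps refs/heads/ free for local
    # deployment branches, and the usual fetch fills refs/remotes/origin/.
    subprocess.check_call(["git", "init", "--bare", repo_dir])
    subprocess.check_call(["git", "-C", repo_dir, "remote", "add", "origin", remote_url])
    subprocess.check_call(["git", "-C", repo_dir, "fetch", "--all", "--prune"])
```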
Alex Vandiver
30ef55ca6c upgrade-zulip-from-git: Check fetch refspecs, not mirror flag.
While the `remote.origin.mirror` boolean being set is a very good
proxy for having been cloned with `--mirror`, it is technically only
used when pushing into the remote[1].  What we care about is whether
fetches from this remote will overwrite `refs/heads/`, or all of
`refs/` -- the latter of which is most likely, from having run `git
clone --bare`.

Detect either of these fetch refspecs, and not the mirror flag.  We
let the upgrade process error out if `remote.origin.fetch` is unset,
as that represents an unexpected state.  We ignore failures to unset
the `remote.origin.mirror` flag, in case it is not set already.

[1]: https://git-scm.com/docs/git-config#Documentation/git-config.txt-remoteltnamegtmirror

(cherry picked from commit 1639792e9e)
2022-06-20 11:01:23 -07:00
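
A sketch of detecting the problematic refspecs (the helper name is hypothetical; the two patterns are the ones the commit names):

```python
import subprocess

def fetch_overwrites_local_refs(repo_dir: str) -> bool:
    # A --mirror or --bare clone fetches with "+refs/*:refs/*" or
    # "+refs/heads/*:refs/heads/*", clobbering local refs; error out if
    # remote.origin.fetch is unset, as that is an unexpected state.
    result = subprocess.run(
        ["git", "-C", repo_dir, "config", "--get", "remote.origin.fetch"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError("remote.origin.fetch is unset; unexpected state")
    refspec = result.stdout.strip()
    return refspec.endswith(":refs/*") or refspec.endswith(":refs/heads/*")
```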
Alex Vandiver
09bd546210 upgrade-zulip-from-git: Stop mirroring the remote.
The local `/srv/zulip.git` directory has been cloned with `--mirror`
since it was first created as a local cache in dc4b89fb08.  This
made some sense at the time, since it was purely a cache of the
remote, and not a home to local branches of its own.

That changed in 3f83b843c2, when we began using `git worktree`,
which caused the `deployment-...` branches to begin being stored in
`/srv/zulip.git`. This caused intermixing of local and remote
branches.

When 02582c6956 landed, the addition of `--prune` caused all but the
most recent deployment branch to be deleted upon every fetch --
leaving previous deployments with non-existent branches checked out:

```
zulip@example-prod-host:~/deployments/last$ git status
On branch deployment-2022-04-15-23-07-55

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)
	new file:   .browserslistrc
	new file:   .codecov.yml
	new file:   .codespellignore
	new file:   .editorconfig
[...snip list of every file in repo...]
```

Switch `/srv/zulip.git` to no longer be a `--mirror` cache of the
origin.  We reconfigure the remote to drop `remote.origin.mirror`, and
delete all refs under `refs/pulls/` and `refs/heads/`, while
preserving any checked-out branches.  `refs/pulls/`, if the remote is
the canonical upstream, contains _tens of thousands_ of refs, so
pruning those refs trims off 20% of the repository size.

Those savings require a `git gc --prune=now`, otherwise the dangling
objects are ejected from the packfiles, which would balloon the
repository up to more than three times its previous size.  Repacking
the repository is reasonable, in general, after removing such a large
number of refs -- and the `--prune=now` is safe and will not lose
data, as the `--mirror` was good at ensuring that the repository could
not be used for any local state.

The refname in the upgrade process was previously resolved from the
union of local and remote refs, since they were in the same namespace.
We instead now only resolve arguments as tags, then origin branches;
this means that stale local branches will be skipped.  Users who want
to deploy from local branches can use `--remote-url=.`.

Because the `scripts/lib/upgrade-zulip-from-git` file is "stage 1" and
run from the old version's code, this will take two invocations of
`upgrade-zulip-from-git` to take effect.

Fixes #21901.

(cherry picked from commit 30457ecd02)
2022-06-20 11:01:08 -07:00
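
The conversion the commit describes, sketched with illustrative names; `git update-ref -d` and `git gc --prune=now` are the real plumbing commands:

```python
import subprocess

def unmirror_cache_repo(repo_dir: str, checked_out: set) -> None:
    subprocess.call(  # the flag may not be set; ignore failure to unset it
        ["git", "-C", repo_dir, "config", "--unset", "remote.origin.mirror"]
    )
    refs = subprocess.check_output(
        ["git", "-C", repo_dir, "for-each-ref", "--format=%(refname)",
         "refs/pulls/", "refs/heads/"],
        text=True,
    ).splitlines()
    for ref in refs:
        if ref not in checked_out:  # preserve checked-out deployment branches
            subprocess.check_call(["git", "-C", repo_dir, "update-ref", "-d", ref])
    # Repack now, or the newly dangling objects get ejected from the
    # packfiles as loose objects, ballooning the repository.
    subprocess.check_call(["git", "-C", repo_dir, "gc", "--prune=now"])
```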
Alex Vandiver
8619f858f6 upgrade: Add --skip-restart which preps but does not restart.
This adds a --skip-restart flag which puts `deployments/next` into a
state where it can be restarted into, but holds off on conducting that
restart.

This requires many of the same guarantees as `--skip-tornado`, in
terms of there being no Puppet or database schema changes between the
versions.  Enforce those with `--skip-restart`, and also broaden both
flags to prevent other, less common changes which might nonetheless
affect the other deploy.

(cherry picked from commit 6337f17923)
2022-06-20 11:00:14 -07:00
Alex Vandiver
97f49cc555 upgrade: Enforce that --skip-tornado does not have Puppet or DB changes.
(cherry picked from commit 86a4e64726)
2022-06-20 11:00:14 -07:00
Alex Vandiver
096e7af06d upgrade: Copy cache prefix with --skip-tornado.
Because Tornado and Django use memcached as a shared cache for
checking session information, they must agree on the prefix used to
store those values.

Subsequent commits will work to ensure that it is always _safe_ to
share that cache.

(cherry picked from commit ef7c2ea0ea)
2022-06-20 11:00:14 -07:00
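
Roughly what "copying the cache prefix" means in practice; `var/remote_cache_prefix` is where Zulip deployments keep the prefix, but treat the paths here as illustrative:

```python
import shutil
from pathlib import Path

def copy_cache_prefix(current_deploy: Path, next_deploy: Path) -> None:
    # Tornado (old code) and Django (new code) must derive the same
    # memcached KEY_PREFIX, or every session lookup from one of them
    # will miss and bounce the user to the login page.
    src = current_deploy / "var" / "remote_cache_prefix"
    dst = next_deploy / "var" / "remote_cache_prefix"
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(src, dst)
```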
Alex Vandiver
e6f52eb2a0 upgrade: Only run Django system checks once, explicitly.
These are expensive, and moving them to one explicit call early has
considerable time savings in the critical period:

```
$ hyperfine './manage.py fill_memcached_caches' './manage.py fill_memcached_caches --skip-checks'
Benchmark #1: ./manage.py fill_memcached_caches
  Time (mean ± σ):      5.264 s ±  0.146 s    [User: 4.885 s, System: 0.344 s]
  Range (min … max):    5.119 s …  5.569 s    10 runs

Benchmark #2: ./manage.py fill_memcached_caches --skip-checks
  Time (mean ± σ):      3.090 s ±  0.089 s    [User: 2.853 s, System: 0.214 s]
  Range (min … max):    2.950 s …  3.204 s    10 runs

Summary
  './manage.py fill_memcached_caches --skip-checks' ran
    1.70 ± 0.07 times faster than './manage.py fill_memcached_caches'
```

(cherry picked from commit fa77be6e6c)
2022-06-20 11:00:14 -07:00
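
The pattern, as a sketch; `fill_memcached_caches` is the Zulip management command benchmarked above, and `--skip-checks` is the standard django-admin flag (available since Django 3.0):

```python
import subprocess

# Run the expensive system checks exactly once, early...
subprocess.check_call(["./manage.py", "check"])
# ...then skip them for every management command run during the
# critical period of the upgrade.
subprocess.check_call(["./manage.py", "fill_memcached_caches", "--skip-checks"])
```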
Alex Vandiver
51ff34083e restart-server: Treat as a start if nothing is running.
Treating the restart as a start is important in reducing the critical
period during upgrades -- we call restart even when we suspect the
services are stopped, because puppet has a small possibility of
placing them in indeterminate state.  However, restart orders the
workers first, then tornado/django, which prolongs the outage.

Recognize when no services are currently started, and switch to acting
like a start, not a restart, which places tornado/django first.

(cherry picked from commit 3928606886)
2022-06-20 11:00:14 -07:00
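
The ordering logic, sketched; the service lists and the direct `supervisorctl` invocation are illustrative:

```python
import subprocess

def restart_services(frontends: list, workers: list, running: set) -> None:
    if not running:
        # Nothing is up: this is really a start, so bring the
        # user-facing tornado/django processes up first.
        order, action = frontends + workers, "start"
    else:
        # A true restart drains the workers before the frontends.
        order, action = workers + frontends, "restart"
    subprocess.check_call(["supervisorctl", action, *order])
```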
Alex Vandiver
41038c3510 stop-server: Only stop services if they exist and are running.
This hides ugly output if the services were already stopped:

```
2022-03-25 23:26:04,165 upgrade-zulip-stage-2: Stopping Zulip...
process-fts-updates: ERROR (not running)
zulip-django: ERROR (not running)
zulip_deliver_scheduled_emails: ERROR (not running)
zulip_deliver_scheduled_messages: ERROR (not running)

Zulip stopped successfully!
```

Being able to skip shelling out to `supervisorctl` if all services
are already stopped is also a significant performance improvement.

(cherry picked from commit 3717c329b8)
2022-06-20 11:00:14 -07:00
Alex Vandiver
25c87d9823 upgrade: Check with zulip-puppet-apply to see if we can skip it.
(cherry picked from commit 2e5a079ef4)
2022-06-20 11:00:14 -07:00
Alex Vandiver
14e60fd203 zulip-puppet-apply: Make --force --noop have an exit code.
(cherry picked from commit ecfc23bd0b)
2022-06-20 11:00:14 -07:00
Alex Vandiver
236508f61e zulip-puppet-apply: Factor out the --noop returncode logic.
(cherry picked from commit c91725bfb5)
2022-06-20 11:00:14 -07:00
Alex Vandiver
4bbcfd0499 upgrade: Skip the pre-work if the server is already stopped.
This optimization makes sense if the server is already running, but if
it is already stopped, it is just prolonging the downtime.

(cherry picked from commit b15d8e0118)
2022-06-20 11:00:14 -07:00
Alex Vandiver
80bf880d6f upgrade: Fill caches before the critical period, if possible.
(cherry picked from commit 05af4b0a11)
2022-06-20 11:00:14 -07:00
Alex Vandiver
6a3488d7ed fill_memcached_caches: Document possible arguments to --cache.
(cherry picked from commit 3d66dd9eeb)
2022-06-20 11:00:14 -07:00
Alex Vandiver
7039f1d182 upgrade: Move puppet class renames earlier.
These do not need to happen during the critical period when the server
is stopped.

(cherry picked from commit 2f7068ffbb)
2022-06-20 11:00:14 -07:00
Alex Vandiver
4fa62a25e2 docs: Correct and clarify wal-g backup documentation.
WAL archives are written every 16 MiB of WAL (one segment), and by
default do not have an upper limit on how out of date they are, as
`archive_timeout` defaults to 0.

Also emphasize that these are streaming backups, not just one
point-in-time backup daily.

Fixes #21976.

(cherry picked from commit 18230fcd99)
2022-06-02 19:38:50 +00:00
Anders Kaseorg
09678193c9 stream_create: Fix crash on stream creation error.
Commit a9ca5f603b (#15863) incorrectly
translated this; stream_name_error is not a jQuery object.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit f2d0ae3255)
2022-06-01 14:13:00 -07:00
Tim Abbott
28a8655a9d i18n: Update translation data from Transifex.
Includes a new Mongolian translation.
2022-05-26 11:00:21 -07:00
Anders Kaseorg
cf86e7b3d8 apt-repos: Remove now-unneeded Ubuntu 21.10 repository on 22.04.
Followup to commit f8957863a2 (#22055).

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 98ed6248e3)
2022-05-26 10:35:49 -07:00
Anders Kaseorg
472e216cec Revert "apt-repos: Downgrade PostgreSQL to dodge PGroonga regression."
This reverts commit 9c8d2b7be3 (#21115).

The PostgreSQL fix was released 2022-05-12.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit f8957863a2)
2022-05-18 17:43:03 -07:00
Alex Vandiver
345939dc64 puppet: Only fix certbot certificates if https is enabled.
This is a reprise of c97162e485, but for the case where certbot
certs are no longer in use by way of enabling `http_only` and letting
another server handle TLS termination.

Fixes: #22034.
2022-05-17 15:08:44 -07:00
Alex Vandiver
029b72c496 puppet: Include the OS-enabled nginx module configurations.
This allows system-level configuration to be done by `apt-get install`
of nginx modules, which place their load statements in this directory.

The initial import in ed0cb0a5f8 of the stock nginx config omitted
this include -- one potential explanation is that it was an effort to
reduce the memory footprint of the server.

The default nginx install enables:

    50-mod-http-auth-pam.conf
    50-mod-http-dav-ext.conf
    50-mod-http-echo.conf
    50-mod-http-geoip2.conf
    50-mod-http-geoip.conf
    50-mod-http-image-filter.conf
    50-mod-http-subs-filter.conf
    50-mod-http-upstream-fair.conf
    50-mod-http-xslt-filter.conf
    50-mod-mail.conf
    50-mod-stream.conf

While Zulip doesn't actively use any of these, they likely don't do
any harm to simply be loaded -- they are loaded into every nginx by
default.

Having the `modules-enabled` include allows easier extension of the
server, as neither of the existing wildcard
includes (`/etc/nginx/conf.d/*.conf` and
`/etc/nginx/zulip-include/app.d/*.conf`) are in the top context, and
thus able to load modules.

(cherry picked from commit 62f234328d)
2022-05-17 15:07:58 -07:00
Alex Vandiver
602984f73e oneclick: Fail if the fab command fails.
(cherry picked from commit c93024cd5b)
2022-05-17 13:42:52 -07:00
Alex Vandiver
fcf4ede700 oneclick: Do not use a stale Zulip client.
Initializing the Zulip client opens a long-lived TCP connection due to
connection pooling in urllib3.  In GitHub Actions, the network kills
such requests after ~270s, making the later `send_message` call fail.

Use a single call to `zulip.Client()` early on to verify the
credentials, and do not cache the resulting client object.  Instead,
re-create it during the final step when it is needed, so we do not run
afoul of bad TCP connection state.

This would ideally be fixed via connection keepalive or retry at the
level of the Zulip module.

(cherry picked from commit ff647dff03)
2022-05-17 13:42:50 -07:00
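
A sketch of the pattern using the real `zulip` Python bindings; the slow step, stream name, and config path are placeholders:

```python
import time
import zulip

def build_and_announce() -> None:
    # A throwaway client verifies credentials up front; its pooled TCP
    # connection would be dead after ~270s of idle time on CI.
    zulip.Client(config_file="~/.zuliprc").get_profile()

    time.sleep(300)  # stand-in for the long build steps

    # Re-create the client right before use, avoiding the stale
    # connection entirely.
    client = zulip.Client(config_file="~/.zuliprc")
    client.send_message(
        {"type": "stream", "to": "release", "topic": "oneclick", "content": "Done!"}
    )
```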
Anders Kaseorg
318da92b59 mypy: Link some upstream issues for adding library type annotations.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit e6d85895ca)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
5de2969275 requirements: Upgrade Python requirements.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit f29553d809)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
44bee53f30 mypy: Use upstream types for asgiref, natsort.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit a7cdcbb6e3)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
1593ab6082 install: Resupport Ubuntu 22.04.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit e952641013)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
3bc1ad05f7 zulip-puppet-apply: Work around broken Puppet on Ubuntu 22.04.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 25c87cc7da)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
e124464fea requirements: Upgrade to Tornado 6.
Fixes #8913.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 7acb642fa5)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
9362158e04 run-dev: Fix types.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit f23bfe91c0)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
0ccc706f7a runtornado: Switch to asyncio event loop.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 6fd1a558b7)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
b4a0684201 queue: Use a thread-local Pika connection.
According to the documentation: “Pika does not have any notion of
threading in the code. If you want to use Pika with threading, make
sure you have a Pika connection per thread, created in that thread. It
is not safe to share one Pika connection across threads, with one
exception: you may call the connection method add_callback_threadsafe
from another thread to schedule a callback within an active pika
connection.”

https://pika.readthedocs.io/en/stable/faq.html

This also means that synchronous Django code running in Tornado will
use its own synchronous SimpleQueueClient rather than sharing the
asynchronous TornadoQueueClient, which is unfortunate but necessary as
they’re about to be on different threads.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit c263bfdb41)
2022-05-16 12:05:23 -07:00
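
The standard pattern this implies, sketched with `threading.local` (connection parameters are placeholders):

```python
import threading

import pika

_thread_data = threading.local()

def get_pika_connection() -> pika.BlockingConnection:
    # One Pika connection per thread, created in that thread; a single
    # connection must never be shared across threads.
    if not hasattr(_thread_data, "connection"):
        _thread_data.connection = pika.BlockingConnection(
            pika.ConnectionParameters(host="localhost")
        )
    return _thread_data.connection
```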
Anders Kaseorg
ad9187d9f7 cache: Instantiate only one BMemcached cache backend.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit c9faefd50e)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
edda368670 requirements: Upgrade asgiref.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 52b9c59875)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
f7f750e7a8 run-dev: Switch to asyncio event loop.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 0ef9309e92)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
ce8d8f3846 runtornado: Avoid deprecated IOLoop debugging methods.
IOLoop.set_blocking_log_threshold and IOLoop.handle_callback_exception
are removed in Tornado 6.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 5d69dafddb)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
d632e2c6bf tornado: Remove instrument_tornado_ioloop.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit e4bf7066f3)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
ac5e31ce04 tornado: Unfork tornado.autoreload.
We previously forked tornado.autoreload to work around a problem where
it would crash if you introduce a syntax error and not recover if you
fix it (https://github.com/tornadoweb/tornado/issues/2398).

A much more maintainable workaround for that issue, at least in
current Tornado, is to use tornado.autoreload as the main module.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit bded7180f7)
2022-05-16 12:05:23 -07:00
Anders Kaseorg
5f474e8425 run-dev: Avoid deprecated tornado.gen.engine.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 1c7954b452)
2022-05-16 12:05:23 -07:00
Alex Vandiver
33d43b695e docs: Clarify nginx extension points.
(cherry picked from commit 04d4ae9862)
2022-05-10 14:57:11 -07:00
Alex Vandiver
acf90db8b6 docs: Update proxy docs.
Notable changes:
 - Describe `X-Forwarded-For` by name.
 - Switch each specific proxy to numbered steps.
 - Link back to the `X-Forwarded-For` section in each proxy
 - Default to using HTTPS, not HTTP, for the backend.
 - Include the HTTP-to-HTTPS redirect code for all proxies; it is
   important that it happen at the proxy, as the backend is unaware of
   it.
 - Call out Apache2 modules which are necessary.
 - Specify where the dhparam.pem file can be found.
 - Call out the `Host:` header forwarding necessary, and document
   `USE_X_FORWARDED_HOST` if that is not possible.
 - Standardize on 20 minutes of connection timeout.
2022-05-04 14:44:11 -07:00
Alex Vandiver
40968fda49 settings: Stop enabling USE_X_FORWARDED_HOST by default.
This was added in 1fded25025, and is not
necessary for standard Zulip installs.  While both Host: and
X-Forwarded-Host: are nominally untrusted, there is no reason to
complicate the deployment by defaulting it on.
2022-05-04 14:44:11 -07:00
Alex Vandiver
4b3f68382c version: Update version after 5.2 release. 2022-05-03 18:00:50 -07:00
Alex Vandiver
b20797ed9c Release Zulip Server 5.2. 2022-05-04 00:27:55 +00:00
Alex Vandiver
e637ff626d version: Fix latest major version and blog post link. 2022-05-04 00:27:08 +00:00
Alex Vandiver
ca4cf94e79 settings: Remove misleading and irrelevant comment.
This comment was _originally_ for the `default` memcached cache, back
when it was added all of the way back in 0a84d7ac62.  9e64750083
made it a lie, and edc718951c made it even more confusing when it
removed the `default` cache configuration block, leaving the wrong
comment next to the wrong cache configuration block.

Banish the comment.

(cherry picked from commit 7cc9b93b91)
2022-05-03 16:12:08 -07:00
Alex Vandiver
789e960672 test_link_embed: Remove unnecessary TEST_CACHES.
The only purpose of this seems to be to not have to reset the cache;
fae59502ab added it without any explanation for why it is necessary.

Remove it, and explicitly flush the cache in the one place where it is
necessary.

(cherry picked from commit 9030d53acb)
2022-05-03 16:12:08 -07:00
Alex Vandiver
572138d983 caches: Remove unnecessary "in-memory" cache.
This cache was added in da33b72848 to serve as a replacement for the
durable database cache, in development; the previous commit has
switched that to be the non-durable memcached backend.

The special-case for "in-memory" in development is mostly-unnecessary
in contrast to memcached -- `./tools/run-dev.py` flushes memcached on
every startup.  This differs in behaviour slightly, in that if the
codepath is changed and `run-dev` restarts Django, the cache is not
cleared.  This seems an unlikely occurrence, however, and the code
cleanup from its removal is worth it.

(cherry picked from commit 56058f3316)
2022-05-03 16:12:08 -07:00
Alex Vandiver
df8ac69d90 caches: Cache link preview data in memcached, not in PostgreSQL.
The choice to cache these in the database dates back to c93f1d4eda,
with the comment added in da33b72848 while working around the
durability of the "database" cache in local development.

The values were stored in a durable cache, as they needed to be
ensured to persist between when they were inserted in
`get_link_embed_data` and when they were used in
`render_incoming_message` via `link_embed_data_from_cache`.

However, database accesses are not fast compared to memcached, and we
wish to avoid the overhead of the database connection from the
`embed_links` worker.  Specifically, making the connection may not be
thread-safe -- and in low-memory (and Docker) configurations, all
workers run as separate threads in a single process.  This can lead to
stalled database connections in `embed_links` workers, and failed
previews.

Since the previous commit made the durability of the cache no longer
necessary, this will have minimal effect; at worst, posting the same
URL twice, on either side of an upgrade, will result in two preview
fetches of it.

(cherry picked from commit 04ca2e92f7)
2022-05-03 16:12:08 -07:00
Alex Vandiver
9a9c6730ff preview: Use cache only as a non-durable cache, not an IPC.
The `get_link_embed_data` / `link_embed_data_from_cache` pair as
introduced in c93f1d4eda uses the cache
as a temporary store inside of the `embed_links` worker; this means
that it must be durable storage, or the worker will stall and re-fetch
the same links to preview them.

Switch to plumbing through the fetched URL embed data as a parameter
to the Markdown evaluation which uses them, rather than using the
cache as an intermediary.  This frees up the cache to be merely a
non-durable cache.

As a side-effect, this removes get_cache_with_key, and
link_embed_data_from_cache which was its only callsite.

(cherry picked from commit 351bdfaf78)
2022-05-03 16:12:04 -07:00
Alex Vandiver
5ff82c82ae preview: Use a dataclass for the embed data.
This is significantly cleaner than passing around `Dict[str, Any]` all
of the time.

(cherry picked from commit 327ff9ea0f)
2022-05-03 16:10:25 -07:00
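
For illustration, the kind of dataclass this refers to; the field names are a guess at the shape, not Zulip's exact schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UrlEmbedData:
    # Replaces a loosely-typed Dict[str, Any] with named, typed fields.
    type: Optional[str] = None
    title: Optional[str] = None
    description: Optional[str] = None
    image: Optional[str] = None
```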
Alex Vandiver
00b3da0a0c populate_db: Remove unnecessary pre-population of URL cache.
76deb30312 changed this to not just be the URL, but rather a
prefixed hash of the URL, but failed to update this location which
wrote to it.  This meant that this pre-population step was writing to
the wrong keys in the durable cache, and thus ineffective.

Then, da33b72848 switched the cache to be in-memory, making this
write to the wrong keys in an in-process memory store.  There is no
way to pre-fill this sort of cache, except at server start-up.

Finally, and most fundamentally, 8c0c9ca7a4 then disabled
`inline_url_embed_preview` by default, making the code entirely moot.

Remove the triply-unnecessary code.

(cherry picked from commit ede4a88b49)
2022-05-03 16:10:25 -07:00
Alex Vandiver
9ded5be2a7 cache: Make the cache_name=None behaviour clearer.
`django.core.cache.cache` is equal to
`django.core.cache.caches["default"]`; the latter is more
understandable in context.

(cherry picked from commit aaa58a49db)
2022-05-03 16:10:25 -07:00
Alex Vandiver
0d0aaf3c92 markdown: Use named parameters to add_a helper.
This has enough parameters that it benefits from making which is which
explicit.

(cherry picked from commit 661c333377)
2022-05-03 16:10:24 -07:00
Alex Vandiver
26907e1c2e markdown: Clarify url parameter of "add_a" helper.
(cherry picked from commit 452a30305d)
2022-05-03 16:10:24 -07:00
Aman Agrawal
953f3c8c1d resize: Don't use visible selector to find element states.
This change decreases the time required to open compose
after clicking a message. The amount of time saved varies by machine.

The time reduction was around 0.4s to 0.6s for me with a 6x CPU
slowdown. This may not sound convincing, but the profile uploaded in
#21979 clearly shows that the root cause of a message click taking 10s
was the `:visible` query.

Fixes #21979
2022-05-03 09:36:32 -07:00
Alex Vandiver
abf82392a3 puppet: Fix non-replicated PostgreSQL 10 and 11 configuration.
6f5ae8d13d removed the `$replication` variable from the
configurations of PostgreSQL 12 and higher, but left it in the
templates for PostgreSQL 10 and 11.  Because `undef != ''`,
deployments on PostgreSQL 10 and 11 started trying to push to S3
backups, regardless of if they were configured, leaving frequent log
messages like:

```
2022-04-30 12:45:47.805 UTC [626d24ec.1f8db0]: [107-1] LOG: archiver process (PID 2086106) exited with exit code 1
2022-04-30 12:45:49.680 UTC [626d24ee.1f8dc3]: [18-1] LOG: checkpoint complete: wrote 19 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=1.910 s, sync=0.022 s, total=1.950 s; sync files=16, longest=0.018 s, average=0.002 s; distance=49 kB, estimate=373 kB
/usr/bin/timeout: failed to run command "/usr/local/bin/env-wal-g": No such file or directory
2022-04-30 12:46:17.852 UTC [626d2f99.1fd4e9]: [1-1] FATAL: archive command failed with exit code 127
2022-04-30 12:46:17.852 UTC [626d2f99.1fd4e9]: [2-1] DETAIL: The failed archive command was: /usr/bin/timeout 10m /usr/local/bin/env-wal-g wal-push pg_wal/000000010000000300000080
```

Switch the PostgreSQL 10 and 11 configuration to check
`s3_backups_bucket`, like the other versions.

(cherry picked from commit d891b9590a)
2022-05-02 17:58:27 -07:00
Alex Vandiver
fb9cdf0f56 compare-settings-to-template: Handle prod_settings_template renaming.
(cherry picked from commit 3476f63dca)
2022-04-28 15:46:22 -07:00
Alex Vandiver
df80303a64 compare-settings-to-template: Simplify and dedent logic.
(cherry picked from commit b6b6faa404)
2022-04-28 15:46:22 -07:00
Alex Vandiver
d7fb2292eb compare-settings-to-template: Fetch 100 per pagination.
(cherry picked from commit d205050ab0)
2022-04-28 15:46:22 -07:00
Alex Vandiver
827d1d9d3b compare-settings-to-template: Paginate through all tags.
The default page size is 30, which means this only goes back to 4.6 at
present, due to starting with `shared-...` and old `enterprise-...`
tags.

(cherry picked from commit d79776f80d)
2022-04-28 15:46:22 -07:00
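
The pagination fix, sketched against the GitHub tags API; the repo slug and helper name are illustrative, and 100 is GitHub's per-page maximum:

```python
import requests

def fetch_all_tags(repo: str = "zulip/zulip") -> list:
    tags, page = [], 1
    while True:
        response = requests.get(
            f"https://api.github.com/repos/{repo}/tags",
            params={"per_page": 100, "page": page},
            timeout=30,
        )
        response.raise_for_status()
        batch = response.json()
        if not batch:  # an empty page means we have paged past the last tag
            return tags
        tags.extend(batch)
        page += 1
```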
Alex Vandiver
64b563e1dc prod_settings_template: Switch to double quotes in commented lines. 2022-04-28 12:40:52 -07:00
Alex Vandiver
92fdfffa4d prod_settings_template: Add some missing quotes in commented lines. 2022-04-28 12:40:50 -07:00
Alex Vandiver
1767a0bcb1 puppet: Check that certbot certs are in use before fixing them.
It is possible to have previously installed certbot, but switched back
to using self-signed certificates -- in which case renewing them using
certbot may fail.

Verify that the certificate is a symlink into certbot's output
directory before running `fix-standalone-certbot`.

(cherry picked from commit c97162e485)
2022-04-28 00:53:56 -07:00
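
A sketch of the verification; the certificate path is the one standard Zulip installs use, but treat both paths as assumptions:

```python
import os

def using_certbot_certs(
    cert_path: str = "/etc/ssl/certs/zulip.combined-chain.crt",
) -> bool:
    # Only try to renew via certbot if the active certificate is a
    # symlink into certbot's output directory.
    return os.path.islink(cert_path) and os.readlink(cert_path).startswith(
        "/etc/letsencrypt/live/"
    )
```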
Anders Kaseorg
6736b35f5f puppet: ‘supervisorctl stop all’ before restarting Supervisor.
This fixes a failure of the 3.4 upgrade test running on Ubuntu 20.04
with Supervisor 4.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit a7e6cb7705)
2022-04-27 12:15:07 -07:00
Mateusz Mandera
c34110f88c digest: Don't send emails to deactivated users, even if queued. 2022-04-15 14:33:16 -07:00
Mateusz Mandera
fc4102d779 test_digest: Fix typo in a comment. 2022-04-15 14:33:15 -07:00
Anders Kaseorg
4d0ddf483d actions: Delete zerver.lib.actions.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit cc30ed8ec7)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
9c927e40d6 actions: Move part into zerver.lib.test_classes.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 729019acdd)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
4d21bad033 actions: Split out zerver.actions.create_realm.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit e01faebd7e)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
472428621a actions: Split out zerver.actions.realm_domains.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 53f4a395bc)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
37b40df30c actions: Split out zerver.actions.realm_settings.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 59f6b090c7)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
87c58f8e23 actions: Move part into zerver.forms.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 12de8d797e)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
eb5832f7a4 actions: Split out zerver.actions.message_edit.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit eda000899b)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
a9b6d9990a actions: Split out zerver.actions.muted_users.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 5d1a5a3877)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
f4fe1660f3 actions: Split out zerver.actions.bots.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit ec174dfb47)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
3bb3a415a8 actions: Split out zerver.actions.message_flags.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit eb4e9fe1e7)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
cca19fedf0 actions: Split out zerver.actions.reactions.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit e5500a2226)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
c59eb24674 actions: Split out zerver.actions.create_user.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit cbad5739ab)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
c530f1b582 actions: Split out zerver.actions.streams.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 5fcbc412cf)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
3b48bcca95 actions: Split out zerver.actions.message_send.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 975066e3f0)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
50ca78447e actions: Split out zerver.actions.user_settings.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit ec6355389a)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
b4d9cd4e0f actions: Split out zerver.actions.users.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit d7981dad62)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
7c5e017c14 actions: Split out zerver.actions.custom_profile_fields.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit bbce879c81)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
7ba639960d actions: Move part into zerver.lib.bulk_create.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit f6a06ba6e3)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
76641a5f21 actions: Move part into zerver.lib.message.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit c041b68578)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
b54240d6cf actions: Move part into zerver.lib.subscription_info.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 9dd7e34ab3)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
508c676f61 actions: Split out zerver.actions.presence.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit b7adfb02f6)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
b60ba10351 actions: Move part into zerver.lib.users.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit ab04068294)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
b8567d8d8f actions: Split out zerver.actions.uploads.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit e230ea2598)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
025219da16 actions: Move part into zerver.lib.streams.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit a29f1b39da)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
5bcb52390c actions: Split out zerver.actions.user_activity.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 6168c0110a)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
90cbf900d4 actions: Split out zerver.actions.user_topics.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit df4849bb15)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
ddf76baf89 actions: Split out zerver.actions.realm_emoji.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 385616f27f)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
be7169bed0 actions: Split out zerver.actions.realm_export.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 8fc5922ebd)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
bdc67055b1 actions: Split out zerver.actions.realm_icon.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 3d7aa98c45)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
f5b96c8551 actions: Split out zerver.actions.realm_logo.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 7f088f3403)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
2e48056a9c actions: Split out zerver.actions.invites.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit ca8d374e21)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
e0f9f58411 actions: Split out zerver.actions.alert_words.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 241463e215)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
f29b1d3192 actions: Split out zerver.actions.default_streams.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 1ac7496855)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
b1e8ead908 actions: Split out zerver.actions.hotspots.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 12130da339)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
5cb7acec36 actions: Split out zerver.actions.realm_linkifiers.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 975f5a3c2d)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
0cb261ac6b actions: Split out zerver.actions.realm_playgrounds.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit e887abcf41)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
b58a5b3bf3 actions: Split out zerver.actions.submessage.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 3a135b04d9)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
76dc8bc9f7 actions: Split out zerver.actions.typing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 62d3b5bfd5)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
bf5f006971 actions: Split out zerver.actions.user_groups.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 372c10f5f3)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
500bd04e11 actions: Split out zerver.actions.video_calls.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 90cae59ea6)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
d35bdd312f actions: Split out zerver.lib.recipient_users.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit c136eebb33)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
51d9bbca1e actions: Split out zerver.lib.user_counts.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 703186c339)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
f3c9a5019b actions: Split out zerver.lib.user_message.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 05195c02c1)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
36fa5e0385 actions: Move part into zerver.models.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 7f00aa078e)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
faea77d03f actions: Split out zerver.lib.sounds.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 6a70f75587)
2022-04-15 10:08:19 -07:00
Anders Kaseorg
ea9ba8b24c actions: Add zerver/actions directory.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit dd8b1aaba6)
2022-04-15 10:08:19 -07:00
Greg Price
051f1c3120 docs: Update apps' compatibility threshold to 3.0, from 2.1.0.
Zulip Server 3.0 is now about 21 months old, which is more than
18 months.  Per the general policy in the "Client apps" section
below, that means it's time to drop support for older versions.

We released 4.0 in 2021-05, so around 2022-11 we can update this
further to say 4.0.
2022-04-14 11:54:44 -07:00
Alex Vandiver
fb03c3205e timeout: Add test coverage.
(cherry picked from commit e6e4b7b3ef)
2022-04-13 20:47:34 -07:00
Alex Vandiver
662396d2c5 timeout: Minor comment cleanups.
We remove the StackOverflow link because it is now so dated as to be
irrelevant -- it does not use `self.ident`, and cargo-cults the return
value of PyThreadState_SetAsyncExc.

(cherry picked from commit 04159a674c)
2022-04-13 20:47:34 -07:00
Alex Vandiver
4a5204a967 timeout: Warn if the thread did not exit.
As noted in the docstring for this function, the timeout is
best-effort only -- if the thread is blocked in a syscall, it will not
service the exception until it returns.  It can also choose to catch
and ignore the TimeoutExpired; in either case it will still be running
even after the `timeout()` function returns.

Raising a bare TimeoutExpired is still somewhat accurate, but
obscures that the backend thread may still be running along merrily.
Notice such cases, and log a warning about them.

(cherry picked from commit 3af2c8d9a3)
2022-04-13 20:47:34 -07:00
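
A sketch of the check; `PyThreadState_SetAsyncExc` is the real CPython API for throwing an exception into another thread, with the builtin `TimeoutError` standing in for Zulip's `TimeoutExpired`:

```python
import ctypes
import logging
import threading

def interrupt_thread(thread: threading.Thread) -> None:
    # Best-effort: the thread only sees the exception once it returns
    # from any blocking syscall, and it may catch and ignore it.
    ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(thread.ident), ctypes.py_object(TimeoutError)
    )
    thread.join(timeout=1)
    if thread.is_alive():
        logging.warning("Thread %s is still running after timeout", thread.name)
```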
Alex Vandiver
44a3cd8dd3 timeout: Re-raise from where the TimeoutExpired hit the thread.
Having just thrown an exception into the thread, it is often useful to
know _what_ was the slow code that we interrupted.  Raising a bare
TimeoutExpired here obscures that information, as any `exc_info` will
end there.

Examine the thread for any exception information, and use that to
re-raise.  This exception information is not guaranteed to exist -- if
the thread didn't respond to the exception in time, or caught it, for
instance.

(cherry picked from commit e714264756)
2022-04-13 20:47:34 -07:00
Alex Vandiver
efddda2609 timeout: Remove cargo-culted and impossible-to-reach code block.
The quote in question originates in python/cpython@b8b6d0c2c6, when
the code was added.  However, the code stopped having that comment,
and was no longer able to return anything but 1 or 0, starting in
python/cpython@4643c2fda1 -- Python 2.5.

Remove the block.

(cherry picked from commit 85eeaf5f18)
2022-04-13 20:47:34 -07:00
Alex Vandiver
ecfcc20351 send_email: Only warn if EMAIL_HOST_PASSWORD is unset, not "".
Some email hosts actually do want an empty password; since the default
is `None`, we should key on that, and not just being false-y.

(cherry picked from commit ee04f42897)
2022-04-13 20:46:38 -07:00
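
The distinction in code, as a sketch (the warning text is illustrative):

```python
import logging

from django.conf import settings

# "" is a legitimate password for some SMTP hosts; only the unset
# default of None suggests a misconfiguration worth warning about.
if settings.EMAIL_HOST_PASSWORD is None:
    logging.warning("EMAIL_HOST_PASSWORD is unset; outgoing email may fail")
```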
Alex Vandiver
fa68acd669 settings: Use default database_user value when looking up.
Failure to pull the default "zulip" value here can lead to
accidentally applying a `postgres_password` value which is unnecessary
and may never work.

For consistency, always skip password auth attempts for the "zulip"
user on localhost, even if the password is set.  This mirrors the
behavior of `process_fts_updates`.

(cherry picked from commit 828c9d1c18)
2022-04-13 20:44:56 -07:00
Mateusz Mandera
9c88f6c4ce scim: Temporarily stop running SCIM change operations atomically.
do_deactivate_user can't be run in an atomic block, due to concerns
around revoking sessions in a transaction. See
62ba8e455d for more details.

Without the change in this commit, the process of deactivating a user
via SCIM is broken.
2022-04-13 16:02:04 -07:00
Mateusz Mandera
8aa6958923 docs: Fix incorrect path to SAML certs in SAML Keycloak instructions.
This was supposed to be /etc/zulip/saml/idps/
2022-04-13 16:01:25 -07:00
Alex Vandiver
88e2f64869 docs: Fix typo.
We don't suggest self-hosing, unless via a sprinkler in warm weather.
2022-04-13 11:38:35 -07:00
Alex Vandiver
c7df68eb48 check-database-compatibility: Sort and prettify output.
(cherry picked from commit 09860dc284)
2022-04-06 14:16:55 -07:00
Alex Vandiver
599094bcf5 docs: Fold FTS index updating into the upgrade step.
On the Debian 10 -> 11 upgrade, the server is running Zulip 4.x, which
lets us pass `--audit-fts-indexes` to `upgrade-zulip-stage-2` rather
than run the command as a separate step.

(cherry picked from commit 488aaef9b7)
2022-04-06 14:16:55 -07:00
Alex Vandiver
4a4be8620c docs: Upgrade Zulip before trying to fix collations.
The reindex-textual-data tool needs the venv to be able to run;
switch the order of the last two steps, making them now match the
Debian 9 -> 10 and 10 -> 11 upgrades.

Ref #21296.

(cherry picked from commit 1e3a6984a4)
2022-04-06 14:16:54 -07:00
Aman Agrawal
3dc29fbc76 message_edit: Fix false sub/unsub bookend on using a near link.
We were not setting the `historical` flag correctly for
messages fetched via `json_fetch_raw_message` when the user didn't
have any UserMessage row.

Extended the relevant tests to also check message flags.
2022-04-04 12:34:43 -07:00
Alex Vandiver
c49dfc5679 version: Update version after 5.1 release. 2022-04-01 23:13:55 -07:00
Alex Vandiver
08c2d9a766 Release Zulip Server 5.1. 2022-04-02 05:45:55 +00:00
Alex Vandiver
d9e7feae0a migrations: Remove the possibly-duplicated emoji re-uploading.
In 85e531e377, we duplicated this block
of migration code to fix a bug, but moving it (aka deleting the
original copy) is a cleaner solution.

(cherry picked from commit 35e27aef4a)
2022-04-01 22:31:54 -07:00
Alex Vandiver
58e29a9ca0 supervisor: 'foo:*' also matches 'foo'.
7c4293a7d3 switched to checking if the
service was already running, and to using `supervisorctl start` if it
was not.

Unfortunately, `list_supervisor_processes("zulip-tornado:*")` did not
include `zulip-tornado`, and as such a non-sharded process was always
considered to _not_ be running, and was thus started, not restarted.
Starting an already-started service is a no-op, and thus non-sharded
tornado processes were never restarted.

The observed behaviour is that requests to the tornado process attempt
to load the user from the cache, with a different prefix from Django,
and immediately invalidate the session and eject the user back to the
login page.

Fix the `list_supervisor_processes` logic to match without the
trailing `:*`.

(cherry picked from commit 65e19c4fbd)
2022-04-01 17:25:29 -07:00
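
The corrected matching logic, sketched (the function name is illustrative):

```python
def group_matches(pattern: str, process_name: str) -> bool:
    # "zulip-tornado:*" must match both the sharded "zulip-tornado:9801"
    # style processes and a plain, non-sharded "zulip-tornado".
    if pattern.endswith(":*"):
        group = pattern[:-2]
        return process_name == group or process_name.startswith(group + ":")
    return process_name == pattern
```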
Alex Vandiver
266dbad737 check-database-compatibility: Ignore squashed and renamed migrations.
Fixes: #21596.
(cherry picked from commit eb31681934)
2022-04-01 17:03:38 -07:00
Tim Abbott
4db1aa75ce migrations: Repeat part of migration 0376.
The blockquote explains the motivation for this change in detail.

Fixes #21608.

(cherry picked from commit 85e531e377)
2022-04-01 15:21:05 -07:00
Alex Vandiver
fc0d5fcfd5 upgrade: Mark puppet as having started the server.
We previously used restart-server if puppet was run, as a nod to the
fact that `supervisor reread && supervisor update` will _start_
service groups that were modified, even if they were previously
stopped; this is because they are marked as `autostart=true`, which is
honored on service change.

However, upgrades want to run while there are no services running.  If
puppet is run, explicitly set the server as potentially being "up", so
that a `shutdown_server()` before migrations, if they exist, will stop
services.

(cherry picked from commit 0af00a3233)
2022-03-31 17:36:57 -07:00
Alex Vandiver
00382078ad upgrade: Move the shutdown_server calls to where they are relevant.
shutdown_server is a noop if the server is already stopped; placing
these in each block makes the logic more apparent.

(cherry picked from commit e9596637e7)
2022-03-31 17:36:55 -07:00
Tim Abbott
9313e8f909 i18n: Update translation data from Transifex. 2022-03-31 13:18:55 -07:00
Alex Vandiver
9bb31433f1 data_import: Fix bot email address de-duplication.
4815f6e28b tried to de-duplicate bot
email addresses, but instead caused duplicates to crash:

```
Traceback (most recent call last):
  File "./manage.py", line 157, in <module>
    execute_from_command_line(sys.argv)
  File "./manage.py", line 122, in execute_from_command_line
    utility.execute()
  File "/srv/zulip-venv-cache/56ac6adf406011a100282dd526d03537be84d23e/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 413, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/srv/zulip-venv-cache/56ac6adf406011a100282dd526d03537be84d23e/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/base.py", line 354, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/srv/zulip-venv-cache/56ac6adf406011a100282dd526d03537be84d23e/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/base.py", line 398, in execute
    output = self.handle(*args, **options)
  File "/home/zulip/deployments/2022-03-16-22-25-42/zerver/management/commands/convert_slack_data.py", line 59, in handle
    do_convert_data(path, output_dir, token, threads=num_threads)
  File "/home/zulip/deployments/2022-03-16-22-25-42/zerver/data_import/slack.py", line 1320, in do_convert_data
    ) = slack_workspace_to_realm(
  File "/home/zulip/deployments/2022-03-16-22-25-42/zerver/data_import/slack.py", line 141, in slack_workspace_to_realm
    ) = users_to_zerver_userprofile(slack_data_dir, user_list, realm_id, int(NOW), domain_name)
  File "/home/zulip/deployments/2022-03-16-22-25-42/zerver/data_import/slack.py", line 248, in users_to_zerver_userprofile
    email = get_user_email(user, domain_name)
  File "/home/zulip/deployments/2022-03-16-22-25-42/zerver/data_import/slack.py", line 406, in get_user_email
    return SlackBotEmail.get_email(user["profile"], domain_name)
  File "/home/zulip/deployments/2022-03-16-22-25-42/zerver/data_import/slack.py", line 85, in get_email
    email_prefix += cls.duplicate_email_count[email]
TypeError: can only concatenate str (not "int") to str
```

Fix the stringification, make it case-insensitive, append with a dash
for readability, and add tests for all of the above.
2022-03-31 11:29:53 -07:00
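
A sketch of the corrected de-duplication; the real `SlackBotEmail.get_email` takes a Slack profile dict, so this is simplified:

```python
from collections import defaultdict
from typing import Dict

class SlackBotEmail:
    duplicate_email_count: Dict[str, int] = defaultdict(int)

    @classmethod
    def get_email(cls, email_prefix: str, domain_name: str) -> str:
        key = email_prefix.lower()  # count duplicates case-insensitively
        cls.duplicate_email_count[key] += 1
        count = cls.duplicate_email_count[key]
        if count > 1:
            # str(), not int concatenation -- and a dash for readability.
            email_prefix += "-" + str(count)
        return f"{email_prefix}-bot@{domain_name}"
```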
Aman Agrawal
4c313ff652 navbar_alerts: Adjust height of recent topics when alert is visible.
Fixes #21619

We need to adjust the height of recent topics along with the app;
otherwise the container becomes separately scrollable, due to it
overflowing the app height.
2022-03-31 11:27:03 -07:00
Heidi Ahlberg
9ba6664c44 i18n: Fix missing translation tags in stream creation view. 2022-03-31 10:37:36 -07:00
Anders Kaseorg
8616c2e092 changelog: Remove broken link.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 7de1e7c477)
2022-03-30 20:38:16 -07:00
Anders Kaseorg
6ca04586c1 puppet: Do not ensure Chrony is running.
Commit f6d27562fa (#21564) tried to
ensure Chrony is running, which fails in containers where Chrony
doesn’t have permission to update the host clock.

The Debian package should still attempt to start it, and Puppet should
still restart it when chrony.conf is modified.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
2022-03-30 11:38:05 -07:00
Sahil Batra
04fc7e293e settings: Fix push notifications tooltip being incorrectly shown.
We were showing the push notifications tooltip in the user default
settings section even if push notifications were configured on the
server.

The bug was because the setting value was undefined in the template
used for user default settings section, so this commit fixes the bug
by correctly passing the setting value to relevant template file.

Fixes #21602.
2022-03-30 11:31:49 -07:00
Tim Abbott
7809ecd38e version: Update version after 5.0 release. 2022-03-29 08:31:25 -07:00
8880 changed files with 536367 additions and 1228868 deletions

.browserslistrc (new file, 5 lines)

@@ -0,0 +1,5 @@
+> 0.15%
+> 0.15% in US
+last 2 versions
+Firefox ESR
+not dead and supports async-functions

(unnamed file: spell-check ignore list)

@@ -16,17 +16,3 @@ fpr
alls
nd
ot
womens
vise
falsy
ro
derails
forin
uper
slac
couldn
ges
assertIn
thirdparty
asend
COO

.editorconfig

@@ -8,11 +8,10 @@ indent_style = space
insert_final_newline = true
trim_trailing_whitespace = true
[[shell]]
binary_next_line = true
switch_case_indent = true
binary_next_line = true # for shfmt
switch_case_indent = true # for shfmt
[{*.{cjs,cts,js,json,mjs,mts,ts},check-openapi}]
[{*.{js,json,ts},check-openapi}]
max_line_length = 100
[*.{py,pyi}]

.eslintignore (new file, 14 lines)

@@ -0,0 +1,14 @@
+# This is intended for generated files and vendored third-party files.
+# For our source code, instead of adding files here, consider using
+# specific eslint-disable comments in the files themselves.
+/docs/_build
+/static/generated
+/static/third
+/static/webpack-bundles
+/var/*
+!/var/puppeteer
+/var/puppeteer/*
+!/var/puppeteer/test_credentials.d.ts
+/zulip-current-venv
+/zulip-py3-venv

.eslintrc.json (new file, 276 lines)

@@ -0,0 +1,276 @@
{
"env": {
"es2020": true,
"node": true
},
"extends": [
"eslint:recommended",
"plugin:import/errors",
"plugin:import/warnings",
"plugin:no-jquery/recommended",
"plugin:no-jquery/deprecated",
"plugin:unicorn/recommended",
"prettier"
],
"parser": "@babel/eslint-parser",
"parserOptions": {
"warnOnUnsupportedTypeScriptVersion": false,
"sourceType": "unambiguous"
},
"plugins": ["formatjs", "no-jquery"],
"settings": {
"additionalFunctionNames": ["$t", "$t_html"],
"no-jquery": {
"collectionReturningPlugins": {
"expectOne": "always"
},
"variablePattern": "^\\$(?!t$|t_html$)."
}
},
"reportUnusedDisableDirectives": true,
"rules": {
"array-callback-return": "error",
"arrow-body-style": "error",
"block-scoped-var": "error",
"consistent-return": "error",
"curly": "error",
"dot-notation": "error",
"eqeqeq": "error",
"formatjs/enforce-default-message": ["error", "literal"],
"formatjs/enforce-placeholders": [
"error",
{"ignoreList": ["b", "code", "em", "i", "kbd", "p", "strong"]}
],
"formatjs/no-id": "error",
"guard-for-in": "error",
"import/extensions": "error",
"import/first": "error",
"import/newline-after-import": "error",
"import/no-self-import": "error",
"import/no-useless-path-segments": "error",
"import/order": [
"error",
{
"alphabetize": {"order": "asc"},
"newlines-between": "always"
}
],
"import/unambiguous": "error",
"lines-around-directive": "error",
"new-cap": "error",
"no-alert": "error",
"no-array-constructor": "error",
"no-bitwise": "error",
"no-caller": "error",
"no-catch-shadow": "error",
"no-constant-condition": ["error", {"checkLoops": false}],
"no-div-regex": "error",
"no-duplicate-imports": "error",
"no-else-return": "error",
"no-eq-null": "error",
"no-eval": "error",
"no-implicit-coercion": "error",
"no-implied-eval": "error",
"no-inner-declarations": "off",
"no-iterator": "error",
"no-jquery/no-parse-html-literal": "error",
"no-label-var": "error",
"no-labels": "error",
"no-loop-func": "error",
"no-multi-str": "error",
"no-native-reassign": "error",
"no-new-func": "error",
"no-new-object": "error",
"no-new-wrappers": "error",
"no-octal-escape": "error",
"no-plusplus": "error",
"no-proto": "error",
"no-return-assign": "error",
"no-script-url": "error",
"no-self-compare": "error",
"no-sync": "error",
"no-throw-literal": "error",
"no-undef-init": "error",
"no-unneeded-ternary": ["error", {"defaultAssignment": false}],
"no-unused-expressions": "error",
"no-use-before-define": ["error", {"functions": false}],
"no-useless-concat": "error",
"no-useless-constructor": "error",
"no-var": "error",
"object-shorthand": "error",
"one-var": ["error", "never"],
"prefer-arrow-callback": "error",
"prefer-const": [
"error",
{
"ignoreReadBeforeAssign": true
}
],
"radix": "error",
"sort-imports": ["error", {"ignoreDeclarationSort": true}],
"spaced-comment": ["error", "always", {"markers": ["/"]}],
"strict": "error",
"unicorn/consistent-function-scoping": "off",
"unicorn/explicit-length-check": "off",
"unicorn/filename-case": "off",
"unicorn/no-await-expression-member": "off",
"unicorn/no-nested-ternary": "off",
"unicorn/no-null": "off",
"unicorn/no-process-exit": "off",
"unicorn/no-useless-undefined": "off",
"unicorn/number-literal-case": "off",
"unicorn/numeric-separators-style": "off",
"unicorn/prefer-module": "off",
"unicorn/prefer-node-protocol": "off",
"unicorn/prefer-spread": "off",
"unicorn/prefer-ternary": "off",
"unicorn/prevent-abbreviations": "off",
"valid-typeof": ["error", {"requireStringLiterals": true}],
"yoda": "error"
},
"overrides": [
{
"files": ["frontend_tests/node_tests/**", "frontend_tests/zjsunit/**"],
"rules": {
"no-jquery/no-selector-prop": "off"
}
},
{
"files": ["frontend_tests/puppeteer_lib/**", "frontend_tests/puppeteer_tests/**"],
"globals": {
"$": false,
"zulip_test": false
}
},
{
"files": ["static/js/**"],
"globals": {
"StripeCheckout": false
}
},
{
"files": ["**/*.ts"],
"extends": [
"plugin:@typescript-eslint/recommended-requiring-type-checking",
"plugin:import/typescript"
],
"parserOptions": {
"project": "tsconfig.json"
},
"settings": {
"import/resolver": {
"node": {
"extensions": [".ts", ".d.ts", ".js"] // https://github.com/import-js/eslint-plugin-import/issues/2267
}
}
},
"globals": {
"JQuery": false
},
"rules": {
// Disable base rule to avoid conflict
"no-duplicate-imports": "off",
"no-unused-vars": "off",
"no-useless-constructor": "off",
"no-use-before-define": "off",
"@typescript-eslint/array-type": "error",
"@typescript-eslint/consistent-type-assertions": "error",
"@typescript-eslint/consistent-type-imports": "error",
"@typescript-eslint/explicit-function-return-type": [
"error",
{"allowExpressions": true}
],
"@typescript-eslint/member-ordering": "error",
"@typescript-eslint/no-duplicate-imports": "off",
"@typescript-eslint/no-explicit-any": "off",
"@typescript-eslint/no-extraneous-class": "error",
"@typescript-eslint/no-non-null-assertion": "off",
"@typescript-eslint/no-parameter-properties": "error",
"@typescript-eslint/no-unnecessary-qualifier": "error",
"@typescript-eslint/no-unused-vars": ["error", {"varsIgnorePattern": "^_"}],
"@typescript-eslint/no-unsafe-argument": "off",
"@typescript-eslint/no-unsafe-assignment": "off",
"@typescript-eslint/no-unsafe-call": "off",
"@typescript-eslint/no-unsafe-member-access": "off",
"@typescript-eslint/no-unsafe-return": "off",
"@typescript-eslint/no-use-before-define": "error",
"@typescript-eslint/no-useless-constructor": "error",
"@typescript-eslint/prefer-includes": "error",
"@typescript-eslint/prefer-string-starts-ends-with": "error",
"@typescript-eslint/promise-function-async": "error",
"@typescript-eslint/unified-signatures": "error",
"no-undef": "error"
}
},
{
"files": ["**/*.d.ts"],
"rules": {
"import/unambiguous": "off"
}
},
{
"files": ["frontend_tests/**"],
"globals": {
"CSS": false,
"document": false,
"navigator": false,
"window": false
},
"rules": {
"formatjs/no-id": "off",
"new-cap": "off",
"no-sync": "off",
"unicorn/prefer-prototype-methods": "off"
}
},
{
"files": ["tools/debug-require.js"],
"env": {
"browser": true,
"es2020": false
},
"rules": {
// Don't require ES features that PhantomJS doesn't support
// TODO: Toggle these settings now that we don't use PhantomJS
"no-var": "off",
"object-shorthand": "off",
"prefer-arrow-callback": "off"
}
},
{
"files": ["static/**"],
"env": {
"browser": true,
"node": false
},
"rules": {
"no-console": "error"
},
"settings": {
"import/resolver": "webpack"
}
},
{
"files": ["static/shared/**"],
"env": {
"browser": false,
"shared-node-browser": true
},
"rules": {
"import/no-restricted-paths": [
"error",
{
"zones": [
{
"target": "./static/shared",
"from": ".",
"except": ["./node_modules", "./static/shared"]
}
]
}
]
}
}
]
}
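
A quick aside on two of the stricter TypeScript rules configured above, with a minimal sketch of code that passes them (the file and symbol names here are hypothetical, not from the Zulip codebase): `@typescript-eslint/consistent-type-imports` requires type-only imports to use the `import type` form, and the `varsIgnorePattern: "^_"` option exempts underscore-prefixed variables from `@typescript-eslint/no-unused-vars`.

// Hypothetical example.ts illustrating two rules from the configuration above.
import type {Profile} from "./profile"; // type-only import, as required by consistent-type-imports

export function display_name(profile: Profile): string {
    // The leading underscore exempts this unused variable via varsIgnorePattern "^_".
    const _debug_user_id = profile.user_id;
    return profile.full_name;
}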

.gitattributes

@@ -20,6 +20,7 @@ corporate/tests/stripe_fixtures/*.json -diff
*.eot binary
*.woff binary
*.woff2 binary
*.svg binary
*.ttf binary
*.png binary
*.otf binary
@@ -29,6 +30,3 @@ corporate/tests/stripe_fixtures/*.json -diff
*.bmp binary
*.mp3 binary
*.pdf binary
# Treat SVG files as code for diffing purposes.
*.svg diff


@@ -1,10 +0,0 @@
---
name: Issue discussed in the Zulip development community
about: Bug report, feature or improvement already discussed on chat.zulip.org.
---
<!-- Issue description -->
<!-- Link to a message in the chat.zulip.org discussion. Message links will still work even if the topic is renamed or resolved. Link back to this issue from the chat.zulip.org thread. -->
CZO thread


@@ -1,18 +0,0 @@
---
name: Bug report
about: A concrete bug report with steps to reproduce the behavior. (See also "Possible bug" below.)
labels: ["bug"]
---
<!-- Describe what you were expecting to see, what you saw instead, and steps to take in order to reproduce the buggy behavior. Screenshots can be helpful. -->
<!-- Check the box for the version of Zulip you are using (see https://zulip.com/help/view-zulip-version).-->
**Zulip Server and web app version:**
- [ ] Zulip Cloud (`*.zulipchat.com`)
- [ ] Zulip Server 10.x
- [ ] Zulip Server 9.x
- [ ] Zulip Server 8.x
- [ ] Zulip Server 7.x or older
- [ ] Other or not sure


@@ -1,6 +0,0 @@
---
name: Feature or improvement request
about: A specific proposal for a new feature or improvement. (See also "Feature suggestion or feedback" below.)
---
<!-- Describe the proposal, including how it would help you or your organization. -->


@@ -1,14 +0,0 @@
blank_issues_enabled: true
contact_links:
- name: Possible bug
url: https://zulip.readthedocs.io/en/latest/contributing/reporting-bugs.html
about: Report unexpected behavior that may be a bug.
- name: Feature suggestion or feedback
url: https://zulip.readthedocs.io/en/latest/contributing/suggesting-features.html
about: Start a discussion about your idea for improving Zulip.
- name: Issue with running or upgrading a Zulip server
url: https://zulip.readthedocs.io/en/latest/production/troubleshooting.html
about: We provide free, interactive support for the vast majority of questions about running a Zulip server.
- name: Other support requests and sales questions
url: https://zulip.com/help/contact-support
about: Contact us — we're happy to help!

.github/funding.json

@@ -1,82 +0,0 @@
{
"version": "v1.0.0",
"entity": {
"type": "organisation",
"role": "steward",
"name": "Kandra Labs, Inc.",
"email": "support@zulip.com",
"description": "Guiding the Zulip community in developing a world-class organized team chat product with apps for every major desktop and mobile platform requires leadership from a talented, dedicated team. We believe that the only sustainable model is for our core team to be compensated fairly for their time. We have thus founded a company (Kandra Labs) to steward and financially support Zulips development. We are growing our business sustainably, without venture capital funding. VCs are incentivized to push companies to gamble for explosive growth. Often, the result is that a company with a useful product burns rapidly through its resources and goes out of business. We have built Zulip as a sustainable business (also supported by SBIR grants from the US National Science Foundation), and are being thoughtful about our pace of spending. Funding our company without venture capital also allows us to live by our values, without investor pressure to compromise them when doing so might be “good business” or “what everyone does”.",
"webpageUrl": {
"url": "https://zulip.com/values/",
"wellKnown": "https://zulip.com/.well-known/funding-manifest-urls"
}
},
"projects": [
{
"guid": "zulip",
"name": "Zulip",
"description": "Zulip is an open-source team chat application designed for seamless remote and hybrid work. With conversations organized by topic, Zulip is ideal for both live and asynchronous communication. Zulips 100% open-source software is available as a cloud service or a self-hosted solution, and is used by thousands of organizations around the world. An important part of Zulips mission is ensuring that worthy organizations, from programming-language developers to research communities, are able to use Zulip whether or not they have funding. For this reason, we sponsor Zulip Cloud Standard for open source projects, non-profits, education, and academic research. This program has grown exponentially since its inception; today we are proud to fully sponsor Zulip hosting for several hundred organizations. Support from the community will help us continue to afford these programs as their popularity grows. ",
"webpageUrl": {
"url": "https://zulip.com/",
"wellKnown": "https://zulip.com/.well-known/funding-manifest-urls"
},
"repositoryUrl": {
"url": "https://github.com/zulip"
},
"licenses": ["spdx:Apache-2.0"],
"tags": ["communication", "team-chat", "collaboration"]
}
],
"funding": {
"channels": [
{
"guid": "github-sponsors",
"type": "payment-provider",
"address": "https://github.com/sponsors/zulip",
"description": "Preferred channel for sponsoring Zulip, since GitHub Sponsors does not charge any fees to sponsored projects."
},
{
"guid": "patreon",
"type": "payment-provider",
"address": "https://patreon.com/zulip"
},
{
"guid": "open-collective",
"type": "payment-provider",
"address": "https://opencollective.com/zulip"
}
],
"plans": [
{
"guid": "github-sponsors",
"status": "active",
"name": "Support Zulip",
"description": "Contribute to Zulip's development and free hosting for open source projects and other worthy organizations!",
"amount": 0,
"currency": "USD",
"frequency": "monthly",
"channels": ["github-sponsors"]
},
{
"guid": "patreon",
"status": "active",
"name": "Support Zulip",
"description": "Contribute to Zulip's development and free hosting for open source projects and other worthy organizations!",
"amount": 0,
"currency": "USD",
"frequency": "monthly",
"channels": ["patreon"]
},
{
"guid": "open-collective",
"status": "active",
"name": "Support Zulip",
"description": "Contribute to Zulip's development and free hosting for open source projects and other worthy organizations!",
"amount": 0,
"currency": "USD",
"frequency": "monthly",
"channels": ["open-collective"]
}
]
}
}


@@ -1,43 +1,11 @@
<!-- Describe your pull request here.-->
<!-- What's this PR for? (Just a link to an issue is fine.) -->
Fixes: <!-- Issue link, or clear description.-->
**Testing plan:** <!-- How have you tested? -->
<!-- If the PR makes UI changes, always include one or more still screenshots to demonstrate your changes. If it seems helpful, add a screen capture of the new functionality as well.
**GIFs or screenshots:** <!-- If a UI change. See:
https://zulip.readthedocs.io/en/latest/tutorials/screenshot-and-gif-software.html
-->
Tooling tips: https://zulip.readthedocs.io/en/latest/tutorials/screenshot-and-gif-software.html
-->
**Screenshots and screen captures:**
<details>
<summary>Self-review checklist</summary>
<!-- Prior to submitting a PR, follow our step-by-step guide to review your own code:
https://zulip.readthedocs.io/en/latest/contributing/code-reviewing.html#how-to-review-code -->
<!-- Once you create the PR, check off all the steps below that you have completed.
If any of these steps are not relevant or you have not completed, leave them unchecked.-->
- [ ] [Self-reviewed](https://zulip.readthedocs.io/en/latest/contributing/code-reviewing.html#how-to-review-code) the changes for clarity and maintainability
(variable names, code reuse, readability, etc.).
Communicate decisions, questions, and potential concerns.
- [ ] Explains differences from previous plans (e.g., issue description).
- [ ] Highlights technical choices and bugs encountered.
- [ ] Calls out remaining decisions and concerns.
- [ ] Automated tests verify logic where appropriate.
Individual commits are ready for review (see [commit discipline](https://zulip.readthedocs.io/en/latest/contributing/commit-discipline.html)).
- [ ] Each commit is a coherent idea.
- [ ] Commit message(s) explain reasoning and motivation for changes.
Completed manual review and testing of the following:
- [ ] Visual appearance of the changes.
- [ ] Responsiveness and internationalization.
- [ ] Strings and tooltips.
- [ ] End-to-end functionality of buttons, interactions and flows.
- [ ] Corner cases, error conditions, and easily imagined bugs.
</details>
<!-- Also be sure to make clear, coherent commits:
https://zulip.readthedocs.io/en/latest/contributing/version-control.html
-->


@@ -0,0 +1,43 @@
name: Cancel previous runs
on: [push, pull_request]
defaults:
run:
shell: bash
jobs:
cancel:
name: Cancel previous runs
runs-on: ubuntu-latest
timeout-minutes: 3
# Don't run this job for zulip/zulip pushes since we
# want to run those jobs.
if: ${{ github.event_name != 'push' || github.event.repository.full_name != 'zulip/zulip' }}
steps:
# We get workflow IDs from GitHub API so we don't have to maintain
# a hard-coded list of IDs which need to be updated when a workflow
# is added or removed. And, workflow IDs are different for other forks
# so this is required.
- name: Get workflow IDs.
id: workflow_ids
continue-on-error: true # Don't fail this job on failure
env:
# This is in <owner>/<repo> format e.g. zulip/zulip
REPOSITORY: ${{ github.repository }}
run: |
workflow_api_url=https://api.github.com/repos/$REPOSITORY/actions/workflows
curl -fL $workflow_api_url -o workflows.json
script="const {workflows} = require('./workflows'); \
const ids = workflows.map(workflow => workflow.id); \
console.log(ids.join(','));"
ids=$(node -e "$script")
echo "::set-output name=ids::$ids"
- uses: styfle/cancel-workflow-action@0.9.0
continue-on-error: true # Don't fail this job on failure
with:
workflow_id: ${{ steps.workflow_ids.outputs.ids }}
access_token: ${{ github.token }}
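
As background on the step above: it queries the GitHub REST API endpoint `GET /repos/<owner>/<repo>/actions/workflows` and joins the returned workflow IDs with commas. (The `::set-output` syntax it uses has since been deprecated by GitHub in favor of appending to `$GITHUB_OUTPUT`, which the newer workflows in this diff already do.) A minimal TypeScript sketch of the same lookup, assuming Node 18+ for the global `fetch`; the names are illustrative:

// Sketch: list a repository's workflow IDs, like the curl + node one-liner above.
const repository = process.env.REPOSITORY ?? "zulip/zulip"; // "<owner>/<repo>"

async function workflow_ids(repo: string): Promise<string> {
    const response = await fetch(`https://api.github.com/repos/${repo}/actions/workflows`);
    if (!response.ok) {
        throw new Error(`GitHub API returned ${response.status}`);
    }
    const {workflows} = (await response.json()) as {workflows: {id: number}[]};
    return workflows.map((workflow) => workflow.id).join(",");
}

workflow_ids(repository).then(console.log);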


@@ -1,46 +0,0 @@
name: Check feature level updated
on:
push:
branches: [main]
paths:
- "api_docs/**"
workflow_dispatch:
jobs:
check-feature-level-updated:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
token: ${{ secrets.GITHUB_TOKEN }}
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.x"
- name: Add required permissions
run: chmod +x ./tools/check-feature-level-updated
- name: Run tools/check-feature-level-updated
id: run_check
run: ./tools/check-feature-level-updated >> $GITHUB_OUTPUT
- name: Report status to CZO
if: ${{ steps.run_check.outputs.fail == 'true' && github.repository == 'zulip/zulip'}}
uses: zulip/github-actions-zulip/send-message@v1
with:
api-key: ${{ secrets.ZULIP_BOT_KEY }}
email: "github-actions-bot@chat.zulip.org"
organization-url: "https://chat.zulip.org"
to: "automated testing"
topic: ${{ steps.run_check.outputs.topic }}
type: "stream"
content: ${{ steps.run_check.outputs.content }}
- name: Fail job if feature level not updated in API docs
if: ${{ steps.run_check.outputs.fail == 'true' }}
run: exit 1


@@ -2,39 +2,26 @@ name: "Code scanning"
on:
push:
branches: ["*.x", chat.zulip.org, main]
tags: ["*"]
pull_request:
branches: ["*.x", chat.zulip.org, main]
workflow_dispatch:
concurrency:
group: "${{ github.workflow }}-${{ github.head_ref || github.run_id }}"
cancel-in-progress: true
permissions:
contents: read
branches-ignore:
- dependabot/** # https://github.com/github/codeql-action/pull/435
pull_request: {}
jobs:
CodeQL:
permissions:
actions: read # for github/codeql-action/init to get workflow details
contents: read # for actions/checkout to fetch code
security-events: write # for github/codeql-action/analyze to upload SARIF results
if: ${{!github.event.repository.private}}
runs-on: ubuntu-latest
steps:
- name: Check out repository
uses: actions/checkout@v4
uses: actions/checkout@v2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
uses: github/codeql-action/init@v1
# Override language selection by uncommenting this and choosing your languages
# with:
# languages: go, javascript, csharp, python, cpp, java
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
uses: github/codeql-action/analyze@v1


@@ -1,53 +1,42 @@
name: Zulip production suite
on:
push:
branches: ["*.x", chat.zulip.org, main]
tags: ["*"]
push: {}
pull_request:
paths:
- .github/workflows/production-suite.yml
- "**/migrations/**"
- babel.config.js
- manage.py
- pnpm-lock.yaml
- postcss.config.js
- puppet/**
- requirements/**
- scripts/**
- static/assets/**
- static/third/**
- tools/**
- uv.lock
- web/babel.config.js
- web/postcss.config.js
- web/third/**
- web/webpack.config.ts
- webpack.config.ts
- yarn.lock
- zerver/worker/queue_processors.py
- zerver/lib/push_notifications.py
- zerver/lib/storage.py
- zerver/decorator.py
- zproject/**
workflow_dispatch:
concurrency:
group: "${{ github.workflow }}-${{ github.head_ref || github.run_id }}"
cancel-in-progress: true
defaults:
run:
shell: bash
permissions:
contents: read
jobs:
production_build:
# This job builds a release tarball from the current commit, which
# will be used for all of the following install/upgrade tests.
name: Ubuntu 22.04 production build
name: Debian 10 production build
runs-on: ubuntu-latest
# Docker images are built from 'tools/ci/Dockerfile'; the comments at
# the top explain how to build and upload these images.
# Ubuntu 22.04 ships with Python 3.10.12.
container: zulip/ci:jammy
# Debian 10 ships with Python 3.7.3.
container: zulip/ci:buster
steps:
- name: Add required permissions
run: |
@@ -65,69 +54,50 @@ jobs:
# cache action to work. It is owned by root currently.
sudo chmod -R 0777 /__w/_temp/
- uses: actions/checkout@v4
- uses: actions/checkout@v2
- name: Create cache directories
run: |
dirs=(/srv/zulip-emoji-cache)
dirs=(/srv/zulip-{npm,venv,emoji}-cache)
sudo mkdir -p "${dirs[@]}"
sudo chown -R github "${dirs[@]}"
- name: Restore pnpm store
uses: actions/cache@v4
- name: Restore node_modules cache
uses: actions/cache@v2
with:
path: /__w/.pnpm-store
key: v1-pnpm-store-jammy-${{ hashFiles('pnpm-lock.yaml') }}
path: /srv/zulip-npm-cache
key: v1-yarn-deps-buster-${{ hashFiles('package.json') }}-${{ hashFiles('yarn.lock') }}
restore-keys: v1-yarn-deps-buster
- name: Restore uv cache
uses: actions/cache@v4
- name: Restore python cache
uses: actions/cache@v2
with:
path: ~/.cache/uv
key: uv-jammy-${{ hashFiles('uv.lock') }}
restore-keys: uv-jammy-
path: /srv/zulip-venv-cache
key: v1-venv-buster-${{ hashFiles('requirements/dev.txt') }}
restore-keys: v1-venv-buster
- name: Restore emoji cache
uses: actions/cache@v4
uses: actions/cache@v2
with:
path: /srv/zulip-emoji-cache
key: v1-emoji-jammy-${{ hashFiles('tools/setup/emoji/emoji_map.json') }}-${{ hashFiles('tools/setup/emoji/build_emoji') }}-${{ hashFiles('tools/setup/emoji/emoji_setup_utils.py') }}-${{ hashFiles('tools/setup/emoji/emoji_names.py') }}-${{ hashFiles('package.json') }}
restore-keys: v1-emoji-jammy
key: v1-emoji-buster-${{ hashFiles('tools/setup/emoji/emoji_map.json') }}-${{ hashFiles('tools/setup/emoji/build_emoji') }}-${{ hashFiles('tools/setup/emoji/emoji_setup_utils.py') }}-${{ hashFiles('tools/setup/emoji/emoji_names.py') }}-${{ hashFiles('package.json') }}
restore-keys: v1-emoji-buster
- name: Build production tarball
run: ./tools/ci/production-build
- name: Upload production build artifacts for install jobs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v2
with:
name: production-tarball
path: /tmp/production-build
retention-days: 1
retention-days: 14
- name: Verify pnpm store path
run: |
set -x
path="$(pnpm store path)"
[[ "$path" == /__w/.pnpm-store/* ]]
- name: Minimize uv cache
run: uv cache prune --ci
- name: Generate failure report string
id: failure_report_string
if: ${{ failure() && github.repository == 'zulip/zulip' && github.event_name == 'push' }}
run: tools/ci/generate-failure-message >> $GITHUB_OUTPUT
- name: Report status to CZO
if: ${{ failure() && github.repository == 'zulip/zulip' && github.event_name == 'push' }}
uses: zulip/github-actions-zulip/send-message@v1
with:
api-key: ${{ secrets.ZULIP_BOT_KEY }}
email: "github-actions-bot@chat.zulip.org"
organization-url: "https://chat.zulip.org"
to: "automated testing"
topic: ${{ steps.failure_report_string.outputs.topic }}
type: "stream"
content: ${{ steps.failure_report_string.outputs.content }}
- name: Report status
if: failure()
env:
ZULIP_BOT_KEY: ${{ secrets.ZULIP_BOT_KEY }}
run: tools/ci/send-failure-message
production_install:
# This job installs the server release tarball built above on a
@@ -136,28 +106,26 @@ jobs:
strategy:
fail-fast: false
matrix:
extra_args: [""]
include:
# Docker images are built from 'tools/ci/Dockerfile'; the comments at
# the top explain how to build and upload these images.
- docker_image: zulip/ci:focal
name: Ubuntu 20.04 production install
os: focal
- docker_image: zulip/ci:jammy
name: Ubuntu 22.04 production install and PostgreSQL upgrade with pgroonga
name: Ubuntu 22.04 production install
os: jammy
extra-args: ""
- docker_image: zulip/ci:noble
name: Ubuntu 24.04 production install
os: noble
extra-args: ""
- docker_image: zulip/ci:buster
name: Debian 10 production install with custom db name and user
os: buster
extra_args: --test-custom-db
- docker_image: zulip/ci:bookworm
name: Debian 12 production install with custom db name and user
os: bookworm
extra-args: --test-custom-db
- docker_image: zulip/ci:trixie
name: Debian 13 production install
os: trixie
extra-args: ""
- docker_image: zulip/ci:bullseye
name: Debian 11 production install
os: bullseye
name: ${{ matrix.name }}
container:
@@ -168,7 +136,7 @@ jobs:
steps:
- name: Download built production tarball
uses: actions/download-artifact@v4
uses: actions/download-artifact@v2
with:
name: production-tarball
path: /tmp
@@ -180,58 +148,56 @@ jobs:
# cache action to work. It is owned by root currently.
sudo chmod -R 0777 /__w/_temp/
# Since actions/download-artifact@v4 loses all the permissions
# Since actions/download-artifact@v2 loses all the permissions
# of the tarball uploaded by the upload artifact fix those.
chmod +x /tmp/production-upgrade-pg
chmod +x /tmp/production-pgroonga
chmod +x /tmp/production-install
chmod +x /tmp/production-verify
chmod +x /tmp/generate-failure-message
chmod +x /tmp/send-failure-message
- name: Create cache directories
run: |
dirs=(/srv/zulip-emoji-cache)
dirs=(/srv/zulip-{npm,venv,emoji}-cache)
sudo mkdir -p "${dirs[@]}"
sudo chown -R github "${dirs[@]}"
- name: Restore node_modules cache
uses: actions/cache@v2
with:
path: /srv/zulip-npm-cache
key: v1-yarn-deps-${{ matrix.os }}-${{ hashFiles('/tmp/package.json') }}-${{ hashFiles('/tmp/yarn.lock') }}
restore-keys: v1-yarn-deps-${{ matrix.os }}
- name: Install production
run: sudo /tmp/production-install ${{ matrix.extra-args }}
run: |
sudo service rabbitmq-server restart
sudo /tmp/production-install ${{ matrix.extra-args }}
- name: Verify install
run: sudo /tmp/production-verify ${{ matrix.extra-args }}
- name: Install pgroonga
if: ${{ matrix.os == 'jammy' }}
if: ${{ matrix.os == 'focal' }}
run: sudo /tmp/production-pgroonga
- name: Verify install after installing pgroonga
if: ${{ matrix.os == 'jammy' }}
if: ${{ matrix.os == 'focal' }}
run: sudo /tmp/production-verify ${{ matrix.extra-args }}
- name: Upgrade postgresql
if: ${{ matrix.os == 'jammy' }}
if: ${{ matrix.os == 'focal' }}
run: sudo /tmp/production-upgrade-pg
- name: Verify install after upgrading postgresql
if: ${{ matrix.os == 'jammy' }}
if: ${{ matrix.os == 'focal' }}
run: sudo /tmp/production-verify ${{ matrix.extra-args }}
- name: Generate failure report string
id: failure_report_string
if: ${{ failure() && github.repository == 'zulip/zulip' && github.event_name == 'push' }}
run: /tmp/generate-failure-message >> $GITHUB_OUTPUT
- name: Report status to CZO
if: ${{ failure() && github.repository == 'zulip/zulip' && github.event_name == 'push' }}
uses: zulip/github-actions-zulip/send-message@v1
with:
api-key: ${{ secrets.ZULIP_BOT_KEY }}
email: "github-actions-bot@chat.zulip.org"
organization-url: "https://chat.zulip.org"
to: "automated testing"
topic: ${{ steps.failure_report_string.outputs.topic }}
type: "stream"
content: ${{ steps.failure_report_string.outputs.content }}
- name: Report status
if: failure()
env:
ZULIP_BOT_KEY: ${{ secrets.ZULIP_BOT_KEY }}
run: /tmp/send-failure-message
production_upgrade:
# The production upgrade job starts with a container with a
@@ -244,23 +210,15 @@ jobs:
fail-fast: false
matrix:
include:
# Docker images are built from 'tools/ci/Dockerfile.prod'; the comments at
# Docker images are built from 'tools/ci/Dockerfile'; the comments at
# the top explain how to build and upload these images.
- docker_image: zulip/ci:jammy-6.0
name: 6.0 Version Upgrade
os: jammy
- docker_image: zulip/ci:bookworm-7.0
name: 7.0 Version Upgrade
os: bookworm
- docker_image: zulip/ci:bookworm-8.0
name: 8.0 Version Upgrade
os: bookworm
- docker_image: zulip/ci:noble-9.0
name: 9.0 Version Upgrade
os: noble
- docker_image: zulip/ci:noble-10.0
name: 10.0 Version Upgrade
os: noble
- docker_image: zulip/ci:buster-3.4
name: 3.4 Version Upgrade
os: buster
- docker_image: zulip/ci:bullseye-4.11
name: 4.11 Version Upgrade
os: bullseye
name: ${{ matrix.name }}
container:
@@ -271,7 +229,7 @@ jobs:
steps:
- name: Download built production tarball
uses: actions/download-artifact@v4
uses: actions/download-artifact@v2
with:
name: production-tarball
path: /tmp
@@ -283,15 +241,15 @@ jobs:
# cache action to work. It is owned by root currently.
sudo chmod -R 0777 /__w/_temp/
# Since actions/download-artifact@v4 loses all the permissions
# Since actions/download-artifact@v2 loses all the permissions
# of the tarball uploaded by the upload artifact fix those.
chmod +x /tmp/production-upgrade
chmod +x /tmp/production-verify
chmod +x /tmp/generate-failure-message
chmod +x /tmp/send-failure-message
- name: Create cache directories
run: |
dirs=(/srv/zulip-emoji-cache)
dirs=(/srv/zulip-{npm,venv,emoji}-cache)
sudo mkdir -p "${dirs[@]}"
sudo chown -R github "${dirs[@]}"
@@ -304,19 +262,8 @@ jobs:
# - name: Verify install
# run: sudo /tmp/production-verify
- name: Generate failure report string
id: failure_report_string
if: ${{ failure() && github.repository == 'zulip/zulip' && github.event_name == 'push' }}
run: /tmp/generate-failure-message >> $GITHUB_OUTPUT
- name: Report status to CZO
if: ${{ failure() && github.repository == 'zulip/zulip' && github.event_name == 'push' }}
uses: zulip/github-actions-zulip/send-message@v1
with:
api-key: ${{ secrets.ZULIP_BOT_KEY }}
email: "github-actions-bot@chat.zulip.org"
organization-url: "https://chat.zulip.org"
to: "automated testing"
topic: ${{ steps.failure_report_string.outputs.topic }}
type: "stream"
content: ${{ steps.failure_report_string.outputs.content }}
- name: Report status
if: failure()
env:
ZULIP_BOT_KEY: ${{ secrets.ZULIP_BOT_KEY }}
run: /tmp/send-failure-message


@@ -2,14 +2,11 @@ name: Update one click apps
on:
release:
types: [published]
permissions:
contents: read
jobs:
update-digitalocean-oneclick-app:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v2
- name: Update DigitalOcean one click app
env:
DIGITALOCEAN_API_KEY: ${{ secrets.ONE_CLICK_ACTION_DIGITALOCEAN_API_KEY }}
@@ -22,6 +19,6 @@ jobs:
run: |
export PATH="$HOME/.local/bin:$PATH"
git clone https://github.com/zulip/marketplace-partners
pip3 install python-digitalocean zulip fab-classic PyNaCl
pip3 install python-digitalocean zulip fab-classic
echo $PATH
python3 tools/oneclickapps/prepare_digital_ocean_one_click_app_release.py


@@ -4,56 +4,38 @@
name: Zulip CI
on:
push:
branches: ["*.x", chat.zulip.org, main]
tags: ["*"]
pull_request:
workflow_dispatch:
concurrency:
group: "${{ github.workflow }}-${{ github.head_ref || github.run_id }}"
cancel-in-progress: true
on: [push, pull_request]
defaults:
run:
shell: bash
permissions:
contents: read
jobs:
tests:
strategy:
fail-fast: false
matrix:
include_frontend_tests: [false]
include:
# Base images are built using `tools/ci/Dockerfile`.
# Base images are built using `tools/ci/Dockerfile.prod.template`.
# The comments at the top explain how to build and upload these images.
# Ubuntu 22.04 ships with Python 3.10.12.
- docker_image: zulip/ci:jammy
name: Ubuntu 22.04 (Python 3.10, backend + frontend)
os: jammy
include_documentation_tests: false
# Debian 10 ships with Python 3.7.3.
- docker_image: zulip/ci:buster
name: Debian 10 (Python 3.7, backend + frontend)
os: buster
include_frontend_tests: true
# Debian 12 ships with Python 3.11.2.
- docker_image: zulip/ci:bookworm
name: Debian 12 (Python 3.11, backend + documentation)
os: bookworm
include_documentation_tests: true
include_frontend_tests: false
# Ubuntu 24.04 ships with Python 3.12.2.
- docker_image: zulip/ci:noble
name: Ubuntu 24.04 (Python 3.12, backend)
os: noble
include_documentation_tests: false
include_frontend_tests: false
# Debian 13 ships with Python 3.13.5.
- docker_image: zulip/ci:trixie
name: Debian 13 (Python 3.13, backend)
os: trixie
include_documentation_tests: false
include_frontend_tests: false
# Ubuntu 20.04 ships with Python 3.8.2.
- docker_image: zulip/ci:focal
name: Ubuntu 20.04 (Python 3.8, backend)
os: focal
# Debian 11 ships with Python 3.9.2.
- docker_image: zulip/ci:bullseye
name: Debian 11 (Python 3.9, backend)
os: bullseye
# Ubuntu 22.04 ships with Python 3.10.4.
- docker_image: zulip/ci:jammy
name: Ubuntu 22.04 (Python 3.10, backend)
os: jammy
runs-on: ubuntu-latest
name: ${{ matrix.name }}
@@ -68,39 +50,43 @@ jobs:
HOME: /home/github/
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v2
- name: Create cache directories
run: |
dirs=(/srv/zulip-emoji-cache)
dirs=(/srv/zulip-{npm,venv,emoji}-cache)
sudo mkdir -p "${dirs[@]}"
sudo chown -R github "${dirs[@]}"
- name: Restore pnpm store
uses: actions/cache@v4
- name: Restore node_modules cache
uses: actions/cache@v2
with:
path: /__w/.pnpm-store
key: v1-pnpm-store-${{ matrix.os }}-${{ hashFiles('pnpm-lock.yaml') }}
path: /srv/zulip-npm-cache
key: v1-yarn-deps-${{ matrix.os }}-${{ hashFiles('package.json') }}-${{ hashFiles('yarn.lock') }}
restore-keys: v1-yarn-deps-${{ matrix.os }}
- name: Restore uv cache
uses: actions/cache@v4
- name: Restore python cache
uses: actions/cache@v2
with:
path: ~/.cache/uv
key: uv-${{ matrix.os }}-${{ hashFiles('uv.lock') }}
restore-keys: uv-${{ matrix.os }}-
path: /srv/zulip-venv-cache
key: v1-venv-${{ matrix.os }}-${{ hashFiles('requirements/dev.txt') }}
restore-keys: v1-venv-${{ matrix.os }}
- name: Restore emoji cache
uses: actions/cache@v4
uses: actions/cache@v2
with:
path: /srv/zulip-emoji-cache
key: v1-emoji-${{ matrix.os }}-${{ hashFiles('tools/setup/emoji/emoji_map.json', 'tools/setup/emoji/build_emoji', 'tools/setup/emoji/emoji_setup_utils.py', 'tools/setup/emoji/emoji_names.py', 'package.json') }}
key: v1-emoji-${{ matrix.os }}-${{ hashFiles('tools/setup/emoji/emoji_map.json') }}-${{ hashFiles('tools/setup/emoji/build_emoji') }}-${{ hashFiles('tools/setup/emoji/emoji_setup_utils.py') }}-${{ hashFiles('tools/setup/emoji/emoji_names.py') }}-${{ hashFiles('package.json') }}
restore-keys: v1-emoji-${{ matrix.os }}
- name: Install dependencies
run: |
# This is the main setup job for the test suite
./tools/ci/setup-backend --skip-dev-db-build
scripts/lib/clean_unused_caches.py --verbose --threshold=0
# Cleaning caches is mostly unnecessary in GitHub Actions, because
# most builds don't get to write to the cache.
# scripts/lib/clean_unused_caches.py --verbose --threshold 0
- name: Run tools test
run: |
@@ -112,13 +98,56 @@ jobs:
source tools/ci/activate-venv
./tools/run-codespell
# We run the tests that are only run in a specific job early, so
# that we get feedback to the developer about likely failures as
# quickly as possible. Backend/mypy failures that aren't
# identical across different versions are much more rare than
# frontend linter or node test failures.
- name: Run backend lint
run: |
source tools/ci/activate-venv
echo "Test suite is running under $(python --version)."
./tools/lint --groups=backend --skip=gitlint,mypy # gitlint disabled because flaky
- name: Run frontend lint
if: ${{ matrix.include_frontend_tests }}
run: |
source tools/ci/activate-venv
./tools/lint --groups=frontend --skip=gitlint # gitlint disabled because flaky
- name: Run backend tests
run: |
source tools/ci/activate-venv
./tools/test-backend --coverage --include-webhooks --no-cov-cleanup --ban-console-output
- name: Run mypy
run: |
source tools/ci/activate-venv
# We run mypy after the backend tests so we get output from the
# backend tests, which tend to uncover more serious problems, first.
./tools/run-mypy --version
./tools/run-mypy
- name: Run miscellaneous tests
run: |
source tools/ci/activate-venv
# Currently our compiled requirements files will differ for different python versions
# so we will run test-locked-requirements only for Debian 10.
# ./tools/test-locked-requirements
# ./tools/test-run-dev # https://github.com/zulip/zulip/pull/14233
#
# This test has been persistently flaky at like 1% frequency, is slow,
# and is for a very specific single feature, so we don't run it by default:
# ./tools/test-queue-worker-reload
./tools/test-migrations
./tools/setup/optimize-svg --check
./tools/setup/generate_integration_bots_avatars.py --check-missing
# Ban check-database-compatibility.py from transitively
# relying on static/generated, because it might not be
# up-to-date at that point in upgrade-zulip-stage-2.
chmod 000 static/generated
./scripts/lib/check-database-compatibility.py
chmod 755 static/generated
- name: Run documentation and api tests
if: ${{ matrix.include_documentation_tests }}
run: |
source tools/ci/activate-venv
# In CI, we only test links we control in test-documentation to avoid flakes
@@ -133,12 +162,6 @@ jobs:
# Run the node tests first, since they're fast and deterministic
./tools/test-js-with-node --coverage --parallel=1
- name: Run frontend lint
if: ${{ matrix.include_frontend_tests }}
run: |
source tools/ci/activate-venv
./tools/lint --groups=frontend --skip=gitlint # gitlint disabled because flaky
- name: Check schemas
if: ${{ matrix.include_frontend_tests }}
run: |
@@ -160,52 +183,6 @@ jobs:
source tools/ci/activate-venv
./tools/test-js-with-puppeteer
- name: Check pnpm dedupe
if: ${{ matrix.include_frontend_tests }}
run: pnpm dedupe --check
- name: Run backend lint
run: |
source tools/ci/activate-venv
echo "Test suite is running under $(python --version)."
./tools/lint --groups=backend --skip=gitlint,mypy # gitlint disabled because flaky
- name: Run backend tests
run: |
source tools/ci/activate-venv
./tools/test-backend ${{ matrix.os != 'bookworm' && '--coverage' || '' }} --xml-report --no-html-report --include-webhooks --include-transaction-tests --no-cov-cleanup --ban-console-output
- name: Run mypy
run: |
source tools/ci/activate-venv
# We run mypy after the backend tests so we get output from the
# backend tests, which tend to uncover more serious problems, first.
./tools/run-mypy --version
./tools/run-mypy
- name: Run miscellaneous tests
run: |
source tools/ci/activate-venv
uv lock --check
# ./tools/test-run-dev # https://github.com/zulip/zulip/pull/14233
#
# This test has been persistently flaky at like 1% frequency, is slow,
# and is for a very specific single feature, so we don't run it by default:
# ./tools/test-queue-worker-reload
./tools/test-migrations
./tools/setup/optimize-svg --check
./tools/setup/generate_integration_bots_avatars.py --check-missing
./tools/ci/check-executables
# Ban check-database-compatibility from transitively
# relying on static/generated, because it might not be
# up-to-date at that point in upgrade-zulip-stage-2.
chmod 000 static/generated web/generated
./scripts/lib/check-database-compatibility
chmod 755 static/generated web/generated
- name: Check for untracked files
run: |
source tools/ci/activate-venv
@@ -217,50 +194,36 @@ jobs:
exit 1
fi
- name: Test locked requirements
if: ${{ matrix.os == 'buster' }}
run: |
. /srv/zulip-py3-venv/bin/activate && \
./tools/test-locked-requirements
- name: Upload coverage reports
# Only upload coverage when both frontend and backend
# tests are run.
if: ${{ matrix.include_frontend_tests }}
uses: codecov/codecov-action@v4
uses: codecov/codecov-action@v2
with:
files: var/coverage.xml,var/node-coverage/lcov.info
token: ${{ secrets.CODECOV_TOKEN }}
- name: Store Puppeteer artifacts
# Upload these on failure, as well
if: ${{ always() && matrix.include_frontend_tests }}
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v2
with:
name: puppeteer
path: ./var/puppeteer
retention-days: 60
- name: Check development database build
if: ${{ matrix.os == 'focal' || matrix.os == 'bullseye' || matrix.os == 'jammy' }}
run: ./tools/ci/setup-backend
- name: Verify pnpm store path
run: |
set -x
path="$(pnpm store path)"
[[ "$path" == /__w/.pnpm-store/* ]]
- name: Minimize uv cache
run: uv cache prune --ci
- name: Generate failure report string
id: failure_report_string
if: ${{ failure() && github.repository == 'zulip/zulip' && github.event_name == 'push' }}
run: tools/ci/generate-failure-message >> $GITHUB_OUTPUT
- name: Report status to CZO
if: ${{ failure() && github.repository == 'zulip/zulip' && github.event_name == 'push' }}
uses: zulip/github-actions-zulip/send-message@v1
with:
api-key: ${{ secrets.ZULIP_BOT_KEY }}
email: "github-actions-bot@chat.zulip.org"
organization-url: "https://chat.zulip.org"
to: "automated testing"
topic: ${{ steps.failure_report_string.outputs.topic }}
type: "stream"
content: ${{ steps.failure_report_string.outputs.content }}
- name: Report status
if: failure()
env:
ZULIP_BOT_KEY: ${{ secrets.ZULIP_BOT_KEY }}
run: tools/ci/send-failure-message

.gitignore

@@ -17,25 +17,22 @@
# See `git help ignore` for details on the format.
## Config files for the dev environment
/zproject/apns-dev.pem
/zproject/apns-dev-key.p8
/zproject/dev-secrets.conf
/zproject/custom_dev_settings.py
/tools/conf.ini
/tools/custom_provision
/tools/droplets/conf.ini
## Byproducts of setting up and using the dev environment
*.pyc
*.tsbuildinfo
package-lock.json
/.vagrant
/var
/var/*
!/var/puppeteer
/var/puppeteer/*
!/var/puppeteer/test_credentials.d.ts
/.dmypy.json
/.ruff_cache
/.venv
# Generated i18n data
/locale/en
@@ -46,11 +43,11 @@ package-lock.json
# Static build
*.mo
npm-debug.log
/.pnpm-store
/node_modules
/prod-static
/staticfiles.json
/webpack-stats-production.json
/yarn-error.log
zulip-git-version
# Test / analysis tools
@@ -58,6 +55,8 @@ zulip-git-version
## Files (or really symlinks) created in a prod deployment
/zproject/prod_settings.py
/zulip-current-venv
/zulip-py3-venv
## Files left by various editors and local environments
# (Ideally these should be in everyone's respective personal gitignore files.)
@@ -83,9 +82,6 @@ zulip.kdev4
# Core dump files
core
# Static generated files for landing page.
/static/images/landing-page/hello/generated
## Miscellaneous
# (Ideally this section is empty.)
.transifexrc


@@ -1,13 +1,13 @@
[general]
ignore=title-trailing-punctuation, body-min-length, body-is-missing
extra-path=tools/lib/gitlint_rules.py
extra-path=tools/lib/gitlint-rules.py
[title-match-regex]
regex=^(.+:\ )?[A-Z].+\.$
[title-max-length]
line-length=72
line-length=76
[body-max-line-length]
line-length=76

.mailmap

@@ -12,174 +12,62 @@
# # shows raw names/emails, filtered by mapped name:
# $ git log --format='%an %ae' --author=$NAME | uniq -c
acrefoot <acrefoot@zulip.com> <acrefoot@alum.mit.edu>
acrefoot <acrefoot@zulip.com> <acrefoot@dropbox.com>
acrefoot <acrefoot@zulip.com> <acrefoot@humbughq.com>
Adam Benesh <Adam.Benesh@gmail.com>
Adam Benesh <Adam.Benesh@gmail.com> <Adam-Daniel.Benesh@t-systems.com>
Adarsh Tiwari <xoldyckk@gmail.com>
Aditya Chaudhary <aditya.chaudhary1558@gmail.com>
Adnan Shabbir Husain <generaladnan139@gmail.com>
Adnan Shabbir Husain <generaladnan139@gmail.com> <78212328+adnan-td@users.noreply.github.com>
Alex Vandiver <alexmv@zulip.com> <alex@chmrr.net>
Alex Vandiver <alexmv@zulip.com> <github@chmrr.net>
Allen Rabinovich <allenrabinovich@yahoo.com> <allenr@humbughq.com>
Allen Rabinovich <allenrabinovich@yahoo.com> <allenr@zulip.com>
Alya Abbott <alya@zulip.com> <2090066+alya@users.noreply.github.com>
Alya Abbott <alya@zulip.com> <alyaabbott@elance-odesk.com>
Aman Agrawal <amanagr@zulip.com>
Aman Agrawal <amanagr@zulip.com> <f2016561@pilani.bits-pilani.ac.in>
Aman Vishwakarma <vishwakarmarambhawan572@gmail.com>
Aman Vishwakarma <vishwakarmarambhawan572@gmail.com> <185982038+whilstsomebody@users.noreply.github.com>
Aman Vishwakarma <vishwakarmarambhawan572@gmail.com> <whilstsomebody@gmail.com>
Anders Kaseorg <anders@zulip.com> <anders@zulipchat.com>
Anders Kaseorg <anders@zulip.com> <andersk@mit.edu>
aparna-bhatt <aparnabhatt2001@gmail.com> <86338542+aparna-bhatt@users.noreply.github.com>
Apoorva Pendse <apoorvavpendse@gmail.com>
Aryan Bhokare <aryan1bhokare@gmail.com>
Aryan Bhokare <aryan1bhokare@gmail.com> <92683836+aryan-bhokare@users.noreply.github.com>
Aryan Shridhar <aryanshridhar7@gmail.com>
Aryan Shridhar <aryanshridhar7@gmail.com> <53977614+aryanshridhar@users.noreply.github.com>
Ashwat Kumar Singh <ashwat.kumarsingh.met20@itbhu.ac.in>
Austin Riba <austin@zulip.com> <austin@m51.io>
Bedo Khaled <bedokhaled66@gmail.com>
Bedo Khaled <bedokhaled66@gmail.com> <64221784+abdelrahman725@users.noreply.github.com>
BIKI DAS <bikid475@gmail.com>
Brijmohan Siyag <brijsiyag@gmail.com>
Brock Whittaker <whittakerbrock@gmail.com> <bjwhitta@asu.edu>
Brock Whittaker <whittakerbrock@gmail.com> <brock@zulip.com>
Brock Whittaker <whittakerbrock@gmail.com> <brock@zulip.org>
Brock Whittaker <whittakerbrock@gmail.com> <brock@zulipchat.org>
Brock Whittaker <whittakerbrock@gmail.com> <brockwhittaker@Brocks-MacBook.local>
Brock Whittaker <brock@zulipchat.com> <bjwhitta@asu.edu>
Brock Whittaker <brock@zulipchat.com> <brockwhittaker@Brocks-MacBook.local>
Brock Whittaker <brock@zulipchat.com> <brock@zulipchat.org>
Chris Bobbe <cbobbe@zulip.com> <cbobbe@zulipchat.com>
Chris Bobbe <cbobbe@zulip.com> <csbobbe@gmail.com>
codewithnick <nikhilsingh526452@gmail.com>
Danny Su <contact@dannysu.com> <opensource@emailengine.org>
Dhruv Goyal <dhruvgoyal.dev@gmail.com>
Dinesh <chdinesh1089@gmail.com>
Dinesh <chdinesh1089@gmail.com> <chdinesh1089>
Eeshan Garg <eeshan@zulip.com> <jerryguitarist@gmail.com>
Eric Smith <erwsmith@gmail.com> <99841919+erwsmith@users.noreply.github.com>
Evy Kassirer <evy@zulip.com>
Evy Kassirer <evy@zulip.com> <evy.kassirer@gmail.com>
Evy Kassirer <evy@zulip.com> <evykassirer@users.noreply.github.com>
Ganesh Pawar <pawarg256@gmail.com> <58626718+ganpa3@users.noreply.github.com>
Greg Price <greg@zulip.com> <gnprice@gmail.com>
Greg Price <greg@zulip.com> <greg@zulipchat.com>
Greg Price <greg@zulip.com> <price@mit.edu>
Hardik Dharmani <Ddharmani99@gmail.com> <ddharmani99@gmail.com>
Harsh Bansal <harsh@harshbansal.in>
Harsh Meena <reharshmeena@gmail.com>
Harsh Meena <reharshmeena@gmail.com> <116981900+reharsh@users.noreply.github.com>
Hemant Umre <hemantumre12@gmail.com> <87542880+HemantUmre12@users.noreply.github.com>
Jai soni <jai_s@me.iitr.ac.in>
Jai soni <jai_s@me.iitr.ac.in> <76561593+jai2201@users.noreply.github.com>
Jeff Arnold <jbarnold@gmail.com> <jbarnold@humbughq.com>
Jeff Arnold <jbarnold@gmail.com> <jbarnold@zulip.com>
Jessica McKellar <jesstess@mit.edu> <jesstess@humbughq.com>
Jessica McKellar <jesstess@mit.edu> <jesstess@zulip.com>
Jitendra Kumar <jk69854@gmail.com>
Jitendra Kumar <jk69854@gmail.com> <36557466+jitendra-ky@users.noreply.github.com>
John Lu <JohnLu10212004@gmail.com>
John Lu <JohnLu10212004@gmail.com> <87673068+JohnLu2004@users.noreply.github.com>
Joseph Ho <josephho678@gmail.com>
Joseph Ho <josephho678@gmail.com> <62449508+Joelute@users.noreply.github.com>
Julia Bichler <julia.bichler@tum.de> <74348920+juliaBichler01@users.noreply.github.com>
Karl Stolley <karl@zulip.com> <karl@stolley.dev>
Kartikay Sambher <kartikaysambher@gmail.com>
Kevin Mehall <km@kevinmehall.net> <kevin@humbughq.com>
Kevin Mehall <km@kevinmehall.net> <kevin@zulip.com>
Kevin Scott <kevin.scott.98@gmail.com>
Kislay Verma <kislayuv27@gmail.com>
Klara Bratteby <klara.bratteby@gmail.com>
Klara Bratteby <klara.bratteby@gmail.com> <93648999+klarabratteby@users.noreply.github.com>
Kumar Aniket <sachinaniket2004@gmail.com>
Kumar Aniket <sachinaniket2004@gmail.com> <142340063+opmkumar@users.noreply.github.com>
Kunal Sharma <v.shm.kunal@gmail.com>
Lalit Kumar Singh <lalitkumarsingh3716@gmail.com>
Lalit Kumar Singh <lalitkumarsingh3716@gmail.com> <lalits01@smartek21.com>
Lauryn Menard <lauryn@zulip.com> <63245456+laurynmm@users.noreply.github.com>
Lauryn Menard <lauryn@zulip.com> <lauryn.menard@gmail.com>
m-e-l-u-h-a-n <purushottam.tiwari.cd.cse19@itbhu.ac.in>
m-e-l-u-h-a-n <purushottam.tiwari.cd.cse19@itbhu.ac.in> <pururshottam.tiwari.cd.cse19@itbhu.ac.in>
Maneesh Shukla <shuklamaneesh24@gmail.com> <143504391+shuklamaneesh23@users.noreply.github.com>
Mateusz Mandera <mateusz.mandera@zulip.com> <mateusz.mandera@protonmail.com>
Matt Keller <matt@zulip.com>
Matt Keller <matt@zulip.com> <m@cognusion.com>
Nehal Sharma <bablinaneh@gmail.com>
Nehal Sharma <bablinaneh@gmail.com> <68962290+N-Shar-ma@users.noreply.github.com>
Nimish Medatwal <medatwalnimish@gmail.com>
Noble Mittal <noblemittal@outlook.com> <62551163+beingnoble03@users.noreply.github.com>
nzai <nzaih18@gmail.com> <70953556+nzaih1999@users.noreply.github.com>
Palash Baderia <palash.baderia@outlook.com>
Palash Baderia <palash.baderia@outlook.com> <66828942+palashb01@users.noreply.github.com>
m-e-l-u-h-a-n <purushottam.tiwari.cd.cse19@itbhu.ac.in>
Palash Raghuwanshi <singhpalash0@gmail.com>
Parth <mittalparth22@gmail.com>
Prakhar Pratyush <prakhar@zulip.com> <prakhar841301@gmail.com>
Pratik Chanda <pratikchanda2000@gmail.com>
Pratik Solanki <pratiksolanki2021@gmail.com>
Priyam Seth <sethpriyam1@gmail.com> <b19188@students.iitmandi.ac.in>
Ray Kraesig <rkraesig@zulip.com> <rkraesig@zulipchat.com>
Reid Barton <rwbarton@gmail.com> <rwbarton@humbughq.com>
Rein Zustand (rht) <rhtbot@protonmail.com>
Rishabh Maheshwari <b20063@students.iitmandi.ac.in>
Rishi Gupta <rishig@zulipchat.com> <rishig+git@mit.edu>
Rishi Gupta <rishig@zulipchat.com> <rishig@kandralabs.com>
Rishi Gupta <rishig@zulipchat.com> <rishig@users.noreply.github.com>
Ritwik Patnaik <ritwikpatnaik@gmail.com>
Rixant Rokaha <rixantrokaha@gmail.com>
Rixant Rokaha <rixantrokaha@gmail.com> <rishantrokaha@gmail.com>
Rixant Rokaha <rixantrokaha@gmail.com> <rrokaha@caldwell.edu>
Rohan Gudimetla <rohan.gudimetla07@gmail.com>
Sahil Batra <sahil@zulip.com> <35494118+sahil839@users.noreply.github.com>
Sahil Batra <sahil@zulip.com> <sahilbatra839@gmail.com>
Sanchit Sharma <ssharmas10662@gmail.com>
Satyam Bansal <sbansal1999@gmail.com>
Reid Barton <rwbarton@gmail.com> <rwbarton@humbughq.com>
Sayam Samal <samal.sayam@gmail.com>
Scott Feeney <scott@oceanbase.org> <scott@humbughq.com>
Scott Feeney <scott@oceanbase.org> <scott@zulip.com>
Shashank Singh <21bec103@iiitdmj.ac.in>
Shlok Patel <shlokcpatel2001@gmail.com>
Shu Chen <shu@zulip.com>
Shubham Padia <shubham@zulip.com>
Shubham Padia <shubham@zulip.com> <shubham-padia@users.noreply.github.com>
Shubham Padia <shubham@zulip.com> <shubham@glints.com>
Somesh Ranjan <somesh.ranjan.met20@itbhu.ac.in> <77766761+somesh202@users.noreply.github.com>
Steve Howell <showell@zulip.com> <showell30@yahoo.com>
Steve Howell <showell@zulip.com> <showell@yahoo.com>
Steve Howell <showell@zulip.com> <showell@zulipchat.com>
Steve Howell <showell@zulip.com> <steve@humbughq.com>
Steve Howell <showell@zulip.com> <steve@zulip.com>
strifel <info@strifel.de>
Sujal Shah <sujalshah28092004@gmail.com>
Tanmay Kumar <tnmdotkr@gmail.com>
Tanmay Kumar <tnmdotkr@gmail.com> <133781250+tnmkr@users.noreply.github.com>
Tim Abbott <tabbott@zulip.com>
Tim Abbott <tabbott@zulip.com> <tabbott@dropbox.com>
Tim Abbott <tabbott@zulip.com> <tabbott@humbughq.com>
Tim Abbott <tabbott@zulip.com> <tabbott@mit.edu>
Tim Abbott <tabbott@zulip.com> <tabbott@zulipchat.com>
Tomasz Kolek <tomasz-kolek@o2.pl> <tomasz-kolek@go2.pl>
Ujjawal Modi <umodi2003@gmail.com> <99073049+Ujjawal3@users.noreply.github.com>
umkay <ukhan@zulipchat.com> <umaimah.k@gmail.com>
umkay <ukhan@zulipchat.com> <umkay@users.noreply.github.com>
Viktor Illmer <1476338+v-ji@users.noreply.github.com>
Vishesh Singh <vishesh.bhu1971@gmail.com>
Vishesh Singh <vishesh.bhu1971@gmail.com> <142628839+NotVishesh@users.noreply.github.com>
Vishnu KS <vishnu@zulip.com> <hackerkid@vishnuks.com>
Vishnu KS <vishnu@zulip.com> <yo@vishnuks.com>
Vivek Tripathi <vivektripathi8005@gmail.com>
Waseem Daher <wdaher@zulip.com> <wdaher@dropbox.com>
Waseem Daher <wdaher@zulip.com> <wdaher@humbughq.com>
Yash RE <33805964+YashRE42@users.noreply.github.com>
Alya Abbott <alya@zulip.com> <alyaabbott@elance-odesk.com>
Sahil Batra <sahil@zulip.com> <sahilbatra839@gmail.com>
Yash RE <33805964+YashRE42@users.noreply.github.com> <YashRE42@github.com>
Yash RE <33805964+YashRE42@users.noreply.github.com>
Yogesh Sirsat <yogeshsirsat56@gmail.com>
Yogesh Sirsat <yogeshsirsat56@gmail.com> <41695888+yogesh-sirsat@users.noreply.github.com>
Zeeshan Equbal <equbalzeeshan@gmail.com>
Zeeshan Equbal <equbalzeeshan@gmail.com> <54993043+zee-bit@users.noreply.github.com>
Zev Benjamin <zev@zulip.com> <zev@dropbox.com>
Zev Benjamin <zev@zulip.com> <zev@humbughq.com>
Zev Benjamin <zev@zulip.com> <zev@mit.edu>
Zixuan James Li <p359101898@gmail.com>
Zixuan James Li <p359101898@gmail.com> <359101898@qq.com>
Zixuan James Li <p359101898@gmail.com> <39874143+PIG208@users.noreply.github.com>


@@ -1,17 +1,8 @@
pnpm-lock.yaml
/api_docs/**/*.md
/corporate/tests/stripe_fixtures
/help/**/*.md
/locale
/static/third
/templates/**/*.md
/tools/setup/emoji/emoji_map.json
/web/third/*
!/web/third/marked
/web/third/marked/*
!/web/third/marked/lib
/web/third/marked/lib/*
!/web/third/marked/lib/marked.d.cts
/zerver/tests/fixtures
/zerver/webhooks/*/doc.md
/zerver/webhooks/github/githubsponsors.md
/zerver/webhooks/*/fixtures

.pyre_configuration

@@ -0,0 +1,15 @@
{
"source_directories": ["."],
"taint_models_path": [
"stubs/taint",
"zulip-py3-venv/lib/pyre_check/taint/"
],
"search_path": [
"stubs/",
"zulip-py3-venv/lib/pyre_check/stubs/"
],
"typeshed": "zulip-py3-venv/lib/pyre_check/typeshed/",
"exclude": [
"/srv/zulip/zulip-py3-venv/.*"
]
}


@@ -1,19 +0,0 @@
# https://docs.readthedocs.io/en/stable/config-file/v2.html
version: 2
build:
os: ubuntu-22.04
tools:
python: "3.10"
jobs:
create_environment:
- asdf plugin add uv
- asdf install uv 0.6.6
- asdf global uv 0.6.6
- UV_PROJECT_ENVIRONMENT=$READTHEDOCS_VIRTUALENV_PATH uv venv
install:
- UV_PROJECT_ENVIRONMENT=$READTHEDOCS_VIRTUALENV_PATH uv sync --frozen --only-group=docs
sphinx:
configuration: docs/conf.py
fail_on_warning: true

.tx/config

@@ -0,0 +1,33 @@
[main]
host = https://www.transifex.com
lang_map = zh-Hans: zh_Hans, zh-Hant: zh_Hant
[zulip.djangopo]
file_filter = locale/<lang>/LC_MESSAGES/django.po
source_file = locale/en/LC_MESSAGES/django.po
source_lang = en
type = PO
[zulip.translationsjson]
file_filter = locale/<lang>/translations.json
source_file = locale/en/translations.json
source_lang = en
type = KEYVALUEJSON
[zulip.mobile]
file_filter = locale/<lang>/mobile.json
source_file = locale/en/mobile.json
source_lang = en
type = KEYVALUEJSON
[zulip-test.djangopo]
file_filter = locale/<lang>/LC_MESSAGES/django.po
source_file = locale/en/LC_MESSAGES/django.po
source_lang = en
type = PO
[zulip-test.translationsjson]
file_filter = locale/<lang>/translations.json
source_file = locale/en/translations.json
source_lang = en
type = KEYVALUEJSON

.yarnrc

@@ -0,0 +1 @@
ignore-scripts true


@@ -66,7 +66,7 @@ organizers may take any action they deem appropriate, up to and including a
temporary ban or permanent expulsion from the community without warning (and
without refund in the case of a paid event).
If someone outside the development community (e.g., a user of the Zulip
If someone outside the development community (e.g. a user of the Zulip
software) engages in unacceptable behavior that affects someone in the
community, we still want to know. Even if we don't have direct control over
the violator, the community organizers can still support the people
@@ -102,72 +102,3 @@ This Code of Conduct is adapted from the
under a
[Creative Commons BY-SA](https://creativecommons.org/licenses/by-sa/4.0/)
license.
## Moderating the Zulip community
Anyone can help moderate the Zulip community by helping make sure that folks are
aware of the [community guidelines](https://zulip.com/development-community/)
and this Code of Conduct, and that we maintain a positive and respectful
atmosphere.
Here are some guidelines for how you can help:
- Be friendly! Welcoming folks, thanking them for their feedback, ideas and effort,
and just trying to keep the atmosphere warm make the whole community function
more smoothly. New participants who feel accepted, listened to and respected
are likely to treat others the same way.
- Be familiar with the [community
guidelines](https://zulip.com/development-community/), and cite them liberally
when a user violates them. Be polite but firm. Some examples:
- @user please note that there is no need to @-mention @\_**Tim Abbott** when
you ask a question. As noted in the [guidelines for this
community](https://zulip.com/development-community/):
> Use @-mentions sparingly… there is generally no need to @-mention a
> core contributor unless you need their timely attention.
- @user, please keep in mind the following [community
guideline](https://zulip.com/development-community/):
> Dont ask the same question in multiple places. Moderators read every
> public stream, and make sure every question gets a reply.
Ive gone ahead and moved the other copy of this message to this thread.
- If asked a question in a direct message that is better discussed in a public
stream:
> Hi @user! Please start by reviewing
> https://zulip.com/development-community/#community-norms to learn how to
> get help in this community.
- Users sometimes think chat.zulip.org is a testing instance. When this happens,
kindly direct them to use the **#test here** stream.
- If you see a message that's posted in the wrong place, go ahead and move it if
you have permissions to do so, even if you don't plan to respond to it.
Leaving the “Send automated notice to new topic” option enabled helps make it
clear what happened to the person who sent the message.
If you are responding to a message that's been moved, mention the user in your
reply, so that the mention serves as a notification of the new location for
their conversation.
- If a user is posting spam, please report it to an administrator. They will:
- Change the user's name to `<name> (spammer)` and deactivate them.
- Delete any spam messages they posted in public streams.
- We care very much about maintaining a respectful tone in our community. If you
see someone being mean or rude, point out that their tone is inappropriate,
and ask them to communicate their perspective in a respectful way in the
future. If you don't feel comfortable doing so yourself, feel free to ask a
member of Zulip's core team to take care of the situation.
- Try to assume the best intentions from others (given the range of
possibilities presented by their visible behavior), and stick with a friendly
and positive tone even when someone's behavior is poor or disrespectful.
Everyone has bad days and stressful situations that can result in them
behaving not their best, and while we should be firm about our community
rules, we should also enforce them with kindness.


@@ -1,102 +1,82 @@
# Contributing guide
# Contributing to Zulip
Welcome! This is a step-by-step guide on how to get started contributing code to
the [Zulip](https://zulip.com/) organized team chat [open-source
project](https://github.com/zulip). Thousands of people use Zulip every day, and
your work on Zulip will have a meaningful impact on their experience. We hope
you'll join us!
Welcome to the Zulip community!
To learn about ways to contribute without writing code, please see our
suggestions for how you can [support the Zulip
project](https://zulip.com/help/support-zulip-project).
## Community
## Learning from the docs
The
[Zulip community server](https://zulip.com/development-community/)
is the primary communication forum for the Zulip community. It is a good
place to start whether you have a question, are a new contributor, are a new
user, or anything else. Please review our
[community norms](https://zulip.com/development-community/#community-norms)
before posting. The Zulip community is also governed by a
[code of conduct](https://zulip.readthedocs.io/en/latest/code-of-conduct.html).
Zulip has a documentation-based approach to onboarding new contributors. As you
are getting started, this page will be your go-to for figuring out what to do
next. You will also explore other guides, learning about how to put together
your first pull request, diving into [Zulip's
subsystems](https://zulip.readthedocs.io/en/latest/subsystems/index.html), and
much more. We hope you'll find this process to be a great learning experience.
## Ways to contribute
This page will guide you through the following steps:
To make a code or documentation contribution, read our
[step-by-step guide](#your-first-codebase-contribution) to getting
started with the Zulip codebase. A small sample of the type of work that
needs doing:
1. [Getting started](#getting-started)
1. [Finding an issue to work on](#finding-an-issue-to-work-on)
1. [Getting help](#getting-help) as you work on your first pull request
1. Learning [what makes a great Zulip contributor](#what-makes-a-great-zulip-contributor)
1. [Submitting a pull request](#submitting-a-pull-request)
1. [Going beyond the first issue](#beyond-the-first-issue)
- Bug squashing and feature development on our Python/Django
[backend](https://github.com/zulip/zulip), web
[frontend](https://github.com/zulip/zulip), React Native
[mobile app](https://github.com/zulip/zulip-mobile), or Electron
[desktop app](https://github.com/zulip/zulip-desktop).
- Building out our
[Python API and bots](https://github.com/zulip/python-zulip-api) framework.
- [Writing an integration](https://zulip.com/api/integrations-overview).
- Improving our [user](https://zulip.com/help/) or
[developer](https://zulip.readthedocs.io/en/latest/) documentation.
- [Reviewing code](https://zulip.readthedocs.io/en/latest/contributing/code-reviewing.html)
and manually testing pull requests.
Any time you feel lost, come back to this guide. The information you need is
likely somewhere on this page (perhaps in the list of [common
questions](#common-questions)), or in one of the many references it points to.
**Non-code contributions**: Some of the most valuable ways to contribute
don't require touching the codebase at all. For example, you can:
If you've done all you can with the documentation and are still feeling stuck,
join the [Zulip development community](https://zulip.com/development-community/)
to ask for help! Before you post, be sure to review [community
norms](https://zulip.com/development-community/#community-norms) and [where to
post](https://zulip.com/development-community/#where-do-i-send-my-message) your
question. The Zulip community is governed by a [code of
conduct](https://zulip.readthedocs.io/en/latest/code-of-conduct.html).
- [Report issues](#reporting-issues), including both feature requests and
bug reports.
- [Give feedback](#user-feedback) if you are evaluating or using Zulip.
- [Sponsor Zulip](https://github.com/sponsors/zulip) through the GitHub sponsors program.
- [Translate](https://zulip.readthedocs.io/en/latest/translating/translating.html)
Zulip into your language.
- [Stay connected](#stay-connected) with Zulip, and [help others
find us](#help-others-find-zulip).
## Getting started
## Your first codebase contribution
### Learning how to use Git (the Zulip way)
This section has a step by step guide to starting as a Zulip codebase
contributor. It's long, but don't worry about doing all the steps perfectly;
no one gets it right the first time, and there are a lot of people available
to help.
Zulip uses GitHub for source control and code review, and becoming familiar with
Git is essential for navigating and contributing to the Zulip codebase. [Our
guide to Git](https://zulip.readthedocs.io/en/latest/git/index.html) will help
you get started even if you've never used Git before, and you can ask for help
in [#git help](https://chat.zulip.org/#narrow/stream/44-git-help) if you run
into trouble.
If you're familiar with Git, you'll still want to take a look at [our
Zulip-specific Git
tools](https://zulip.readthedocs.io/en/latest/git/zulip-tools.html).
### Setting up your development environment and diving in
To get started contributing code to Zulip, you will need to set up the
development environment for the Zulip codebase you want to work on. You'll then
want to take some time to familiarize yourself with the code.
#### Server and web app
1. [Install the development
environment](https://zulip.readthedocs.io/en/latest/development/overview.html),
getting help in
[#provision help](https://chat.zulip.org/#narrow/stream/21-provision-help)
if you run into any trouble.
1. Familiarize yourself with [using the development
environment](https://zulip.readthedocs.io/en/latest/development/using.html).
1. Go through the [new application feature
tutorial](https://zulip.readthedocs.io/en/latest/tutorials/new-feature-tutorial.html)
to get familiar with how the Zulip codebase is organized and how to find code
in it.
#### Flutter-based mobile app
1. Set up a development environment following the instructions in [the project
README](https://github.com/zulip/zulip-flutter).
1. Start reading recent commits to see the code we're writing.
Use either a [graphical Git viewer][] like `gitk`, or `git log -p`
with [the "secret" to reading its output][git-log-secret].
1. Pick some of the code that appears in those Git commits and that looks
interesting. Use your IDE to visit that code and to navigate to related code,
reading to see how it works and how the codebase is organized.
[graphical Git viewer]: https://zulip.readthedocs.io/en/latest/git/setup.html#get-a-graphical-client
[git-log-secret]: https://github.com/zulip/zulip-mobile/blob/main/docs/howto/git.md#git-log-secret
#### Desktop app
Follow [this
documentation](https://github.com/zulip/zulip-desktop/blob/main/development.md)
to set up the Zulip Desktop development environment.
#### Terminal app
Follow [this
documentation](https://github.com/zulip/zulip-terminal?tab=readme-ov-file#setting-up-a-development-environment)
to set up the Zulip Terminal development environment.
## Finding an issue to work on
### Where to look for an issue
You can look through issues tagged with the "help wanted" label, which is used
to indicate the issues that are open for contributions. You'll be able to claim
unassigned issues, which you can find using the `no:assignee` filter in GitHub.
You can also pick up issues that are assigned but are no longer being worked on.
Some repositories use the "good first issue" label to tag issues that are
especially approachable for new contributors.

Here are some handy links for issues to look through:
- [Server and web app](https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)
- Mobile apps: no "help wanted" label, but see the
[project board](https://github.com/orgs/zulip/projects/5/views/4)
for the upcoming Flutter-based app. Look for issues up through the
"Launch" milestone, and that aren't already assigned.
- [Desktop app](https://github.com/zulip/zulip-desktop/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)
- [Terminal app](https://github.com/zulip/zulip-terminal/issues?q=is%3Aopen+is%3Aissue+label%3A"help+wanted")
- [Python API bindings and bots](https://github.com/zulip/python-zulip-api/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)
We recommend the following process for finding an issue to work on:
1. Find an issue tagged with the "help wanted" label that is either unassigned,
or looks to be abandoned.
1. Read the description of the issue and make sure you understand it.
1. If it seems promising, poke around the product
(on [chat.zulip.org](https://chat.zulip.org) or in the development
environment) until you know how the piece being
described fits into the bigger picture. If after some exploration the
description seems confusing or ambiguous, post a question on the GitHub
issue, as others may benefit from the clarification as well.
1. When you find an issue you like, try to get started working on it. See if you
can find the part of the code you'll need to modify (`git grep` is your
friend!) and get some idea of how you'll approach the problem.
1. If you feel lost, that's OK! Go through these steps again with another issue.
There's plenty to work on, and the exploration you do will help you learn
more about the project.
An assigned issue can be considered abandoned if:
- There is no recent contributor activity.
- There are no open PRs, or an open PR needs work in order to be ready for
review. For example, a PR may need to be updated to address reviewer feedback
or to pass tests.
Note that you are _not_ claiming an issue while you are iterating through steps
1-4. _Before you claim an issue_, you should be confident that you will be able to
tackle it effectively.
If the lists of issues are overwhelming, you can post in
[#new members](https://chat.zulip.org/#narrow/stream/95-new-members) with a
bit about your background and interests, and we'll help you out. The most
important thing to say is whether you're looking for a backend (Python),
frontend (JavaScript and TypeScript), mobile (React Native), desktop (Electron),
documentation (English) or visual design (JavaScript/TypeScript + CSS) issue, and a
bit about your programming experience and available time.
Additional tips for the [main server and web app
repository](https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22):
- We especially recommend browsing recently opened issues, as there are more
likely to be easy ones for you to find.
- Take a look at issues with the ["good first issue"
label](https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22),
as they are especially accessible to new contributors. However, you will
likely find issues without this label that are accessible as well.
- All issues are partitioned into areas like
admin, compose, emoji, hotkeys, i18n, onboarding, search, etc. Look
through our [list of labels](https://github.com/zulip/zulip/labels), and
click on some of the `area:` labels to see all the issues related to your
areas of interest.
### Claiming an issue
#### In the main server/web app repository and Zulip Terminal repository

The Zulip server/web app repository
([`zulip/zulip`](https://github.com/zulip/zulip/)) and the Zulip Terminal
repository ([`zulip/zulip-terminal`](https://github.com/zulip/zulip-terminal/))
are set up with a GitHub workflow bot called
[Zulipbot](https://github.com/zulip/zulipbot), which manages issues and pull
requests in order to create a better workflow for Zulip contributors.

To claim an issue in these repositories, simply post a comment that says
`@zulipbot claim` to the issue thread. If the issue is [tagged with a help
wanted label and is not assigned to someone
else](https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22+no%3Aassignee),
Zulipbot will immediately assign the issue to you.

Note that new contributors can only claim one issue until their first pull request is
merged. This is to encourage folks to finish ongoing work before starting
something new. If you would like to pick up a new issue while waiting for review
on an almost-ready pull request, you can post a comment to this effect on the
issue you're interested in.
#### In other Zulip repositories
There is no bot for other Zulip repositories
([`zulip/zulip-flutter`](https://github.com/zulip/zulip-flutter/), etc.). If
you are interested in claiming an issue in one of these repositories, simply
post a comment on the issue thread saying that you've started work on the
issue and would like to claim it. In your comment, describe what part of the
code you're modifying and how you plan to approach the problem, based on
what you learned in steps 1-4 [above](#finding-an-issue-to-work-on).
There is no need to @-mention the issue creator in your comment. There is
also no need to post the same information in multiple places, for example in
a chat thread in addition to the GitHub issue.
Please follow the same guidelines as described above: find an issue labeled
"help wanted", and only pick up one issue at a time to start with.
## Getting help
You may have questions as you work on your pull request. For example, you might
not be sure about some details of what's required, or have questions about your
implementation approach. Zulip's maintainers are happy to answer thoughtfully
posed questions, and discuss any difficulties that might arise as you work on
your PR.
If you haven't done so yet, now is the time to join the [Zulip development
community](https://zulip.com/development-community/). If you'd like, introduce
yourself in the [#new
members](https://chat.zulip.org/#narrow/channel/95-new-members) channel, using
your name as the [topic](https://zulip.com/help/introduction-to-topics).
To get early feedback on any UI changes, we encourage you to post screenshots of
your work in the [#design
stream](https://chat.zulip.org/#narrow/stream/101-design) in the [Zulip
development community](https://zulip.com/development-community/).
You can get help in public channels in the community:
1. **Review** the [Zulip development community
guidelines](https://zulip.com/development-community/#community-norms).
1. **Decide where to post.** If there is a discussion thread linked from the
issue you're working on, that's usually the best place to post any
clarification questions about the issue. Otherwise, follow [these
guidelines](https://zulip.com/development-community/#where-do-i-send-my-message)
to figure out where to post your question. Don't stress too much about
picking the right place if you're not sure, as moderators can [move your
question thread to a different
channel](https://zulip.com/help/move-content-to-another-channel) if needed.
1. **Write** up your question, being sure to follow our [guide on asking great
questions](https://zulip.readthedocs.io/en/latest/contributing/asking-great-questions.html).
The guide explains what you need to do to make sure that folks will be able to
help you out, and that you're making good use of maintainers' limited time.
1. **Review** your message before you send it. Will your question make sense to
someone who is familiar with Zulip, but might not have the details of what
you are working on fresh in mind?
Well-posed questions will generally get a response within 1-2 business days.
There is no need to @-mention anyone when you ask a question, as maintainers
keep a close eye on all the ongoing discussions.
## What makes a great Zulip contributor?
As you're working on your first code contribution, here are some best practices
to keep in mind.
- [Asking great questions][great-questions]. It's very hard to answer a general
question like, "How do I do this issue?" When asking for help, explain your
current understanding, including what you've done or tried so far and where
you got stuck. Post tracebacks or other error messages if appropriate. For
more advice, check out [our guide][great-questions]!
- Learning and practicing
[Git commit discipline](https://zulip.readthedocs.io/en/latest/contributing/commit-discipline.html).
- Submitting carefully tested code. See our [detailed guide on how to review
code](https://zulip.readthedocs.io/en/latest/contributing/code-reviewing.html#how-to-review-code)
(yours or someone else's).
- Posting
[screenshots or GIFs](https://zulip.readthedocs.io/en/latest/tutorials/screenshot-and-gif-software.html)
for frontend changes.
- Working to [make your pull requests easy to
review](https://zulip.readthedocs.io/en/latest/contributing/reviewable-prs.html).
- Clearly describing what you have implemented and why. For example, if your
implementation differs from the issue description in some way or is a partial
step towards the requirements described in the issue, be sure to call
out those differences.
- Being responsive to feedback on pull requests. This means incorporating or
responding to all suggested changes, and leaving a note if you won't be
able to address things within a few days.
- Being helpful and friendly on the [Zulip community
server](https://zulip.com/development-community/).
These are also the main criteria we use to select candidates for all
of our outreach programs.

[great-questions]: https://zulip.readthedocs.io/en/latest/contributing/asking-great-questions.html

## Submitting a pull request

When you believe your code is ready, follow the [guide on how to review
code](https://zulip.readthedocs.io/en/latest/contributing/code-reviewing.html#how-to-review-code)
to review your own work. You can often find things you missed by taking a step
back to look over your work before asking others to do so. Catching mistakes
yourself will help your PRs be merged faster, and folks will appreciate the
quality and professionalism of your work.

Then, submit your changes. Carefully reading our [Git guide][git-guide], and in
particular the section on [making a pull request][git-guide-make-pr],
will help avoid many common mistakes.

Once you are satisfied with the quality of your PR, follow the
[guidelines on asking for a code
review](https://zulip.readthedocs.io/en/latest/contributing/code-reviewing.html#asking-for-a-code-review)
to request a review. If you are not sure what's best, simply post a
comment on the main GitHub thread for your PR clearly indicating that
it is ready for review, and the project maintainers will take a look
and follow up with next steps.

If it helps your workflow, you can submit a work-in-progress pull
request before your work is ready for review. Simply prefix the title
of work-in-progress pull requests with `[WIP]`, and then remove the
prefix when you think it's time for someone else to review your work.

See the [guide on submitting a pull
request](https://zulip.readthedocs.io/en/latest/contributing/reviewable-prs.html)
for detailed instructions on how to present your proposed changes to Zulip.

[git-guide]: https://zulip.readthedocs.io/en/latest/git/
[git-guide-make-pr]: https://zulip.readthedocs.io/en/latest/git/pull-requests.html

### Stages of a pull request

Your pull request will likely go through several stages of review.
1. If your PR makes user-facing changes, the UI and user experience may be
reviewed early on, without reference to the code. You will get feedback on
any user-facing bugs in the implementation. To minimize the number of review
round-trips, make sure to [thoroughly
test](https://zulip.readthedocs.io/en/latest/contributing/code-reviewing.html#manual-testing)
your own PR prior to asking for review.
2. There may be choices made in the implementation that the reviewer
will ask you to revisit. This process will go more smoothly if you
specifically call attention to the decisions you made while
drafting the PR and any points about which you are uncertain. The
PR description and comments on your own PR are good ways to do this.
3. Oftentimes, seeing an initial implementation will make it clear that the
product design for a feature needs to be revised, or that additional changes
are needed. The reviewer may therefore ask you to amend or change the
implementation. Some changes may be blockers for getting the PR merged, while
others may be improvements that can happen afterwards. Feel free to ask if
it's unclear which type of feedback you're getting. (Follow-ups can be a
great next issue to work on!)
4. In addition to any UI/user experience review, all PRs will go through one or
more rounds of code review. Your code may initially be [reviewed by other
contributors](https://zulip.readthedocs.io/en/latest/contributing/code-reviewing.html).
This helps us make good use of project maintainers' time, and helps you make
progress on the PR by getting more frequent feedback. A project maintainer
may leave a comment asking someone with expertise in the area you're working
on to review your work.
5. Final code review and integration for server and web app PRs is generally done
by `@timabbott`.
#### How to help move the review process forward
The key to keeping your PR moving through the review process is to:
- Address _all_ the feedback to the best of your ability.
- Make it clear when the requested changes have been made
and you believe it's time for another look.
- Make it as easy as possible to review the changes you made.
In order to do this, when you believe you have addressed the previous round of
feedback on your PR as best you can, post a comment asking reviewers to take
another look. Your comment should make it easy to understand what has been done
and what remains by:
- Summarizing the changes made since the last review you received.
- Highlighting remaining questions or decisions, with links to any relevant
chat.zulip.org threads.
- Providing updated screenshots and information on manual testing if
appropriate.
The easier it is to review your work, the more likely you are to receive quick
feedback.
It's OK if your first issue takes you a while; that's normal! You'll be able to
work a lot faster as you build experience.

## Beyond the first issue
To find a second issue to work on, we recommend looking through issues with the same
`area:` label as the last issue you resolved. You'll be able to reuse the
work you did learning how that part of the codebase works. Also, the path to
becoming a core developer often involves taking ownership of one of these area
labels.
## Common questions
- **What if somebody is already working on the issue I want to claim?** There
are lots of issues to work on (likely
[hundreds](https://github.com/zulip/zulip/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22help%20wanted%22%20no%3Aassignee)
in the server repository)! If somebody else is actively working on the issue,
you can find a different one, or help with reviewing their work.
- **What if it looks like the person who's assigned an issue is no longer
working on it?** Post a comment on the issue, e.g., "Hi @ someone! Are you
still working on this one? I'd like to pick it up if not." You can pick up the
issue if they say they don't plan to work on it more.
- **What if I don't get a response?** If you don't get a reply within 2-3
days, go ahead and post a comment that you are working on the issue, and
submit a pull request. If the original assignee ends up submitting a pull
request first, no worries! You can help by providing feedback on their work,
or submit your own PR if you think a different approach is needed (as
described above).
- **What if there is already a pull request for the issue I want to work on?**
See our [guide on continuing unfinished
work](https://zulip.readthedocs.io/en/latest/contributing/continuing-unfinished-work.html).
- **What if somebody else claims an issue while I'm figuring out whether or not to
work on it?** No worries! You can contribute by providing feedback on
their pull request. If you've made good progress in understanding part of the
codebase, you can also find another "help wanted" issue in the same area to
work on.
- **Can I work on an old issue?** Of course! Open issues marked as “help wanted”
are generally eligible to be worked on. If you find that the context around
the issue has changed (e.g., the UI looks different), do your best to apply
the current patterns, and comment on any differences from the spec in your PR
description.
If picking up a bug, start by checking if you can replicate it. If it no longer
replicates, post a comment on the issue explaining how you tested the
behavior, and what you saw, with screenshots as appropriate. And if you _can_
replicate it, fixing it is great!
If you're starting a major project where the issue was filed more than a
couple of years ago, it's a good idea to post to the development community
discussion thread for that issue to check if the thinking around it has
changed.
- **Can I come up with my own feature idea and work on it?** We welcome
suggestions of features or other improvements that you feel would be valuable. If you
have a new feature you'd like to add, you can start a conversation [in our
development community](https://zulip.com/development-community/#where-do-i-send-my-message)
explaining the feature idea and the problem that you're hoping to solve.
- **I'm waiting for the next round of review on my PR. Can I pick up
another issue in the meantime?** Someone's first Zulip PR often
requires quite a bit of iteration, so please [make sure your pull
request is reviewable][reviewable-pull-requests] and go through at
least one round of feedback from others before picking up a second
issue. After that, sure! If
[Zulipbot](https://github.com/zulip/zulipbot) does not allow you to
claim an issue, you can post a comment describing the status of your
other work on the issue you're interested in (including links to all open
PRs), and asking for the issue to be assigned to you. Note that addressing
feedback on in-progress PRs should always take priority over starting a new
PR.
- **I think my PR is done, but it hasn't been merged yet. What's going on?**
1. **Double-check that you have addressed all the feedback**, including any comments
on [Git commit
discipline](https://zulip.readthedocs.io/en/latest/contributing/commit-discipline.html),
and that automated tests are passing.
2. If all the feedback has been addressed, did you [leave a
comment](https://zulip.readthedocs.io/en/latest/contributing/review-process.html#how-to-help-move-the-review-process-forward)
explaining that you have done so and **requesting another review**? If not,
it may not be clear to project maintainers or reviewers that your PR is
ready for another look.
Also note that it can
occasionally take a few weeks for a PR in the final stages of the review
process to be merged.
[reviewable-pull-requests]: https://zulip.readthedocs.io/en/latest/contributing/reviewable-prs.html
## Reporting issues
If you find an easily reproducible bug and/or are experienced in reporting
bugs, feel free to just open an issue on the relevant project on GitHub.
If you have a feature request or are not yet sure what the underlying bug
is, the best place to post issues is
[#issues](https://chat.zulip.org/#narrow/stream/9-issues) (or
[#mobile](https://chat.zulip.org/#narrow/stream/48-mobile) or
[#desktop](https://chat.zulip.org/#narrow/stream/16-desktop)) on the
[Zulip community server](https://zulip.com/development-community/).
This allows us to interactively figure out what is going on, let you know if
a similar issue has already been opened, and collect any other information
we need. Choose a 2-4 word topic that describes the issue, explain the issue
and how to reproduce it if known, your browser/OS if relevant, and a
[screenshot or screenGIF](https://zulip.readthedocs.io/en/latest/tutorials/screenshot-and-gif-software.html)
if appropriate.
**Reporting security issues**. Please do not report security issues
publicly, including on public streams on chat.zulip.org. You can
email [security@zulip.com](mailto:security@zulip.com). We create a CVE for every
security issue in our released software.
## User feedback
Nearly every feature we develop starts with a user request. If you are part
of a group that is either using or considering using Zulip, we would love to
hear about your experience with the product. If you're not sure what to
write, here are some questions we're always very curious to know the answer
to:
- Evaluation: What is the process by which your organization chose or will
choose a group chat product?
- Pros and cons: What are the pros and cons of Zulip for your organization,
and the pros and cons of other products you are evaluating?
- Features: What are the features that are most important for your
organization? In the best-case scenario, what would your chat solution do
for you?
- Onboarding: If you remember it, what was your impression during your first
few minutes of using Zulip? What did you notice, and how did you feel? Was
there anything that stood out to you as confusing, or broken, or great?
- Organization: What does your organization do? How big is the organization?
A link to your organization's website?
You can contact us in the [#feedback stream of the Zulip development
community](https://chat.zulip.org/#narrow/stream/137-feedback) or
by emailing [support@zulip.com](mailto:support@zulip.com).
## Outreach programs
Zulip regularly participates in [Google Summer of Code
(GSoC)](https://developers.google.com/open-source/gsoc/) and
[Outreachy](https://www.outreachy.org/). We have been a GSoC mentoring
organization since 2016, and we accept 15-20 GSoC participants each summer. In
the past, we've also participated in [Google
Code-In](https://developers.google.com/open-source/gci/), and hosted summer
interns from Harvard, MIT, and Stanford.
Check out our [outreach programs
overview](https://zulip.readthedocs.io/en/latest/outreach/overview.html) to learn
more about participating in an outreach program with Zulip. Most of our program
participants end up sticking around the project long-term, and many have become
core team members, maintaining important parts of the project. We hope you
apply!
While each third-party program has its own rules and requirements, the
Zulip community approaches all of these programs with these ideas in
mind:
- We try to make the application process as valuable for the applicant as
possible. Expect high-quality code reviews, a supportive community, and
publicly viewable patches you can link to from your resume, regardless of
whether you are selected.
- To apply, you'll have to submit at least one pull request to a Zulip
repository. Most students accepted to one of our programs have
several merged pull requests (including at least one larger PR) by
the time of the application deadline.
- The main criterion we use is the quality of your best contributions, and
the bullets listed at
[What makes a great Zulip contributor](#what-makes-a-great-zulip-contributor).
Because we focus on evaluating your best work, it doesn't hurt your
application to make mistakes in your first few PRs as long as your
work improves.
### Google Summer of Code
The largest outreach program Zulip participates in is GSoC (14
students in 2017; 11 in 2018; 17 in 2019; 18 in 2020; 18 in 2021). While we
don't control how many slots Google allocates to Zulip, we hope to mentor a
similar number of students in future summers. Check out our [blog
post](https://blog.zulip.com/2021/09/30/google-summer-of-code-2021/) to learn
about the GSoC 2021 experience and our participants' accomplishments.
If you're reading this well before the application deadline and want
to make your application strong, we recommend getting involved in the
community and fixing issues in Zulip now. Having good contributions
and building a reputation for doing good work is the best way to have
a strong application.
Our [GSoC program page][gsoc-guide] has lots more details on how
Zulip does GSoC, as well as project ideas. Note, however, that the project idea
list is maintained only during the GSoC application period, so if
you're looking at some other time of year, the project list is likely
out-of-date.
In some years, we have also run a Zulip Summer of Code (ZSoC)
program for students who we wanted to accept into GSoC but did not have an
official slot for. Student expectations are the
same as with GSoC, and ZSoC has no separate application process; your
GSoC application is your ZSoC application. If we'd like to select you
for ZSoC, we'll contact you when the GSoC results are announced.
[gsoc-guide]: https://zulip.readthedocs.io/en/latest/contributing/gsoc.html
## Stay connected
Even if you are not logging into the development community on a regular basis,
you can still stay connected with the project.
- Follow us [on Twitter](https://twitter.com/zulip).
- Subscribe to [our blog](https://blog.zulip.org/).
- Join or follow the project [on LinkedIn](https://www.linkedin.com/company/zulip-project/).
## Help others find Zulip
Here are some ways you can help others find Zulip:
- Star us on GitHub. There are four main repositories:
[server/web](https://github.com/zulip/zulip),
[mobile](https://github.com/zulip/zulip-mobile),
[desktop](https://github.com/zulip/zulip-desktop), and
[Python API](https://github.com/zulip/python-zulip-api).
- "Like" and retweet [our tweets](https://twitter.com/zulip).
- Upvote and post feedback on Zulip on comparison websites. A couple of
specific ones to highlight:
- [AlternativeTo](https://alternativeto.net/software/zulip-chat-server/). You can also
[upvote Zulip](https://alternativeto.net/software/slack/) on their page
for Slack.
- [Add Zulip to your stack](https://stackshare.io/zulip) on StackShare, star
it, and upvote the reasons why people like Zulip that you find most
compelling.
View File
# This is a multiarch Dockerfile. See https://docs.docker.com/desktop/multi-arch/
#
# To set up the first time:
# docker buildx create --name multiarch --use
#
# To build:
# docker buildx build --platform linux/amd64,linux/arm64 \
# -f ./Dockerfile-postgresql -t zulip/zulip-postgresql:14 --push .
# Currently the PostgreSQL images do not support automatic upgrading of
# the on-disk data in volumes. So the base image cannot currently be upgraded
# without users needing a manual pgdump and restore.
# https://hub.docker.com/r/groonga/pgroonga/tags
ARG PGROONGA_VERSION=latest
ARG POSTGRESQL_VERSION=14
FROM groonga/pgroonga:$PGROONGA_VERSION-alpine-$POSTGRESQL_VERSION-slim
# Install hunspell, Zulip stop words, and run Zulip database
# init.
RUN apk add -U --no-cache hunspell-en
RUN ln -sf /usr/share/hunspell/en_US.dic /usr/local/share/postgresql/tsearch_data/en_us.dict && ln -sf /usr/share/hunspell/en_US.aff /usr/local/share/postgresql/tsearch_data/en_us.affix
COPY puppet/zulip/files/postgresql/zulip_english.stop /usr/local/share/postgresql/tsearch_data/zulip_english.stop
COPY scripts/setup/create-db.sql /docker-entrypoint-initdb.d/zulip-create-db.sql
COPY scripts/setup/create-pgroonga.sql /docker-entrypoint-initdb.d/zulip-create-pgroonga.sql
View File
Zulip is the only [modern team chat app][features] that is
designed for both live and asynchronous conversations.
Zulip is built by a distributed community of developers from all around the
world, with 97+ people who have each contributed 100+ commits. With
over 1,500 contributors merging over 500 commits a month, Zulip is the
largest and fastest growing open source team chat project.
Come find us on the [development community chat](https://zulip.com/development-community/)!
[![GitHub Actions build status](https://github.com/zulip/zulip/actions/workflows/zulip-ci.yml/badge.svg)](https://github.com/zulip/zulip/actions/workflows/zulip-ci.yml?query=branch%3Amain)
[![coverage status](https://img.shields.io/codecov/c/github/zulip/zulip/main.svg)](https://codecov.io/gh/zulip/zulip)
[![Mypy coverage](https://img.shields.io/badge/mypy-100%25-green.svg)][mypy-coverage]
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![GitHub release](https://img.shields.io/github/release/zulip/zulip.svg)](https://github.com/zulip/zulip/releases/latest)
[![docs](https://readthedocs.org/projects/zulip/badge/?version=latest)](https://zulip.readthedocs.io/en/latest/)
## Getting started
- **Contributing code**. Check out our [guide for new
contributors](https://zulip.readthedocs.io/en/latest/contributing/contributing.html)
to get started. We have invested in making Zulip's code highly
readable, thoughtfully tested, and easy to modify. Beyond that, we
have written an extraordinary 150K words of documentation for Zulip
contributors.
- **Contributing non-code**. [Report an
issue](https://zulip.readthedocs.io/en/latest/contributing/contributing.html#reporting-issues),
[translate](https://zulip.readthedocs.io/en/latest/translating/translating.html)
Zulip into your language, or [give us
feedback](https://zulip.readthedocs.io/en/latest/contributing/suggesting-features.html).
We'd love to hear from you, whether you've been using Zulip for years, or are just
trying it out for the first time.
recommend reading about Zulip's [unique
approach](https://zulip.com/why-zulip/) to organizing conversations.
- **Running a Zulip server**. Self-host Zulip directly on Ubuntu or Debian
Linux, in [Docker](https://github.com/zulip/docker-zulip), or with prebuilt
images for [Digital Ocean](https://marketplace.digitalocean.com/apps/zulip) and
[Render](https://render.com/docs/deploy-zulip).
projects](https://zulip.com/for/open-source/).
- **Participating in [outreach
programs](https://zulip.readthedocs.io/en/latest/contributing/contributing.html#outreach-programs)**
like [Google Summer of Code](https://developers.google.com/open-source/gsoc/)
and [Outreachy](https://www.outreachy.org/).
- **Supporting Zulip**. Advocate for your organization to use Zulip, become a
[sponsor](https://github.com/sponsors/zulip), write a review in the mobile app
stores, or [help others find
Zulip](https://zulip.readthedocs.io/en/latest/contributing/contributing.html#help-others-find-zulip).
You may also be interested in reading our [blog](https://blog.zulip.org/), and
following us on [Twitter](https://twitter.com/zulip) and
[LinkedIn](https://www.linkedin.com/company/zulip-project/).
View File
See also our documentation on the [Zulip release
lifecycle][release-lifecycle].
[security-model]: https://zulip.readthedocs.io/en/latest/production/security-model.html
[upgrades]: https://zulip.readthedocs.io/en/stable/production/upgrade.html#upgrading-to-a-release
[release-lifecycle]: https://zulip.readthedocs.io/en/latest/overview/release-lifecycle.html
Vagrantfile
View File
Vagrant.configure("2") do |config|
vm_num_cpus = "2"
vm_memory = "2048"
ubuntu_mirror = ""
vboxadd_version = nil
config.vm.box = "bento/ubuntu-22.04"
config.vm.synced_folder ".", "/vagrant", disabled: true
config.vm.synced_folder ".", "/srv/zulip", docker_consistency: "z"
config.vm.synced_folder ".", "/srv/zulip"
vagrant_config_file = ENV["HOME"] + "/.zulip-vagrant-config"
if File.file?(vagrant_config_file)
when "HOST_IP_ADDR"; host_ip_addr = value
when "GUEST_CPUS"; vm_num_cpus = value
when "GUEST_MEMORY_MB"; vm_memory = value
when "UBUNTU_MIRROR"; ubuntu_mirror = value
when "DEBIAN_MIRROR"; debian_mirror = value
when "VBOXADD_VERSION"; vboxadd_version = value
end
end
config.vm.network "forwarded_port", guest: 9994, host: host_port + 3, host_ip: host_ip_addr
# Specify Docker provider before VirtualBox provider so it's preferred.
config.vm.provider "docker" do |d, override|
override.vm.box = nil
d.build_dir = File.join(__dir__, "tools", "setup", "dev-vagrant-docker")
d.build_args = ["--build-arg", "VAGRANT_UID=#{Process.uid}"]
if !ubuntu_mirror.empty?
d.build_args += ["--build-arg", "UBUNTU_MIRROR=#{ubuntu_mirror}"]
end
d.has_ssh = true
d.create_args = ["--ulimit", "nofile=1024:65536"]
end
config.vm.provider "virtualbox" do |vb, override|
override.vm.box = "bento/debian-10"
# It's possible we can get away with just 1.5GB; more testing needed
vb.memory = vm_memory
vb.cpus = vm_num_cpus
if !vboxadd_version.nil?
override.vbguest.installer = Class.new(VagrantVbguest::Installers::Ubuntu) do
define_method(:host_version) do |reload = false|
VagrantVbguest::Version(vboxadd_version)
end
end
config.vm.provider "hyperv" do |h, override|
h.memory = vm_memory
h.maxmemory = vm_memory
h.cpus = vm_num_cpus
end
config.vm.provider "parallels" do |prl, override|
prl.memory = vm_memory
prl.cpus = vm_num_cpus
end
# We want provision to be run with the permissions of the vagrant user.
privileged: false,
path: "tools/setup/vagrant-provision",
env: { "UBUNTU_MIRROR" => ubuntu_mirror }
env: { "DEBIAN_MIRROR" => debian_mirror }
end
View File
import logging
import time
from collections import OrderedDict, defaultdict
from collections.abc import Callable, Sequence
from datetime import datetime, timedelta, timezone
from typing import TypeAlias, Union
from django.conf import settings
from django.db import connection, models
from django.utils.timezone import now as timezone_now
from psycopg2.sql import SQL, Composable, Identifier, Literal
from typing_extensions import override
from analytics.models import (
    BaseCount,
    FillState,
    InstallationCount,
    RealmCount,
    StreamCount,
    UserCount,
    installation_epoch,
)
from zerver.lib.logging_util import log_to_file
from zerver.lib.timestamp import ceiling_to_day, ceiling_to_hour, floor_to_hour, verify_UTC
from zerver.models import Message, Realm, Stream, UserActivityInterval, UserProfile
from zerver.models.realm_audit_logs import AuditLogEventType
if settings.ZILENCER_ENABLED:
from zilencer.models import (
RemoteInstallationCount,
RemoteRealm,
RemoteRealmCount,
RemoteZulipServer,
)
logger = logging.getLogger("zulip.analytics")
log_to_file(logger, settings.ANALYTICS_LOG_PATH)
# You can't subtract timedelta.max from a datetime, so use this instead
TIMEDELTA_MAX = timedelta(days=365 * 1000)
## Class definitions ##
class CountStat:
    def __init__(
        self,
property: str,
data_collector: "DataCollector",
frequency: str,
interval: timedelta | None = None,
) -> None:
self.property = property
self.data_collector = data_collector
else:
self.interval = self.time_increment
@override
def __repr__(self) -> str:
return f"<CountStat: {self.property}>"
def last_successful_fill(self) -> datetime | None:
fillstate = FillState.objects.filter(property=self.property).first()
if fillstate is None:
return None
return fillstate.end_time
return fillstate.end_time - self.time_increment
def current_month_accumulated_count_for_user(self, user: UserProfile) -> int:
now = timezone_now()
start_of_month = datetime(now.year, now.month, 1, tzinfo=timezone.utc)
if now.month == 12: # nocoverage
start_of_next_month = datetime(now.year + 1, 1, 1, tzinfo=timezone.utc)
else: # nocoverage
start_of_next_month = datetime(now.year, now.month + 1, 1, tzinfo=timezone.utc)
# We just want to check we are not using BaseCount, otherwise all
# `output_table` have `objects` property.
assert self.data_collector.output_table == UserCount
result = self.data_collector.output_table.objects.filter( # type: ignore[attr-defined] # see above
user=user,
property=self.property,
end_time__gt=start_of_month,
end_time__lte=start_of_next_month,
).aggregate(models.Sum("value"))
total_value = result["value__sum"] or 0
return total_value
class LoggingCountStat(CountStat):
def __init__(self, property: str, output_table: type[BaseCount], frequency: str) -> None:
CountStat.__init__(self, property, DataCollector(output_table, None), frequency)
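# For illustration only: a hypothetical logging-based stat (the real
# CountStat definitions are created elsewhere in this module, and the
# name below is made up):
#
#     example_stat = LoggingCountStat("example_events::day", RealmCount, CountStat.DAY)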
class DependentCountStat(CountStat):
    def __init__(
        self,
property: str,
data_collector: "DataCollector",
frequency: str,
interval: timedelta | None = None,
dependencies: Sequence[str] = [],
) -> None:
CountStat.__init__(self, property, data_collector, frequency, interval=interval)
class DataCollector:
def __init__(
self,
output_table: type[BaseCount],
pull_function: Callable[[str, datetime, datetime, Realm | None], int] | None,
) -> None:
self.output_table = output_table
self.pull_function = pull_function
def depends_on_realm(self) -> bool:
return self.output_table in (UserCount, StreamCount)
## CountStat-level operations ##
def process_count_stat(stat: CountStat, fill_to_time: datetime, realm: Realm | None = None) -> None:
# TODO: The realm argument is not yet supported, in that we don't
# have a solution for how to update FillState if it is passed. It
# exists solely as partial plumbing for when we do fully implement
return
fill_to_time = min(fill_to_time, dependency_fill_time)
currently_filled += stat.time_increment
while currently_filled <= fill_to_time:
logger.info("START %s %s", stat.property, currently_filled)
start = time.time()
do_fill_count_stat_at_hour(stat, currently_filled, realm)
do_update_fill_state(fill_state, currently_filled, FillState.DONE)
end = time.time()
currently_filled += stat.time_increment
logger.info("DONE %s (%dms)", stat.property, (end - start) * 1000)
# We assume end_time is valid (e.g. is on a day or hour boundary as appropriate)
# and is time-zone-aware. It is the caller's responsibility to enforce this!
def do_fill_count_stat_at_hour(
stat: CountStat, end_time: datetime, realm: Realm | None = None
) -> None:
start_time = end_time - stat.interval
if not isinstance(stat, LoggingCountStat):
def do_delete_counts_at_hour(stat: CountStat, end_time: datetime) -> None:
if isinstance(stat, LoggingCountStat):
InstallationCount.objects.filter(property=stat.property, end_time=end_time).delete()
if stat.data_collector.depends_on_realm():
RealmCount.objects.filter(property=stat.property, end_time=end_time).delete()
else:
UserCount.objects.filter(property=stat.property, end_time=end_time).delete()
def do_aggregate_to_summary_table(
stat: CountStat, end_time: datetime, realm: Realm | None = None
) -> None:
cursor = connection.cursor()
# Aggregate into RealmCount
output_table = stat.data_collector.output_table
if realm is not None:
realm_clause: Composable = SQL("AND zerver_realm.id = {}").format(Literal(realm.id))
else:
realm_clause = SQL("")
if stat.data_collector.depends_on_realm():
realmcount_query = SQL(
"""
INSERT INTO analytics_realmcount
## Utility functions called from outside counts.py ##
# called from zerver.actions; should not throw any errors
def do_increment_logging_stat(
model_object_for_bucket: Union[Realm, UserProfile, Stream, "RemoteRealm", "RemoteZulipServer"],
stat: CountStat,
subgroup: str | int | bool | None,
event_time: datetime,
increment: int = 1,
) -> None:
return
table = stat.data_collector.output_table
id_args: dict[str, int | None] = {}
conflict_args: list[str] = []
if table == RealmCount:
assert isinstance(model_object_for_bucket, Realm)
id_args = {"realm_id": model_object_for_bucket.id}
conflict_args = ["realm_id"]
elif table == UserCount:
assert isinstance(model_object_for_bucket, UserProfile)
id_args = {
"realm_id": model_object_for_bucket.realm_id,
"user_id": model_object_for_bucket.id,
}
conflict_args = ["user_id"]
elif table == StreamCount:
assert isinstance(model_object_for_bucket, Stream)
id_args = {
"realm_id": model_object_for_bucket.realm_id,
"stream_id": model_object_for_bucket.id,
}
conflict_args = ["stream_id"]
elif table == RemoteInstallationCount:
assert isinstance(model_object_for_bucket, RemoteZulipServer)
id_args = {"server_id": model_object_for_bucket.id, "remote_id": None}
conflict_args = ["server_id"]
elif table == RemoteRealmCount:
assert isinstance(model_object_for_bucket, RemoteRealm)
# For RemoteRealmCount (e.g. `mobile_pushes_forwarded::day`),
# we have no `remote_id` nor `realm_id`, since they are not
# imported from the remote server, which is the source of
# truth of those two columns. Their "ON CONFLICT" is thus the
# only unique key we have, which is `remote_realm_id`, and not
# `server_id` / `realm_id`.
id_args = {
"server_id": model_object_for_bucket.server_id,
"remote_realm_id": model_object_for_bucket.id,
"remote_id": None,
"realm_id": None,
}
conflict_args = [
"remote_realm_id",
]
else:
raise AssertionError("Unsupported CountStat output_table")
if stat.frequency == CountStat.DAY:
end_time = ceiling_to_day(event_time)
elif stat.frequency == CountStat.HOUR:
end_time = ceiling_to_hour(event_time)
else:
raise AssertionError("Unsupported CountStat frequency")
is_subgroup: SQL = SQL("NULL")
if subgroup is not None:
is_subgroup = SQL("NOT NULL")
# For backwards consistency, we cast the subgroup to a string
# in Python; this emulates the behaviour of `get_or_create`,
# which was previously used in this function, and performed
# this cast because the `subgroup` column is defined as a
# `CharField`. Omitting this explicit cast causes a subgroup
# of the boolean False to be passed as the PostgreSQL false,
# which it stringifies as the lower-case `'false'`, not the
# initial-case `'False'` if Python stringifies it.
#
# Other parts of the system (e.g. count_message_by_user_query)
# already use PostgreSQL to cast bools to strings, resulting
# in `subgroup` values of lower-case `'false'` -- for example
# in `messages_sent:is_bot:hour`. Fixing this inconsistency
# via a migration is complicated by these records being
# exchanged over the wire from remote servers.
subgroup = str(subgroup)
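        # For illustration (hypothetical values): str(False) yields "False",
        # while letting PostgreSQL stringify the boolean would yield 'false',
        # silently splitting the same data across two subgroup values.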
conflict_args.append("subgroup")
id_column_names = SQL(", ").join(map(Identifier, id_args.keys()))
id_values = SQL(", ").join(map(Literal, id_args.values()))
conflict_columns = SQL(", ").join(map(Identifier, conflict_args))
sql_query = SQL(
"""
INSERT INTO {table_name}(property, subgroup, end_time, value, {id_column_names})
VALUES (%s, %s, %s, %s, {id_values})
ON CONFLICT (property, end_time, {conflict_columns})
WHERE subgroup IS {is_subgroup}
DO UPDATE SET
value = {table_name}.value + EXCLUDED.value
"""
).format(
table_name=Identifier(table._meta.db_table),
id_column_names=id_column_names,
id_values=id_values,
conflict_columns=conflict_columns,
is_subgroup=is_subgroup,
row, created = table.objects.get_or_create(
property=stat.property,
subgroup=subgroup,
end_time=end_time,
defaults={"value": increment},
**id_args,
)
with connection.cursor() as cursor:
cursor.execute(sql_query, [stat.property, subgroup, end_time, increment])
if not created:
row.value = F("value") + increment
row.save(update_fields=["value"])
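# Illustrative sketch of the casing mismatch described in the comment
# above (hypothetical snippet, not part of this diff; assumes a psycopg2
# connection named `conn`): Python's str() yields initial-case booleans,
# while PostgreSQL's text cast yields lower-case ones.
print(str(False))  # -> 'False' (what get_or_create historically stored)
with conn.cursor() as cursor:
    cursor.execute("SELECT %s::text", [False])
    print(cursor.fetchone()[0])  # -> 'false' (PostgreSQL's stringification)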
def do_drop_all_analytics_tables() -> None:
@@ -448,7 +345,7 @@ def do_drop_single_stat(property: str) -> None:
## DataCollector-level operations ##
QueryFn: TypeAlias = Callable[[dict[str, Composable]], Composable]
QueryFn = Callable[[Dict[str, Composable]], Composable]
def do_pull_by_sql_query(
@@ -456,11 +353,11 @@ def do_pull_by_sql_query(
start_time: datetime,
end_time: datetime,
query: QueryFn,
group_by: tuple[type[models.Model], str] | None,
group_by: Optional[Tuple[Type[models.Model], str]],
) -> int:
if group_by is None:
subgroup: Composable = SQL("NULL")
group_by_clause: Composable = SQL("")
group_by_clause = SQL("")
else:
subgroup = Identifier(group_by[0]._meta.db_table, group_by[1])
group_by_clause = SQL(", {}").format(subgroup)
@@ -490,12 +387,12 @@ def do_pull_by_sql_query(
def sql_data_collector(
output_table: type[BaseCount],
output_table: Type[BaseCount],
query: QueryFn,
group_by: tuple[type[models.Model], str] | None,
group_by: Optional[Tuple[Type[models.Model], str]],
) -> DataCollector:
def pull_function(
property: str, start_time: datetime, end_time: datetime, realm: Realm | None = None
property: str, start_time: datetime, end_time: datetime, realm: Optional[Realm] = None
) -> int:
# The pull function type needs to accept a Realm argument
# because the 'minutes_active::day' CountStat uses
@@ -508,42 +405,8 @@ def sql_data_collector(
return DataCollector(output_table, pull_function)
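# Illustrative example of a QueryFn (hypothetical, not from the Zulip
# codebase): a function from a dict of SQL fragments to a composed
# query. do_pull_by_sql_query fills in fragments such as {subgroup} and
# {group_by_clause}, while %(property)s and %(time_end)s remain ordinary
# query parameters.
from psycopg2.sql import SQL, Composable

def example_query_fn(kwargs: dict[str, Composable]) -> Composable:
    return SQL(
        """
        SELECT zerver_userprofile.realm_id, count(*), %(property)s, {subgroup}, %(time_end)s
        FROM zerver_userprofile
        GROUP BY zerver_userprofile.realm_id {group_by_clause}
        """
    ).format(**kwargs)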
def count_upload_space_used_by_realm_query(realm: Realm | None) -> QueryFn:
if realm is None:
realm_clause: Composable = SQL("")
else:
realm_clause = SQL("zerver_attachment.realm_id = {} AND").format(Literal(realm.id))
# Note: This query currently has to go through the entire table,
# summing all the sizes of attachments for every realm. This can be improved
# by having a query which looks at the latest CountStat for each realm,
# and sums it with only the new attachments.
# There'd be additional complexity added by the fact that attachments
# can also be deleted. This can be partially accounted for by
# subtracting ArchivedAttachment sizes, but there's still the issue of
# attachments which can be deleted directly via the API.
return lambda kwargs: SQL(
"""
INSERT INTO analytics_realmcount (realm_id, property, end_time, value)
SELECT
zerver_attachment.realm_id,
%(property)s,
%(time_end)s,
COALESCE(SUM(zerver_attachment.size), 0)
FROM
zerver_attachment
WHERE
{realm_clause}
zerver_attachment.create_time < %(time_end)s
GROUP BY
zerver_attachment.realm_id
"""
).format(**kwargs, realm_clause=realm_clause)
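# Hypothetical sketch of the incremental approach described in the
# comment above (not implemented in either version of this diff): start
# from the previous fill's RealmCount row and add only attachments
# created since then. It inherits the caveat about deletions noted
# above, and assumes the previous fill's row exists.
INCREMENTAL_UPLOAD_SPACE_QUERY = """
INSERT INTO analytics_realmcount (realm_id, property, end_time, value)
SELECT
    prev.realm_id,
    %(property)s,
    %(time_end)s,
    prev.value + COALESCE(SUM(zerver_attachment.size), 0)
FROM analytics_realmcount prev
LEFT JOIN zerver_attachment
    ON zerver_attachment.realm_id = prev.realm_id
    AND zerver_attachment.create_time >= prev.end_time
    AND zerver_attachment.create_time < %(time_end)s
WHERE
    prev.property = %(property)s
    AND prev.end_time = %(time_start)s
GROUP BY prev.realm_id, prev.value
"""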
def do_pull_minutes_active(
property: str, start_time: datetime, end_time: datetime, realm: Realm | None = None
property: str, start_time: datetime, end_time: datetime, realm: Optional[Realm] = None
) -> int:
user_activity_intervals = (
UserActivityInterval.objects.filter(
@@ -556,7 +419,7 @@ def do_pull_minutes_active(
.values_list("user_profile_id", "user_profile__realm_id", "start", "end")
)
seconds_active: dict[tuple[int, int], float] = defaultdict(float)
seconds_active: Dict[Tuple[int, int], float] = defaultdict(float)
for user_id, realm_id, interval_start, interval_end in user_activity_intervals:
if realm is None or realm.id == realm_id:
start = max(start_time, interval_start)
@@ -578,17 +441,11 @@ def do_pull_minutes_active(
return len(rows)
def count_message_by_user_query(realm: Realm | None) -> QueryFn:
def count_message_by_user_query(realm: Optional[Realm]) -> QueryFn:
if realm is None:
realm_clause: Composable = SQL("")
realm_clause = SQL("")
else:
# We limit both userprofile and message so that we only see
# users from this realm, but also get the performance speedup
# of limiting messages by realm.
realm_clause = SQL(
"zerver_userprofile.realm_id = {} AND zerver_message.realm_id = {} AND"
).format(Literal(realm.id), Literal(realm.id))
# Uses index: zerver_message_realm_date_sent (or the only-date index)
realm_clause = SQL("zerver_userprofile.realm_id = {} AND").format(Literal(realm.id))
return lambda kwargs: SQL(
"""
INSERT INTO analytics_usercount
@@ -611,17 +468,11 @@ def count_message_by_user_query(realm: Realm | None) -> QueryFn:
# Note: ignores the group_by / group_by_clause.
def count_message_type_by_user_query(realm: Realm | None) -> QueryFn:
def count_message_type_by_user_query(realm: Optional[Realm]) -> QueryFn:
if realm is None:
realm_clause: Composable = SQL("")
realm_clause = SQL("")
else:
# We limit both userprofile and message so that we only see
# users from this realm, but also get the performance speedup
# of limiting messages by realm.
realm_clause = SQL(
"zerver_userprofile.realm_id = {} AND zerver_message.realm_id = {} AND"
).format(Literal(realm.id), Literal(realm.id))
# Uses index: zerver_message_realm_date_sent (or the only-date index)
realm_clause = SQL("zerver_userprofile.realm_id = {} AND").format(Literal(realm.id))
return lambda kwargs: SQL(
"""
INSERT INTO analytics_usercount
@@ -631,9 +482,9 @@ def count_message_type_by_user_query(realm: Realm | None) -> QueryFn:
(
SELECT zerver_userprofile.realm_id, zerver_userprofile.id, count(*),
CASE WHEN
zerver_recipient.type = 1 OR (zerver_recipient.type = 3 AND zerver_huddle.group_size <= 2) THEN 'private_message'
zerver_recipient.type = 1 THEN 'private_message'
WHEN
zerver_recipient.type = 3 AND zerver_huddle.group_size > 2 THEN 'huddle_message'
zerver_recipient.type = 3 THEN 'huddle_message'
WHEN
zerver_stream.invite_only = TRUE THEN 'private_stream'
ELSE 'public_stream'
@@ -650,15 +501,12 @@ def count_message_type_by_user_query(realm: Realm | None) -> QueryFn:
JOIN zerver_recipient
ON
zerver_message.recipient_id = zerver_recipient.id
LEFT JOIN zerver_huddle
ON
zerver_recipient.type_id = zerver_huddle.id
LEFT JOIN zerver_stream
ON
zerver_recipient.type_id = zerver_stream.id
GROUP BY
zerver_userprofile.realm_id, zerver_userprofile.id,
zerver_recipient.type, zerver_stream.invite_only, zerver_huddle.group_size
zerver_recipient.type, zerver_stream.invite_only
) AS subquery
GROUP BY realm_id, id, message_type
"""
@@ -669,14 +517,11 @@ def count_message_type_by_user_query(realm: Realm | None) -> QueryFn:
# use this also subgroup on UserProfile.is_bot. If in the future there is a
# stat that counts messages by stream and doesn't need the UserProfile
# table, consider writing a new query for efficiency.
def count_message_by_stream_query(realm: Realm | None) -> QueryFn:
def count_message_by_stream_query(realm: Optional[Realm]) -> QueryFn:
if realm is None:
realm_clause: Composable = SQL("")
realm_clause = SQL("")
else:
realm_clause = SQL(
"zerver_stream.realm_id = {} AND zerver_message.realm_id = {} AND"
).format(Literal(realm.id), Literal(realm.id))
# Uses index: zerver_message_realm_date_sent (or the only-date index)
realm_clause = SQL("zerver_stream.realm_id = {} AND").format(Literal(realm.id))
return lambda kwargs: SQL(
"""
INSERT INTO analytics_streamcount
@@ -704,53 +549,81 @@ def count_message_by_stream_query(realm: Realm | None) -> QueryFn:
).format(**kwargs, realm_clause=realm_clause)
# Hardcodes the query needed for active_users_audit:is_bot:day.
# Assumes that a user cannot have two RealmAuditLog entries with the
# same event_time and event_type in [AuditLogEventType.USER_CREATED,
# USER_DEACTIVATED, etc]. In particular, it's important to ensure
# that migrations don't cause that to happen.
def check_realmauditlog_by_user_query(realm: Realm | None) -> QueryFn:
# Hardcodes the query needed by active_users:is_bot:day, since that is
# currently the only stat that uses this.
def count_user_by_realm_query(realm: Optional[Realm]) -> QueryFn:
if realm is None:
realm_clause: Composable = SQL("")
realm_clause = SQL("")
else:
realm_clause = SQL("realm_id = {} AND").format(Literal(realm.id))
realm_clause = SQL("zerver_userprofile.realm_id = {} AND").format(Literal(realm.id))
return lambda kwargs: SQL(
"""
INSERT INTO analytics_realmcount
(realm_id, value, property, subgroup, end_time)
SELECT
zerver_userprofile.realm_id, count(*), %(property)s, {subgroup}, %(time_end)s
FROM zerver_userprofile
JOIN (
SELECT DISTINCT ON (modified_user_id)
modified_user_id, event_type
FROM
zerver_realmauditlog
WHERE
event_type IN ({user_created}, {user_activated}, {user_deactivated}, {user_reactivated}) AND
{realm_clause}
event_time < %(time_end)s
ORDER BY
modified_user_id,
event_time DESC
) last_user_event ON last_user_event.modified_user_id = zerver_userprofile.id
zerver_realm.id, count(*), %(property)s, {subgroup}, %(time_end)s
FROM zerver_realm
JOIN zerver_userprofile
ON
zerver_realm.id = zerver_userprofile.realm_id
WHERE
last_user_event.event_type in ({user_created}, {user_activated}, {user_reactivated})
GROUP BY zerver_userprofile.realm_id {group_by_clause}
zerver_realm.date_created < %(time_end)s AND
zerver_userprofile.date_joined >= %(time_start)s AND
zerver_userprofile.date_joined < %(time_end)s AND
{realm_clause}
zerver_userprofile.is_active = TRUE
GROUP BY zerver_realm.id {group_by_clause}
"""
).format(**kwargs, realm_clause=realm_clause)
# Currently hardcodes the query needed for active_users_audit:is_bot:day.
# Assumes that a user cannot have two RealmAuditLog entries with the same event_time and
# event_type in [RealmAuditLog.USER_CREATED, USER_DEACTIVATED, etc].
# In particular, it's important to ensure that migrations don't cause that to happen.
def check_realmauditlog_by_user_query(realm: Optional[Realm]) -> QueryFn:
if realm is None:
realm_clause = SQL("")
else:
realm_clause = SQL("realm_id = {} AND").format(Literal(realm.id))
return lambda kwargs: SQL(
"""
INSERT INTO analytics_usercount
(user_id, realm_id, value, property, subgroup, end_time)
SELECT
ral1.modified_user_id, ral1.realm_id, 1, %(property)s, {subgroup}, %(time_end)s
FROM zerver_realmauditlog ral1
JOIN (
SELECT modified_user_id, max(event_time) AS max_event_time
FROM zerver_realmauditlog
WHERE
event_type in ({user_created}, {user_activated}, {user_deactivated}, {user_reactivated}) AND
{realm_clause}
event_time < %(time_end)s
GROUP BY modified_user_id
) ral2
ON
ral1.event_time = max_event_time AND
ral1.modified_user_id = ral2.modified_user_id
JOIN zerver_userprofile
ON
ral1.modified_user_id = zerver_userprofile.id
WHERE
ral1.event_type in ({user_created}, {user_activated}, {user_reactivated})
"""
).format(
**kwargs,
user_created=Literal(AuditLogEventType.USER_CREATED),
user_activated=Literal(AuditLogEventType.USER_ACTIVATED),
user_deactivated=Literal(AuditLogEventType.USER_DEACTIVATED),
user_reactivated=Literal(AuditLogEventType.USER_REACTIVATED),
user_created=Literal(RealmAuditLog.USER_CREATED),
user_activated=Literal(RealmAuditLog.USER_ACTIVATED),
user_deactivated=Literal(RealmAuditLog.USER_DEACTIVATED),
user_reactivated=Literal(RealmAuditLog.USER_REACTIVATED),
realm_clause=realm_clause,
)
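# Illustration of the PostgreSQL DISTINCT ON pattern the newer query
# relies on (hypothetical standalone snippet): for each
# modified_user_id, only the first row in the ORDER BY (i.e., the most
# recent event) survives. This is why the comment above insists a user
# cannot have two audit-log rows with the same event_time and a
# relevant event_type: ties would make the surviving row
# nondeterministic.
LAST_EVENT_PER_USER = """
SELECT DISTINCT ON (modified_user_id)
    modified_user_id, event_type
FROM zerver_realmauditlog
ORDER BY modified_user_id, event_time DESC
"""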
def check_useractivityinterval_by_user_query(realm: Realm | None) -> QueryFn:
def check_useractivityinterval_by_user_query(realm: Optional[Realm]) -> QueryFn:
if realm is None:
realm_clause: Composable = SQL("")
realm_clause = SQL("")
else:
realm_clause = SQL("zerver_userprofile.realm_id = {} AND").format(Literal(realm.id))
return lambda kwargs: SQL(
@@ -772,9 +645,9 @@ def check_useractivityinterval_by_user_query(realm: Realm | None) -> QueryFn:
).format(**kwargs, realm_clause=realm_clause)
def count_realm_active_humans_query(realm: Realm | None) -> QueryFn:
def count_realm_active_humans_query(realm: Optional[Realm]) -> QueryFn:
if realm is None:
realm_clause: Composable = SQL("")
realm_clause = SQL("")
else:
realm_clause = SQL("realm_id = {} AND").format(Literal(realm.id))
return lambda kwargs: SQL(
@@ -782,46 +655,29 @@ def count_realm_active_humans_query(realm: Realm | None) -> QueryFn:
INSERT INTO analytics_realmcount
(realm_id, value, property, subgroup, end_time)
SELECT
active_usercount.realm_id, count(*), %(property)s, NULL, %(time_end)s
usercount1.realm_id, count(*), %(property)s, NULL, %(time_end)s
FROM (
SELECT
realm_id,
user_id
FROM
analytics_usercount
WHERE
property = '15day_actives::day'
{realm_clause}
AND end_time = %(time_end)s
) active_usercount
JOIN zerver_userprofile ON active_usercount.user_id = zerver_userprofile.id
AND active_usercount.realm_id = zerver_userprofile.realm_id
SELECT realm_id, user_id
FROM analytics_usercount
WHERE
property = 'active_users_audit:is_bot:day' AND
subgroup = 'false' AND
{realm_clause}
end_time = %(time_end)s
) usercount1
JOIN (
SELECT DISTINCT ON (modified_user_id)
modified_user_id, event_type
FROM
zerver_realmauditlog
WHERE
event_type IN ({user_created}, {user_activated}, {user_deactivated}, {user_reactivated})
AND event_time < %(time_end)s
ORDER BY
modified_user_id,
event_time DESC
) last_user_event ON last_user_event.modified_user_id = active_usercount.user_id
WHERE
NOT zerver_userprofile.is_bot
AND event_type IN ({user_created}, {user_activated}, {user_reactivated})
GROUP BY
active_usercount.realm_id
SELECT realm_id, user_id
FROM analytics_usercount
WHERE
property = '15day_actives::day' AND
{realm_clause}
end_time = %(time_end)s
) usercount2
ON
usercount1.user_id = usercount2.user_id
GROUP BY usercount1.realm_id
"""
).format(
**kwargs,
user_created=Literal(AuditLogEventType.USER_CREATED),
user_activated=Literal(AuditLogEventType.USER_ACTIVATED),
user_deactivated=Literal(AuditLogEventType.USER_DEACTIVATED),
user_reactivated=Literal(AuditLogEventType.USER_REACTIVATED),
realm_clause=realm_clause,
)
).format(**kwargs, realm_clause=realm_clause)
# Currently unused and untested
@@ -844,7 +700,7 @@ count_stream_by_realm_query = lambda kwargs: SQL(
).format(**kwargs)
def get_count_stats(realm: Realm | None = None) -> dict[str, CountStat]:
def get_count_stats(realm: Optional[Realm] = None) -> Dict[str, CountStat]:
## CountStat declarations ##
count_stats_ = [
@@ -877,22 +733,39 @@ def get_count_stats(realm: Realm | None = None) -> dict[str, CountStat]:
),
CountStat.DAY,
),
# AI credit usage stats for users, in units of $1/10^9, which is safe for
# aggregation because we're using bigints for the values.
LoggingCountStat("ai_credit_usage::day", UserCount, CountStat.DAY),
# Counts the number of active users in the UserProfile.is_active sense.
# Number of users stats
# Stats that count the number of active users in the UserProfile.is_active sense.
# 'active_users_audit:is_bot:day' is the canonical record of which users were
# active on which days (in the UserProfile.is_active sense).
# It's important that this stay a daily stat, so that 'realm_active_humans::day' works as expected.
CountStat(
"active_users_audit:is_bot:day",
sql_data_collector(
RealmCount, check_realmauditlog_by_user_query(realm), (UserProfile, "is_bot")
UserCount, check_realmauditlog_by_user_query(realm), (UserProfile, "is_bot")
),
CountStat.DAY,
),
# Important note: LoggingCountStat objects aren't passed the
# Realm argument, because by nature they have a logging
# structure, not a pull-from-database structure, so there's no
# way to compute them for a single realm after the fact (the
# use case for passing a Realm argument).
# A sanity check on 'active_users_audit:is_bot:day', and an archetype for future LoggingCountStats.
# In RealmCount, 'active_users_audit:is_bot:day' should be the partial
# sum sequence of 'active_users_log:is_bot:day', for any realm that
# started after the latter stat was introduced.
LoggingCountStat("active_users_log:is_bot:day", RealmCount, CountStat.DAY),
# Another sanity check on 'active_users_audit:is_bot:day'. It is only
# an approximation; e.g., if a user is deactivated between the end of
# the day and when this stat is run, they won't be counted. However, it
# is the simplest of the three to inspect by hand.
CountStat(
"upload_quota_used_bytes::day",
sql_data_collector(RealmCount, count_upload_space_used_by_realm_query(realm), None),
"active_users:is_bot:day",
sql_data_collector(
RealmCount, count_user_by_realm_query(realm), (UserProfile, "is_bot")
),
CountStat.DAY,
interval=TIMEDELTA_MAX,
),
# Messages read stats. messages_read::hour is the total
# number of messages read, whereas
@@ -926,16 +799,8 @@ def get_count_stats(realm: Realm | None = None) -> dict[str, CountStat]:
CountStat(
"minutes_active::day", DataCollector(UserCount, do_pull_minutes_active), CountStat.DAY
),
# Tracks the number of push notifications requested by the server.
# Included in LOGGING_COUNT_STAT_PROPERTIES_NOT_SENT_TO_BOUNCER.
LoggingCountStat(
"mobile_pushes_sent::day",
RealmCount,
CountStat.DAY,
),
# Rate limiting stats
# Used to limit the number of invitation emails sent by a realm.
# Included in LOGGING_COUNT_STAT_PROPERTIES_NOT_SENT_TO_BOUNCER.
# Used to limit the number of invitation emails sent by a realm
LoggingCountStat("invites_sent::day", RealmCount, CountStat.DAY),
# Dependent stats
# Must come after their dependencies.
@@ -944,83 +809,12 @@ def get_count_stats(realm: Realm | None = None) -> dict[str, CountStat]:
"realm_active_humans::day",
sql_data_collector(RealmCount, count_realm_active_humans_query(realm), None),
CountStat.DAY,
dependencies=["15day_actives::day"],
dependencies=["active_users_audit:is_bot:day", "15day_actives::day"],
),
]
if settings.ZILENCER_ENABLED:
# See also the remote_installation versions of these in REMOTE_INSTALLATION_COUNT_STATS.
count_stats_.append(
LoggingCountStat(
"mobile_pushes_received::day",
RemoteRealmCount,
CountStat.DAY,
)
)
count_stats_.append(
LoggingCountStat(
"mobile_pushes_forwarded::day",
RemoteRealmCount,
CountStat.DAY,
)
)
return OrderedDict((stat.property, stat) for stat in count_stats_)
# These properties are tracked by the bouncer itself, so syncing them
# from a remote server must not be allowed; otherwise the remote server
# would be able to interfere with our data.
BOUNCER_ONLY_REMOTE_COUNT_STAT_PROPERTIES = [
"mobile_pushes_received::day",
"mobile_pushes_forwarded::day",
]
# LoggingCountStats with a daily duration that are stored directly on
# the RealmCount table (instead of via aggregation in
# process_count_stat) can be in a state, after the hourly cron job to
# update analytics counts, where the logged value will be live-updated
# later (as the end time for the stat is still in the future). As these
# logging counts are designed to be used on the self-hosted
# installation for either debugging or rate limiting, sending these
# incomplete counts to the bouncer has low value.
LOGGING_COUNT_STAT_PROPERTIES_NOT_SENT_TO_BOUNCER = {
"invites_sent::day",
"mobile_pushes_sent::day",
"active_users_log:is_bot:day",
"active_users:is_bot:day",
}
# To avoid refactoring for now, COUNT_STATS can be used as before.
COUNT_STATS = get_count_stats()
REMOTE_INSTALLATION_COUNT_STATS = OrderedDict()
if settings.ZILENCER_ENABLED:
# REMOTE_INSTALLATION_COUNT_STATS contains duplicates of the
# RemoteRealmCount stats declared above; it is necessary because
# pre-8.0 servers do not send the fields required to identify a
# RemoteRealm.
# Tracks the number of push notifications requested to be sent
# by a remote server.
REMOTE_INSTALLATION_COUNT_STATS["mobile_pushes_received::day"] = LoggingCountStat(
"mobile_pushes_received::day",
RemoteInstallationCount,
CountStat.DAY,
)
# Tracks the number of push notifications successfully sent to
# mobile devices, as requested by the remote server. Therefore
# this should be less than or equal to mobile_pushes_received,
# with potential tiny offsets resulting from a request being
# *received* by the bouncer right before midnight, but *sent* to
# the mobile device right after midnight. This would cause the
# increments to happen to CountStat records for different days.
REMOTE_INSTALLATION_COUNT_STATS["mobile_pushes_forwarded::day"] = LoggingCountStat(
"mobile_pushes_forwarded::day",
RemoteInstallationCount,
CountStat.DAY,
)
ALL_COUNT_STATS = OrderedDict(
list(COUNT_STATS.items()) + list(REMOTE_INSTALLATION_COUNT_STATS.items())
)

View File

@@ -1,5 +1,6 @@
from math import sqrt
from random import Random
from random import gauss, random, seed
from typing import List
from analytics.lib.counts import CountStat
@@ -15,7 +16,7 @@ def generate_time_series_data(
frequency: str = CountStat.DAY,
partial_sum: bool = False,
random_seed: int = 26,
) -> list[int]:
) -> List[int]:
"""
Generate semi-realistic looking time series data for testing analytics graphs.
@@ -35,8 +36,6 @@ def generate_time_series_data(
partial_sum -- If True, return partial sum of the series.
random_seed -- Seed for random number generator.
"""
rng = Random(random_seed)
if frequency == CountStat.HOUR:
length = days * 24
seasonality = [non_business_hours_base] * 24 * 7
@@ -45,13 +44,13 @@ def generate_time_series_data(
seasonality[24 * day + hour] = business_hours_base
holidays = []
for i in range(days):
holidays.extend([rng.random() < holiday_rate] * 24)
holidays.extend([random() < holiday_rate] * 24)
elif frequency == CountStat.DAY:
length = days
seasonality = [8 * business_hours_base + 16 * non_business_hours_base] * 5 + [
24 * non_business_hours_base
] * 2
holidays = [rng.random() < holiday_rate for i in range(days)]
holidays = [random() < holiday_rate for i in range(days)]
else:
raise AssertionError(f"Unknown frequency: {frequency}")
if length < 2:
@@ -59,17 +58,20 @@ def generate_time_series_data(
f"Must be generating at least 2 data points. Currently generating {length}"
)
growth_base = growth ** (1.0 / (length - 1))
values_no_noise = [seasonality[i % len(seasonality)] * (growth_base**i) for i in range(length)]
values_no_noise = [
seasonality[i % len(seasonality)] * (growth_base**i) for i in range(length)
]
noise_scalars = [rng.gauss(0, 1)]
seed(random_seed)
noise_scalars = [gauss(0, 1)]
for i in range(1, length):
noise_scalars.append(
noise_scalars[-1] * autocorrelation + rng.gauss(0, 1) * (1 - autocorrelation)
noise_scalars[-1] * autocorrelation + gauss(0, 1) * (1 - autocorrelation)
)
values = [
0 if holiday else int(v + sqrt(v) * noise_scalar * spikiness)
for v, noise_scalar, holiday in zip(values_no_noise, noise_scalars, holidays, strict=False)
for v, noise_scalar, holiday in zip(values_no_noise, noise_scalars, holidays)
]
if partial_sum:
for i in range(1, length):
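# Minimal sketch (hypothetical snippet) of why the newer code prefers a
# local Random instance over the module-level seed()/random()/gauss():
# seeding the global generator mutates state shared by everything else
# in the process, while a Random(seed) instance is isolated yet equally
# reproducible.
from random import Random, random, seed

seed(26)
a = random()      # reproducible, but a globally visible side effect
rng = Random(26)
b = rng.random()  # reproducible and self-contained
assert a == b     # both draw from the same Mersenne Twister stream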

View File

@@ -1,4 +1,5 @@
from datetime import datetime, timedelta
from typing import List, Optional
from analytics.lib.counts import CountStat
from zerver.lib.timestamp import floor_to_day, floor_to_hour, verify_UTC
@@ -9,8 +10,8 @@ from zerver.lib.timestamp import floor_to_day, floor_to_hour, verify_UTC
# So informally, time_range(Sep 20, Sep 22, day, None) returns [Sep 20, Sep 21, Sep 22],
# and time_range(Sep 20, Sep 22, day, 5) returns [Sep 18, Sep 19, Sep 20, Sep 21, Sep 22]
def time_range(
start: datetime, end: datetime, frequency: str, min_length: int | None
) -> list[datetime]:
start: datetime, end: datetime, frequency: str, min_length: Optional[int]
) -> List[datetime]:
verify_UTC(start)
verify_UTC(end)
if frequency == CountStat.HOUR:
@@ -29,5 +30,4 @@ def time_range(
while current >= start:
times.append(current)
current -= step
times.reverse()
return times
return list(reversed(times))
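# Usage sketch matching the informal example in the comment above
# (hypothetical dates; inputs must be UTC-aware, boundary-aligned
# datetimes per verify_UTC and the floor_to_* helpers):
from datetime import datetime, timezone

start = datetime(2023, 9, 20, tzinfo=timezone.utc)
end = datetime(2023, 9, 22, tzinfo=timezone.utc)
times = time_range(start, end, CountStat.DAY, None)
# times == [Sep 20, Sep 21, Sep 22], each a UTC midnight datetime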

View File

@@ -1,15 +1,14 @@
from dataclasses import dataclass
import os
import time
from datetime import timedelta
from typing import Any, Literal
from typing import Any, Dict
from django.core.management.base import BaseCommand
from django.utils.timezone import now as timezone_now
from typing_extensions import override
from analytics.lib.counts import ALL_COUNT_STATS, CountStat
from analytics.lib.counts import COUNT_STATS, CountStat
from analytics.models import installation_epoch
from scripts.lib.zulip_tools import atomic_nagios_write
from zerver.lib.management import ZulipBaseCommand
from zerver.lib.timestamp import TimeZoneNotUTCError, floor_to_day, floor_to_hour, verify_UTC
from zerver.lib.timestamp import TimeZoneNotUTCException, floor_to_day, floor_to_hour, verify_UTC
from zerver.models import Realm
states = {
@@ -20,38 +19,37 @@ states = {
}
@dataclass
class NagiosResult:
status: Literal["ok", "warning", "critical", "unknown"]
message: str
class Command(ZulipBaseCommand):
class Command(BaseCommand):
help = """Checks FillState table.
Run as a cron job that runs every hour."""
@override
def handle(self, *args: Any, **options: Any) -> None:
fill_state = self.get_fill_state()
atomic_nagios_write("check-analytics-state", fill_state.status, fill_state.message)
status = fill_state["status"]
message = fill_state["message"]
def get_fill_state(self) -> NagiosResult:
state_file_path = "/var/lib/nagios_state/check-analytics-state"
state_file_tmp = state_file_path + "-tmp"
with open(state_file_tmp, "w") as f:
f.write(f"{int(time.time())}|{status}|{states[status]}|{message}\n")
os.rename(state_file_tmp, state_file_path)
def get_fill_state(self) -> Dict[str, Any]:
if not Realm.objects.exists():
return NagiosResult(status="ok", message="No realms exist, so not checking FillState.")
return {"status": 0, "message": "No realms exist, so not checking FillState."}
warning_unfilled_properties = []
critical_unfilled_properties = []
for property, stat in ALL_COUNT_STATS.items():
for property, stat in COUNT_STATS.items():
last_fill = stat.last_successful_fill()
if last_fill is None:
last_fill = installation_epoch()
try:
verify_UTC(last_fill)
except TimeZoneNotUTCError:
return NagiosResult(
status="critical", message=f"FillState not in UTC for {property}"
)
except TimeZoneNotUTCException:
return {"status": 2, "message": f"FillState not in UTC for {property}"}
if stat.frequency == CountStat.DAY:
floor_function = floor_to_day
@@ -63,10 +61,10 @@ class Command(ZulipBaseCommand):
critical_threshold = timedelta(minutes=150)
if floor_function(last_fill) != last_fill:
return NagiosResult(
status="critical",
message=f"FillState not on {stat.frequency} boundary for {property}",
)
return {
"status": 2,
"message": f"FillState not on {stat.frequency} boundary for {property}",
}
time_to_last_fill = timezone_now() - last_fill
if time_to_last_fill > critical_threshold:
@@ -75,18 +73,18 @@ class Command(ZulipBaseCommand):
warning_unfilled_properties.append(property)
if len(critical_unfilled_properties) == 0 and len(warning_unfilled_properties) == 0:
return NagiosResult(status="ok", message="FillState looks fine.")
return {"status": 0, "message": "FillState looks fine."}
if len(critical_unfilled_properties) == 0:
return NagiosResult(
status="warning",
message="Missed filling {} once.".format(
return {
"status": 1,
"message": "Missed filling {} once.".format(
", ".join(warning_unfilled_properties),
),
)
return NagiosResult(
status="critical",
message="Missed filling {} once. Missed filling {} at least twice.".format(
}
return {
"status": 2,
"message": "Missed filling {} once. Missed filling {} at least twice.".format(
", ".join(warning_unfilled_properties),
", ".join(critical_unfilled_properties),
),
)
}
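# The older handler above inlines the write-to-temp-then-rename pattern
# that atomic_nagios_write encapsulates in the newer code; a minimal
# standalone sketch of that pattern (hypothetical helper name):
import os
import time

def write_nagios_state(path: str, status: int, status_name: str, message: str) -> None:
    # Write the full state to a temporary file, then rename it into
    # place; rename is atomic on POSIX, so Nagios never observes a
    # partially written state file.
    tmp = path + "-tmp"
    with open(tmp, "w") as f:
        f.write(f"{int(time.time())}|{status}|{status_name}|{message}\n")
    os.rename(tmp, path)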

View File

@@ -1,21 +1,17 @@
from argparse import ArgumentParser
from typing import Any
from django.core.management.base import CommandError
from typing_extensions import override
from django.core.management.base import BaseCommand, CommandError
from analytics.lib.counts import do_drop_all_analytics_tables
from zerver.lib.management import ZulipBaseCommand
class Command(ZulipBaseCommand):
class Command(BaseCommand):
help = """Clear analytics tables."""
@override
def add_arguments(self, parser: ArgumentParser) -> None:
parser.add_argument("--force", action="store_true", help="Clear analytics tables.")
@override
def handle(self, *args: Any, **options: Any) -> None:
if options["force"]:
do_drop_all_analytics_tables()

View File

@@ -1,25 +1,21 @@
from argparse import ArgumentParser
from typing import Any
from django.core.management.base import CommandError
from typing_extensions import override
from django.core.management.base import BaseCommand, CommandError
from analytics.lib.counts import ALL_COUNT_STATS, do_drop_single_stat
from zerver.lib.management import ZulipBaseCommand
from analytics.lib.counts import COUNT_STATS, do_drop_single_stat
class Command(ZulipBaseCommand):
class Command(BaseCommand):
help = """Clear analytics tables."""
@override
def add_arguments(self, parser: ArgumentParser) -> None:
parser.add_argument("--force", action="store_true", help="Actually do it.")
parser.add_argument("--property", help="The property of the stat to be cleared.")
@override
def handle(self, *args: Any, **options: Any) -> None:
property = options["property"]
if property not in ALL_COUNT_STATS:
if property not in COUNT_STATS:
raise CommandError(f"Invalid property: {property}")
if not options["force"]:
raise CommandError("No action taken. Use --force.")

View File

@@ -1,10 +1,9 @@
from collections.abc import Mapping
from datetime import timedelta
from typing import Any, TypeAlias
from typing import Any, Dict, List, Mapping, Type, Union
from unittest import mock
from django.core.files.uploadedfile import UploadedFile
from django.core.management.base import BaseCommand
from django.utils.timezone import now as timezone_now
from typing_extensions import override
from analytics.lib.counts import COUNT_STATS, CountStat, do_drop_all_analytics_tables
from analytics.lib.fixtures import generate_time_series_data
@@ -18,20 +17,14 @@ from analytics.models import (
UserCount,
)
from zerver.actions.create_realm import do_create_realm
from zerver.actions.users import do_change_user_role
from zerver.lib.create_user import create_user
from zerver.lib.management import ZulipBaseCommand
from zerver.lib.storage import static_path
from zerver.lib.stream_color import STREAM_ASSIGNMENT_COLORS
from zerver.lib.stream_subscription import create_stream_subscription
from zerver.lib.streams import get_default_values_for_stream_permission_group_settings
from zerver.lib.timestamp import floor_to_day
from zerver.lib.upload import upload_message_attachment_from_request
from zerver.models import Client, Realm, RealmAuditLog, Recipient, Stream, UserProfile
from zerver.models.groups import NamedUserGroup, SystemGroups, UserGroupMembership
from zerver.models.realm_audit_logs import AuditLogEventType
from zerver.models import Client, Realm, Recipient, Stream, Subscription, UserProfile
class Command(ZulipBaseCommand):
class Command(BaseCommand):
help = """Populates analytics tables with randomly generated data."""
DAYS_OF_DATA = 100
@@ -47,7 +40,7 @@ class Command(ZulipBaseCommand):
spikiness: float,
holiday_rate: float = 0,
partial_sum: bool = False,
) -> list[int]:
) -> List[int]:
self.random_seed += 1
return generate_time_series_data(
days=self.DAYS_OF_DATA,
@@ -62,7 +55,6 @@ class Command(ZulipBaseCommand):
random_seed=self.random_seed,
)
@override
def handle(self, *args: Any, **options: Any) -> None:
# TODO: This should arguably only delete the objects
# associated with the "analytics" realm.
@@ -87,77 +79,44 @@ class Command(ZulipBaseCommand):
string_id="analytics", name="Analytics", date_created=installation_time
)
owners_system_group = NamedUserGroup.objects.get(
name=SystemGroups.OWNERS, realm=realm, is_system_group=True
)
guests_system_group = NamedUserGroup.objects.get(
name=SystemGroups.EVERYONE, realm=realm, is_system_group=True
)
shylock = create_user(
"shylock@analytics.ds",
"Shylock",
realm,
full_name="Shylock",
role=UserProfile.ROLE_REALM_OWNER,
force_date_joined=installation_time,
)
UserGroupMembership.objects.create(user_profile=shylock, user_group=owners_system_group)
# Create guest user for set_guest_users_statistic.
bassanio = create_user(
"bassanio@analytics.ds",
"Bassanio",
realm,
full_name="Bassanio",
role=UserProfile.ROLE_GUEST,
force_date_joined=installation_time,
)
UserGroupMembership.objects.create(user_profile=bassanio, user_group=guests_system_group)
stream = Stream.objects.create(
name="all",
realm=realm,
date_created=installation_time,
**get_default_values_for_stream_permission_group_settings(realm),
)
with mock.patch("zerver.lib.create_user.timezone_now", return_value=installation_time):
shylock = create_user(
"shylock@analytics.ds",
"Shylock",
realm,
full_name="Shylock",
role=UserProfile.ROLE_REALM_OWNER,
)
do_change_user_role(shylock, UserProfile.ROLE_REALM_OWNER, acting_user=None)
stream = Stream.objects.create(name="all", realm=realm, date_created=installation_time)
recipient = Recipient.objects.create(type_id=stream.id, type=Recipient.STREAM)
stream.recipient = recipient
stream.save(update_fields=["recipient"])
# Subscribe shylock to the stream to avoid invariant failures.
create_stream_subscription(
user_profile=shylock,
recipient=recipient,
stream=stream,
color=STREAM_ASSIGNMENT_COLORS[0],
)
RealmAuditLog.objects.create(
realm=realm,
modified_user=shylock,
modified_stream=stream,
event_last_message_id=0,
event_type=AuditLogEventType.SUBSCRIPTION_CREATED,
event_time=installation_time,
)
# TODO: This should use subscribe_users_to_streams from populate_db.
subs = [
Subscription(
recipient=recipient,
user_profile=shylock,
is_user_active=shylock.is_active,
color=STREAM_ASSIGNMENT_COLORS[0],
),
]
Subscription.objects.bulk_create(subs)
# Create an attachment in the database for set_storage_space_used_statistic.
IMAGE_FILE_PATH = static_path("images/test-images/checkbox.png")
with open(IMAGE_FILE_PATH, "rb") as fp:
upload_message_attachment_from_request(UploadedFile(fp), shylock)
FixtureData: TypeAlias = Mapping[str | int | None, list[int]]
FixtureData = Mapping[Union[str, int, None], List[int]]
def insert_fixture_data(
stat: CountStat,
fixture_data: FixtureData,
table: type[BaseCount],
table: Type[BaseCount],
) -> None:
end_times = time_range(
last_end_time, last_end_time, stat.frequency, len(next(iter(fixture_data.values())))
last_end_time, last_end_time, stat.frequency, len(list(fixture_data.values())[0])
)
if table == InstallationCount:
id_args: dict[str, Any] = {}
id_args: Dict[str, Any] = {}
if table == RealmCount:
id_args = {"realm": realm}
if table == UserCount:
@@ -166,7 +125,7 @@ class Command(ZulipBaseCommand):
id_args = {"stream": stream, "realm": realm}
for subgroup, values in fixture_data.items():
table._default_manager.bulk_create(
table.objects.bulk_create(
table(
property=stat.property,
subgroup=subgroup,
@@ -174,7 +133,7 @@ class Command(ZulipBaseCommand):
value=value,
**id_args,
)
for end_time, value in zip(end_times, values, strict=False)
for end_time, value in zip(end_times, values)
if value != 0
)
@@ -281,7 +240,6 @@ class Command(ZulipBaseCommand):
android, created = Client.objects.get_or_create(name="ZulipAndroid")
iOS, created = Client.objects.get_or_create(name="ZulipiOS")
react_native, created = Client.objects.get_or_create(name="ZulipMobile")
flutter, created = Client.objects.get_or_create(name="ZulipFlutter")
API, created = Client.objects.get_or_create(name="API: Python")
zephyr_mirror, created = Client.objects.get_or_create(name="zephyr_mirror")
unused, created = Client.objects.get_or_create(name="unused")
@@ -299,7 +257,6 @@ class Command(ZulipBaseCommand):
android.id: self.generate_fixture_data(stat, 5, 5, 2, 0.6, 3),
iOS.id: self.generate_fixture_data(stat, 5, 5, 2, 0.6, 3),
react_native.id: self.generate_fixture_data(stat, 5, 5, 10, 0.6, 3),
flutter.id: self.generate_fixture_data(stat, 5, 5, 10, 0.6, 3),
API.id: self.generate_fixture_data(stat, 5, 5, 5, 0.6, 3),
zephyr_mirror.id: self.generate_fixture_data(stat, 1, 1, 3, 0.6, 3),
unused.id: self.generate_fixture_data(stat, 0, 0, 0, 0, 0),
@@ -311,7 +268,6 @@ class Command(ZulipBaseCommand):
old_desktop.id: self.generate_fixture_data(stat, 50, 30, 8, 0.6, 3),
android.id: self.generate_fixture_data(stat, 50, 50, 2, 0.6, 3),
iOS.id: self.generate_fixture_data(stat, 50, 50, 2, 0.6, 3),
flutter.id: self.generate_fixture_data(stat, 5, 5, 10, 0.6, 3),
react_native.id: self.generate_fixture_data(stat, 5, 5, 10, 0.6, 3),
API.id: self.generate_fixture_data(stat, 50, 50, 5, 0.6, 3),
zephyr_mirror.id: self.generate_fixture_data(stat, 10, 10, 3, 0.6, 3),
@@ -329,7 +285,7 @@ class Command(ZulipBaseCommand):
"true": self.generate_fixture_data(stat, 20, 2, 3, 0.2, 3),
}
insert_fixture_data(stat, realm_data, RealmCount)
stream_data: Mapping[int | str | None, list[int]] = {
stream_data: Mapping[Union[int, str, None], List[int]] = {
"false": self.generate_fixture_data(stat, 10, 7, 5, 0.6, 4),
"true": self.generate_fixture_data(stat, 5, 3, 2, 0.4, 2),
}

View File

@@ -1,32 +1,32 @@
import hashlib
import os
import time
from argparse import ArgumentParser
from datetime import timezone
from typing import Any
from typing import Any, Dict
from django.conf import settings
from django.core.management.base import BaseCommand
from django.utils.dateparse import parse_datetime
from django.utils.timezone import now as timezone_now
from typing_extensions import override
from analytics.lib.counts import ALL_COUNT_STATS, logger, process_count_stat
from zerver.lib.management import ZulipBaseCommand, abort_cron_during_deploy, abort_unless_locked
from zerver.lib.remote_server import send_server_data_to_push_bouncer, should_send_analytics_data
from analytics.lib.counts import COUNT_STATS, logger, process_count_stat
from scripts.lib.zulip_tools import ENDC, WARNING
from zerver.lib.remote_server import send_analytics_to_remote_server
from zerver.lib.timestamp import floor_to_hour
from zerver.models import Realm
class Command(ZulipBaseCommand):
class Command(BaseCommand):
help = """Fills Analytics tables.
Run as a cron job that runs every hour."""
@override
def add_arguments(self, parser: ArgumentParser) -> None:
parser.add_argument(
"--time",
"-t",
help="Update stat tables from current state to --time. Defaults to the current time.",
help="Update stat tables from current state to "
"--time. Defaults to the current time.",
default=timezone_now().isoformat(),
)
parser.add_argument("--utc", action="store_true", help="Interpret --time in UTC.")
@@ -37,13 +37,22 @@ class Command(ZulipBaseCommand):
"--verbose", action="store_true", help="Print timing information to stdout."
)
@override
@abort_cron_during_deploy
@abort_unless_locked
def handle(self, *args: Any, **options: Any) -> None:
self.run_update_analytics_counts(options)
try:
os.mkdir(settings.ANALYTICS_LOCK_DIR)
except OSError:
print(
f"{WARNING}Analytics lock {settings.ANALYTICS_LOCK_DIR} is unavailable;"
f" exiting.{ENDC}"
)
return
def run_update_analytics_counts(self, options: dict[str, Any]) -> None:
try:
self.run_update_analytics_counts(options)
finally:
os.rmdir(settings.ANALYTICS_LOCK_DIR)
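# The older handler above uses os.mkdir as a mutex: directory creation
# is atomic, so only one invocation can hold the lock. A minimal
# standalone sketch of the same pattern (hypothetical helper name):
import os
from contextlib import contextmanager
from typing import Iterator

@contextmanager
def dir_lock(lock_dir: str) -> Iterator[None]:
    try:
        os.mkdir(lock_dir)  # atomic: fails if another process holds the lock
    except OSError:
        raise RuntimeError(f"Lock {lock_dir} is unavailable; exiting.")
    try:
        yield
    finally:
        os.rmdir(lock_dir)  # release the lock even if the body raised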
def run_update_analytics_counts(self, options: Dict[str, Any]) -> None:
# installation_epoch relies on there being at least one realm; we
# shouldn't run the analytics code if that condition isn't satisfied
if not Realm.objects.exists():
@@ -62,9 +71,9 @@ class Command(ZulipBaseCommand):
fill_to_time = floor_to_hour(fill_to_time.astimezone(timezone.utc))
if options["stat"] is not None:
stats = [ALL_COUNT_STATS[options["stat"]]]
stats = [COUNT_STATS[options["stat"]]]
else:
stats = list(ALL_COUNT_STATS.values())
stats = list(COUNT_STATS.values())
logger.info("Starting updating analytics counts through %s", fill_to_time)
if options["verbose"]:
@@ -83,17 +92,5 @@ class Command(ZulipBaseCommand):
)
logger.info("Finished updating analytics counts through %s", fill_to_time)
if should_send_analytics_data():
# The specific value of the setting decides the exact details to
# send; here, we proceed based solely on it not being falsy.
# Skew 0-10 minutes based on a hash of settings.ZULIP_ORG_ID, so
# that each server will report in at a somewhat consistent time.
assert settings.ZULIP_ORG_ID
delay = int.from_bytes(
hashlib.sha256(settings.ZULIP_ORG_ID.encode()).digest(), byteorder="big"
) % (60 * 10)
logger.info("Sleeping %d seconds before reporting...", delay)
time.sleep(delay)
send_server_data_to_push_bouncer(consider_usage_statistics=True, raise_on_error=True)
if settings.PUSH_NOTIFICATION_BOUNCER_URL and settings.SUBMIT_USAGE_STATISTICS:
send_analytics_to_remote_server()
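# Worked example of the 0-10 minute reporting skew computed above
# (hypothetical org id; real servers use settings.ZULIP_ORG_ID):
import hashlib

org_id = "example-org-id"
delay = int.from_bytes(hashlib.sha256(org_id.encode()).digest(), byteorder="big") % (60 * 10)
assert 0 <= delay < 600  # deterministic per org id, spread over 10 minutes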

View File

@@ -4,6 +4,7 @@ from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("zerver", "0030_realm_org_type"),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),

View File

@@ -1,224 +0,0 @@
# Generated by Django 5.0.7 on 2024-08-13 20:16
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
replaces = [
("analytics", "0001_initial"),
("analytics", "0002_remove_huddlecount"),
("analytics", "0003_fillstate"),
("analytics", "0004_add_subgroup"),
("analytics", "0005_alter_field_size"),
("analytics", "0006_add_subgroup_to_unique_constraints"),
("analytics", "0007_remove_interval"),
("analytics", "0008_add_count_indexes"),
("analytics", "0009_remove_messages_to_stream_stat"),
("analytics", "0010_clear_messages_sent_values"),
("analytics", "0011_clear_analytics_tables"),
("analytics", "0012_add_on_delete"),
("analytics", "0013_remove_anomaly"),
("analytics", "0014_remove_fillstate_last_modified"),
("analytics", "0015_clear_duplicate_counts"),
("analytics", "0016_unique_constraint_when_subgroup_null"),
("analytics", "0017_regenerate_partial_indexes"),
("analytics", "0018_remove_usercount_active_users_audit"),
("analytics", "0019_remove_unused_counts"),
("analytics", "0020_alter_installationcount_id_alter_realmcount_id_and_more"),
("analytics", "0021_alter_fillstate_id"),
]
initial = True
dependencies = [
# Needed for foreign keys to core models like Realm.
("zerver", "0001_initial"),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name="InstallationCount",
fields=[
(
"id",
models.BigAutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
("property", models.CharField(max_length=32)),
("end_time", models.DateTimeField()),
("value", models.BigIntegerField()),
("subgroup", models.CharField(max_length=16, null=True)),
],
options={
"unique_together": set(),
"constraints": [
models.UniqueConstraint(
condition=models.Q(("subgroup__isnull", False)),
fields=("property", "subgroup", "end_time"),
name="unique_installation_count",
),
models.UniqueConstraint(
condition=models.Q(("subgroup__isnull", True)),
fields=("property", "end_time"),
name="unique_installation_count_null_subgroup",
),
],
},
),
migrations.CreateModel(
name="RealmCount",
fields=[
(
"id",
models.BigAutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
(
"realm",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="zerver.realm"
),
),
("property", models.CharField(max_length=32)),
("end_time", models.DateTimeField()),
("value", models.BigIntegerField()),
("subgroup", models.CharField(max_length=16, null=True)),
],
options={
"indexes": [
models.Index(
fields=["property", "end_time"],
name="analytics_realmcount_property_end_time_3b60396b_idx",
)
],
"unique_together": set(),
"constraints": [
models.UniqueConstraint(
condition=models.Q(("subgroup__isnull", False)),
fields=("realm", "property", "subgroup", "end_time"),
name="unique_realm_count",
),
models.UniqueConstraint(
condition=models.Q(("subgroup__isnull", True)),
fields=("realm", "property", "end_time"),
name="unique_realm_count_null_subgroup",
),
],
},
),
migrations.CreateModel(
name="StreamCount",
fields=[
(
"id",
models.BigAutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
(
"realm",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="zerver.realm"
),
),
(
"stream",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="zerver.stream"
),
),
("property", models.CharField(max_length=32)),
("end_time", models.DateTimeField()),
("value", models.BigIntegerField()),
("subgroup", models.CharField(max_length=16, null=True)),
],
options={
"indexes": [
models.Index(
fields=["property", "realm", "end_time"],
name="analytics_streamcount_property_realm_id_end_time_155ae930_idx",
)
],
"unique_together": set(),
"constraints": [
models.UniqueConstraint(
condition=models.Q(("subgroup__isnull", False)),
fields=("stream", "property", "subgroup", "end_time"),
name="unique_stream_count",
),
models.UniqueConstraint(
condition=models.Q(("subgroup__isnull", True)),
fields=("stream", "property", "end_time"),
name="unique_stream_count_null_subgroup",
),
],
},
),
migrations.CreateModel(
name="UserCount",
fields=[
(
"id",
models.BigAutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
(
"realm",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="zerver.realm"
),
),
(
"user",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL
),
),
("property", models.CharField(max_length=32)),
("end_time", models.DateTimeField()),
("value", models.BigIntegerField()),
("subgroup", models.CharField(max_length=16, null=True)),
],
options={
"indexes": [
models.Index(
fields=["property", "realm", "end_time"],
name="analytics_usercount_property_realm_id_end_time_591dbec1_idx",
)
],
"unique_together": set(),
"constraints": [
models.UniqueConstraint(
condition=models.Q(("subgroup__isnull", False)),
fields=("user", "property", "subgroup", "end_time"),
name="unique_user_count",
),
models.UniqueConstraint(
condition=models.Q(("subgroup__isnull", True)),
fields=("user", "property", "end_time"),
name="unique_user_count_null_subgroup",
),
],
},
),
migrations.CreateModel(
name="FillState",
fields=[
(
"id",
models.BigAutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
("property", models.CharField(max_length=40, unique=True)),
("end_time", models.DateTimeField()),
("state", models.PositiveSmallIntegerField()),
],
),
]

View File

@@ -2,6 +2,7 @@ from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("analytics", "0001_initial"),
]

View File

@@ -2,6 +2,7 @@ from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("analytics", "0002_remove_huddlecount"),
]

View File

@@ -2,6 +2,7 @@ from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("analytics", "0003_fillstate"),
]

View File

@@ -2,6 +2,7 @@ from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("analytics", "0004_add_subgroup"),
]

View File

@@ -2,6 +2,7 @@ from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("analytics", "0005_alter_field_size"),
]

View File

@@ -3,6 +3,7 @@ from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("analytics", "0006_add_subgroup_to_unique_constraints"),
]

View File

@@ -1,33 +1,25 @@
# Generated by Django 1.10.5 on 2017-02-01 22:28
from django.db import migrations, models
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("zerver", "0050_userprofile_avatar_version"),
("analytics", "0007_remove_interval"),
]
operations = [
migrations.AddIndex(
model_name="realmcount",
index=models.Index(
fields=["property", "end_time"],
name="analytics_realmcount_property_end_time_3b60396b_idx",
),
migrations.AlterIndexTogether(
name="realmcount",
index_together={("property", "end_time")},
),
migrations.AddIndex(
model_name="streamcount",
index=models.Index(
fields=["property", "realm", "end_time"],
name="analytics_streamcount_property_realm_id_end_time_155ae930_idx",
),
migrations.AlterIndexTogether(
name="streamcount",
index_together={("property", "realm", "end_time")},
),
migrations.AddIndex(
model_name="usercount",
index=models.Index(
fields=["property", "realm", "end_time"],
name="analytics_usercount_property_realm_id_end_time_591dbec1_idx",
),
migrations.AlterIndexTogether(
name="usercount",
index_together={("property", "realm", "end_time")},
),
]

View File

@@ -1,10 +1,10 @@
from django.db import migrations
from django.db.backends.base.schema import BaseDatabaseSchemaEditor
from django.db.backends.postgresql.schema import DatabaseSchemaEditor
from django.db.migrations.state import StateApps
def delete_messages_sent_to_stream_stat(
apps: StateApps, schema_editor: BaseDatabaseSchemaEditor
apps: StateApps, schema_editor: DatabaseSchemaEditor
) -> None:
UserCount = apps.get_model("analytics", "UserCount")
StreamCount = apps.get_model("analytics", "StreamCount")
@@ -21,10 +21,11 @@ def delete_messages_sent_to_stream_stat(
class Migration(migrations.Migration):
dependencies = [
("analytics", "0008_add_count_indexes"),
]
operations = [
migrations.RunPython(delete_messages_sent_to_stream_stat, elidable=True),
migrations.RunPython(delete_messages_sent_to_stream_stat),
]

View File

@@ -1,10 +1,10 @@
from django.db import migrations
from django.db.backends.base.schema import BaseDatabaseSchemaEditor
from django.db.backends.postgresql.schema import DatabaseSchemaEditor
from django.db.migrations.state import StateApps
def clear_message_sent_by_message_type_values(
apps: StateApps, schema_editor: BaseDatabaseSchemaEditor
apps: StateApps, schema_editor: DatabaseSchemaEditor
) -> None:
UserCount = apps.get_model("analytics", "UserCount")
StreamCount = apps.get_model("analytics", "StreamCount")
@@ -21,8 +21,9 @@ def clear_message_sent_by_message_type_values(
class Migration(migrations.Migration):
dependencies = [("analytics", "0009_remove_messages_to_stream_stat")]
operations = [
migrations.RunPython(clear_message_sent_by_message_type_values, elidable=True),
migrations.RunPython(clear_message_sent_by_message_type_values),
]

View File

@@ -1,9 +1,9 @@
from django.db import migrations
from django.db.backends.base.schema import BaseDatabaseSchemaEditor
from django.db.backends.postgresql.schema import DatabaseSchemaEditor
from django.db.migrations.state import StateApps
def clear_analytics_tables(apps: StateApps, schema_editor: BaseDatabaseSchemaEditor) -> None:
def clear_analytics_tables(apps: StateApps, schema_editor: DatabaseSchemaEditor) -> None:
UserCount = apps.get_model("analytics", "UserCount")
StreamCount = apps.get_model("analytics", "StreamCount")
RealmCount = apps.get_model("analytics", "RealmCount")
@@ -18,10 +18,11 @@ def clear_analytics_tables(apps: StateApps, schema_editor: BaseDatabaseSchemaEdi
class Migration(migrations.Migration):
dependencies = [
("analytics", "0010_clear_messages_sent_values"),
]
operations = [
migrations.RunPython(clear_analytics_tables, elidable=True),
migrations.RunPython(clear_analytics_tables),
]

View File

@@ -5,6 +5,7 @@ from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("analytics", "0011_clear_analytics_tables"),
]

View File

@@ -4,6 +4,7 @@ from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("analytics", "0012_add_on_delete"),
]

View File

@@ -4,6 +4,7 @@ from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("analytics", "0013_remove_anomaly"),
]

View File

@@ -1,10 +1,10 @@
from django.db import migrations
from django.db.backends.base.schema import BaseDatabaseSchemaEditor
from django.db.backends.postgresql.schema import DatabaseSchemaEditor
from django.db.migrations.state import StateApps
from django.db.models import Count, Sum
def clear_duplicate_counts(apps: StateApps, schema_editor: BaseDatabaseSchemaEditor) -> None:
def clear_duplicate_counts(apps: StateApps, schema_editor: DatabaseSchemaEditor) -> None:
"""This is a preparatory migration for our Analytics tables.
The backstory is that Django's unique_together indexes do not properly
@@ -55,12 +55,11 @@ def clear_duplicate_counts(apps: StateApps, schema_editor: BaseDatabaseSchemaEdi
class Migration(migrations.Migration):
dependencies = [
("analytics", "0014_remove_fillstate_last_modified"),
]
operations = [
migrations.RunPython(
clear_duplicate_counts, reverse_code=migrations.RunPython.noop, elidable=True
),
migrations.RunPython(clear_duplicate_counts, reverse_code=migrations.RunPython.noop),
]

View File

@@ -4,6 +4,7 @@ from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("analytics", "0015_clear_duplicate_counts"),
]

View File

@@ -1,114 +0,0 @@
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("analytics", "0016_unique_constraint_when_subgroup_null"),
]
# If the server was installed between 7.0 and 7.4 (or main between
# 2c20028aa451 and 7807bff52635), it contains indexes which (when
# running 7.5 or 7807bff52635 or higher) are never used, because
# they contain an improper cast
# (https://code.djangoproject.com/ticket/34840).
#
# We regenerate the indexes here, by dropping and re-creating
# them, so that we know that they are properly formed.
operations = [
migrations.RemoveConstraint(
model_name="installationcount",
name="unique_installation_count",
),
migrations.AddConstraint(
model_name="installationcount",
constraint=models.UniqueConstraint(
condition=models.Q(subgroup__isnull=False),
fields=("property", "subgroup", "end_time"),
name="unique_installation_count",
),
),
migrations.RemoveConstraint(
model_name="installationcount",
name="unique_installation_count_null_subgroup",
),
migrations.AddConstraint(
model_name="installationcount",
constraint=models.UniqueConstraint(
condition=models.Q(subgroup__isnull=True),
fields=("property", "end_time"),
name="unique_installation_count_null_subgroup",
),
),
migrations.RemoveConstraint(
model_name="realmcount",
name="unique_realm_count",
),
migrations.AddConstraint(
model_name="realmcount",
constraint=models.UniqueConstraint(
condition=models.Q(subgroup__isnull=False),
fields=("realm", "property", "subgroup", "end_time"),
name="unique_realm_count",
),
),
migrations.RemoveConstraint(
model_name="realmcount",
name="unique_realm_count_null_subgroup",
),
migrations.AddConstraint(
model_name="realmcount",
constraint=models.UniqueConstraint(
condition=models.Q(subgroup__isnull=True),
fields=("realm", "property", "end_time"),
name="unique_realm_count_null_subgroup",
),
),
migrations.RemoveConstraint(
model_name="streamcount",
name="unique_stream_count",
),
migrations.AddConstraint(
model_name="streamcount",
constraint=models.UniqueConstraint(
condition=models.Q(subgroup__isnull=False),
fields=("stream", "property", "subgroup", "end_time"),
name="unique_stream_count",
),
),
migrations.RemoveConstraint(
model_name="streamcount",
name="unique_stream_count_null_subgroup",
),
migrations.AddConstraint(
model_name="streamcount",
constraint=models.UniqueConstraint(
condition=models.Q(subgroup__isnull=True),
fields=("stream", "property", "end_time"),
name="unique_stream_count_null_subgroup",
),
),
migrations.RemoveConstraint(
model_name="usercount",
name="unique_user_count",
),
migrations.AddConstraint(
model_name="usercount",
constraint=models.UniqueConstraint(
condition=models.Q(subgroup__isnull=False),
fields=("user", "property", "subgroup", "end_time"),
name="unique_user_count",
),
),
migrations.RemoveConstraint(
model_name="usercount",
name="unique_user_count_null_subgroup",
),
migrations.AddConstraint(
model_name="usercount",
constraint=models.UniqueConstraint(
condition=models.Q(subgroup__isnull=True),
fields=("user", "property", "end_time"),
name="unique_user_count_null_subgroup",
),
),
]

View File

@@ -1,16 +0,0 @@
from django.db import migrations
class Migration(migrations.Migration):
elidable = True
dependencies = [
("analytics", "0017_regenerate_partial_indexes"),
]
operations = [
migrations.RunSQL(
"DELETE FROM analytics_usercount WHERE property = 'active_users_audit:is_bot:day'",
elidable=True,
)
]

View File

@@ -1,27 +0,0 @@
from django.db import migrations
REMOVED_COUNTS = (
"active_users_log:is_bot:day",
"active_users:is_bot:day",
)
class Migration(migrations.Migration):
elidable = True
dependencies = [
("analytics", "0018_remove_usercount_active_users_audit"),
]
operations = [
migrations.RunSQL(
[
("DELETE FROM analytics_realmcount WHERE property IN %s", (REMOVED_COUNTS,)),
(
"DELETE FROM analytics_installationcount WHERE property IN %s",
(REMOVED_COUNTS,),
),
],
elidable=True,
)
]

View File

@@ -1,40 +0,0 @@
from django.db import migrations, models
class Migration(migrations.Migration):
atomic = False
dependencies = [
("analytics", "0019_remove_unused_counts"),
]
operations = [
migrations.AlterField(
model_name="installationcount",
name="id",
field=models.BigAutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
migrations.AlterField(
model_name="realmcount",
name="id",
field=models.BigAutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
migrations.AlterField(
model_name="streamcount",
name="id",
field=models.BigAutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
migrations.AlterField(
model_name="usercount",
name="id",
field=models.BigAutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
]


@@ -1,17 +0,0 @@
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("analytics", "0020_alter_installationcount_id_alter_realmcount_id_and_more"),
]
operations = [
migrations.AlterField(
model_name="fillstate",
name="id",
field=models.BigAutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
]
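The two AlterField migrations above widen the integer primary keys to 64-bit BigAutoField; the earlier one is marked `atomic = False`, presumably so the long table rewrites are not wrapped in a single transaction. A common way such migrations get generated (an assumption here, not shown in this diff) is the project-wide Django 3.2+ default:

```python
# settings.py (illustrative): with this set, makemigrations emits
# AlterField operations like the ones above for pre-existing apps.
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
```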


@@ -1,30 +1,29 @@
from datetime import datetime
import datetime
from typing import Optional
from django.db import models
from django.db.models import Q, UniqueConstraint
from typing_extensions import override
from zerver.lib.timestamp import floor_to_day
from zerver.models import Realm, Stream, UserProfile
class FillState(models.Model):
property = models.CharField(max_length=40, unique=True)
end_time = models.DateTimeField()
property: str = models.CharField(max_length=40, unique=True)
end_time: datetime.datetime = models.DateTimeField()
# Valid states are {DONE, STARTED}
DONE = 1
STARTED = 2
state = models.PositiveSmallIntegerField()
state: int = models.PositiveSmallIntegerField()
@override
def __str__(self) -> str:
return f"{self.property} {self.end_time} {self.state}"
return f"<FillState: {self.property} {self.end_time} {self.state}>"
# The earliest/starting end_time in FillState
# We assume there is at least one realm
def installation_epoch() -> datetime:
def installation_epoch() -> datetime.datetime:
earliest_realm_creation = Realm.objects.aggregate(models.Min("date_created"))[
"date_created__min"
]
@@ -35,10 +34,10 @@ class BaseCount(models.Model):
# Note: When inheriting from BaseCount, you may want to rearrange
# the order of the columns in the migration to make sure they
# match how you'd like the table to be arranged.
property = models.CharField(max_length=32)
subgroup = models.CharField(max_length=16, null=True)
end_time = models.DateTimeField()
value = models.BigIntegerField()
property: str = models.CharField(max_length=32)
subgroup: Optional[str] = models.CharField(max_length=16, null=True)
end_time: datetime.datetime = models.DateTimeField()
value: int = models.BigIntegerField()
class Meta:
abstract = True
@@ -60,9 +59,8 @@ class InstallationCount(BaseCount):
),
]
@override
def __str__(self) -> str:
return f"{self.property} {self.subgroup} {self.value}"
return f"<InstallationCount: {self.property} {self.subgroup} {self.value}>"
class RealmCount(BaseCount):
@@ -82,16 +80,10 @@ class RealmCount(BaseCount):
name="unique_realm_count_null_subgroup",
),
]
indexes = [
models.Index(
fields=["property", "end_time"],
name="analytics_realmcount_property_end_time_3b60396b_idx",
)
]
index_together = ["property", "end_time"]
@override
def __str__(self) -> str:
return f"{self.realm!r} {self.property} {self.subgroup} {self.value}"
return f"<RealmCount: {self.realm} {self.property} {self.subgroup} {self.value}>"
class UserCount(BaseCount):
@@ -114,16 +106,10 @@ class UserCount(BaseCount):
]
# This index dramatically improves the performance of
# aggregating from users to realms
indexes = [
models.Index(
fields=["property", "realm", "end_time"],
name="analytics_usercount_property_realm_id_end_time_591dbec1_idx",
)
]
index_together = ["property", "realm", "end_time"]
@override
def __str__(self) -> str:
return f"{self.user!r} {self.property} {self.subgroup} {self.value}"
return f"<UserCount: {self.user} {self.property} {self.subgroup} {self.value}>"
class StreamCount(BaseCount):
@@ -146,13 +132,9 @@ class StreamCount(BaseCount):
]
# This index dramatically improves the performance of
# aggregating from streams to realms
indexes = [
models.Index(
fields=["property", "realm", "end_time"],
name="analytics_streamcount_property_realm_id_end_time_155ae930_idx",
)
]
index_together = ["property", "realm", "end_time"]
@override
def __str__(self) -> str:
return f"{self.stream!r} {self.property} {self.subgroup} {self.value} {self.id}"
return (
f"<StreamCount: {self.stream} {self.property} {self.subgroup} {self.value} {self.id}>"
)
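The models.py hunks above pair each 11.0-era declaration with the 5.3-era one: typed class attributes, `<ClassName: ...>` reprs, and `Meta.index_together` in place of `Meta.indexes`. The two index spellings create the same composite index; `Meta.indexes` additionally pins the index name, which is what lets later migrations reference it explicitly. A side-by-side sketch (class names invented):

```python
from django.db import models


class IndexesStyle(models.Model):
    property = models.CharField(max_length=32)
    end_time = models.DateTimeField()

    class Meta:
        # Explicitly named composite index, as on the 11.0 side of the diff.
        indexes = [
            models.Index(fields=["property", "end_time"], name="example_prop_end_idx")
        ]


class IndexTogetherStyle(models.Model):
    property = models.CharField(max_length=32)
    end_time = models.DateTimeField()

    class Meta:
        # Older spelling used in 5.3; the index name is auto-generated.
        index_together = [["property", "end_time"]]
```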


@@ -0,0 +1,55 @@
from unittest import mock
from django.utils.timezone import now as timezone_now
from zerver.lib.test_classes import ZulipTestCase
from zerver.lib.test_helpers import queries_captured
from zerver.models import Client, UserActivity, UserProfile, flush_per_request_caches
class ActivityTest(ZulipTestCase):
@mock.patch("stripe.Customer.list", return_value=[])
def test_activity(self, unused_mock: mock.Mock) -> None:
self.login("hamlet")
client, _ = Client.objects.get_or_create(name="website")
query = "/json/messages/flags"
last_visit = timezone_now()
count = 150
for activity_user_profile in UserProfile.objects.all():
UserActivity.objects.get_or_create(
user_profile=activity_user_profile,
client=client,
query=query,
count=count,
last_visit=last_visit,
)
# Fails when not staff
result = self.client_get("/activity")
self.assertEqual(result.status_code, 302)
user_profile = self.example_user("hamlet")
user_profile.is_staff = True
user_profile.save(update_fields=["is_staff"])
flush_per_request_caches()
with queries_captured() as queries:
result = self.client_get("/activity")
self.assertEqual(result.status_code, 200)
self.assert_length(queries, 19)
flush_per_request_caches()
with queries_captured() as queries:
result = self.client_get("/realm_activity/zulip/")
self.assertEqual(result.status_code, 200)
self.assert_length(queries, 8)
iago = self.example_user("iago")
flush_per_request_caches()
with queries_captured() as queries:
result = self.client_get(f"/user_activity/{iago.id}/")
self.assertEqual(result.status_code, 200)
self.assert_length(queries, 5)
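The test above pins exact query counts for each activity page. A distilled sketch of that pattern, assuming a ZulipTestCase subclass: per-request caches are flushed first so memoization from earlier requests cannot mask extra queries, then the SQL issued during the request is captured and counted.

```python
from zerver.lib.test_classes import ZulipTestCase
from zerver.lib.test_helpers import queries_captured
from zerver.models import flush_per_request_caches


class ActivityQueryCountSketch(ZulipTestCase):
    def test_activity_query_count(self) -> None:
        user = self.example_user("hamlet")
        user.is_staff = True  # /activity is server-staff only
        user.save(update_fields=["is_staff"])
        self.login("hamlet")

        flush_per_request_caches()
        with queries_captured() as queries:
            result = self.client_get("/activity")
        self.assertEqual(result.status_code, 200)
        self.assert_length(queries, 19)  # count pinned by the test above
```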

File diff suppressed because it is too large


@@ -1,16 +1,15 @@
from datetime import datetime, timedelta, timezone
from typing import List, Optional
from django.utils.timezone import now as timezone_now
from typing_extensions import override
from analytics.lib.counts import COUNT_STATS, CountStat
from analytics.lib.time_utils import time_range
from analytics.models import FillState, RealmCount, StreamCount, UserCount
from analytics.models import FillState, RealmCount, UserCount
from analytics.views.stats import rewrite_client_arrays, sort_by_totals, sort_client_labels
from zerver.lib.test_classes import ZulipTestCase
from zerver.lib.timestamp import ceiling_to_day, ceiling_to_hour, datetime_to_timestamp
from zerver.models import Client
from zerver.models.realms import get_realm
from zerver.models import Client, get_realm
class TestStatsEndpoint(ZulipTestCase):
@@ -69,12 +68,10 @@ class TestStatsEndpoint(ZulipTestCase):
class TestGetChartData(ZulipTestCase):
@override
def setUp(self) -> None:
super().setUp()
self.realm = get_realm("zulip")
self.user = self.example_user("hamlet")
self.stream_id = self.get_stream_id(self.get_streams(self.user)[0])
self.login_user(self.user)
self.end_times_hour = [
ceiling_to_hour(self.realm.date_created) + timedelta(hours=i) for i in range(4)
@@ -83,11 +80,11 @@ class TestGetChartData(ZulipTestCase):
ceiling_to_day(self.realm.date_created) + timedelta(days=i) for i in range(4)
]
def data(self, i: int) -> list[int]:
def data(self, i: int) -> List[int]:
return [0, 0, i, 0]
def insert_data(
self, stat: CountStat, realm_subgroups: list[str | None], user_subgroups: list[str]
self, stat: CountStat, realm_subgroups: List[Optional[str]], user_subgroups: List[str]
) -> None:
if stat.frequency == CountStat.HOUR:
insert_time = self.end_times_hour[2]
@@ -117,17 +114,6 @@ class TestGetChartData(ZulipTestCase):
)
for i, subgroup in enumerate(user_subgroups)
)
StreamCount.objects.bulk_create(
StreamCount(
property=stat.property,
subgroup=subgroup,
end_time=insert_time,
value=100 + i,
stream_id=self.stream_id,
realm=self.realm,
)
for i, subgroup in enumerate(realm_subgroups)
)
FillState.objects.create(property=stat.property, end_time=fill_time, state=FillState.DONE)
def test_number_of_humans(self) -> None:
@@ -138,7 +124,8 @@ class TestGetChartData(ZulipTestCase):
stat = COUNT_STATS["active_users_audit:is_bot:day"]
self.insert_data(stat, ["false"], [])
result = self.client_get("/json/analytics/chart_data", {"chart_name": "number_of_humans"})
data = self.assert_json_success(result)
self.assert_json_success(result)
data = result.json()
self.assertEqual(
data,
{
@@ -161,7 +148,8 @@ class TestGetChartData(ZulipTestCase):
result = self.client_get(
"/json/analytics/chart_data", {"chart_name": "messages_sent_over_time"}
)
data = self.assert_json_success(result)
self.assert_json_success(result)
data = result.json()
self.assertEqual(
data,
{
@@ -183,7 +171,8 @@ class TestGetChartData(ZulipTestCase):
result = self.client_get(
"/json/analytics/chart_data", {"chart_name": "messages_sent_by_message_type"}
)
data = self.assert_json_success(result)
self.assert_json_success(result)
data = result.json()
self.assertEqual(
data,
{
@@ -191,22 +180,22 @@ class TestGetChartData(ZulipTestCase):
"end_times": [datetime_to_timestamp(dt) for dt in self.end_times_day],
"frequency": CountStat.DAY,
"everyone": {
"Public channels": self.data(100),
"Private channels": self.data(0),
"Direct messages": self.data(101),
"Group direct messages": self.data(0),
"Public streams": self.data(100),
"Private streams": self.data(0),
"Private messages": self.data(101),
"Group private messages": self.data(0),
},
"user": {
"Public channels": self.data(200),
"Private channels": self.data(201),
"Direct messages": self.data(0),
"Group direct messages": self.data(0),
"Public streams": self.data(200),
"Private streams": self.data(201),
"Private messages": self.data(0),
"Group private messages": self.data(0),
},
"display_order": [
"Direct messages",
"Public channels",
"Private channels",
"Group direct messages",
"Private messages",
"Public streams",
"Private streams",
"Group private messages",
],
"result": "success",
},
@@ -226,7 +215,8 @@ class TestGetChartData(ZulipTestCase):
result = self.client_get(
"/json/analytics/chart_data", {"chart_name": "messages_sent_by_client"}
)
data = self.assert_json_success(result)
self.assert_json_success(result)
data = result.json()
self.assertEqual(
data,
{
@@ -250,7 +240,8 @@ class TestGetChartData(ZulipTestCase):
result = self.client_get(
"/json/analytics/chart_data", {"chart_name": "messages_read_over_time"}
)
data = self.assert_json_success(result)
self.assert_json_success(result)
data = result.json()
self.assertEqual(
data,
{
@@ -264,49 +255,6 @@ class TestGetChartData(ZulipTestCase):
},
)
def test_messages_sent_by_stream(self) -> None:
stat = COUNT_STATS["messages_in_stream:is_bot:day"]
self.insert_data(stat, ["true", "false"], [])
result = self.client_get(
f"/json/analytics/chart_data/stream/{self.stream_id}",
{
"chart_name": "messages_sent_by_stream",
},
)
data = self.assert_json_success(result)
self.assertEqual(
data,
{
"msg": "",
"end_times": [datetime_to_timestamp(dt) for dt in self.end_times_day],
"frequency": CountStat.DAY,
"everyone": {"bot": self.data(100), "human": self.data(101)},
"display_order": None,
"result": "success",
},
)
result = self.api_get(
self.example_user("polonius"),
f"/api/v1/analytics/chart_data/stream/{self.stream_id}",
{
"chart_name": "messages_sent_by_stream",
},
)
self.assert_json_error(result, "Not allowed for guest users")
# Verify we correctly forbid access to stats of streams in other realms.
result = self.api_get(
self.mit_user("sipbtest"),
f"/api/v1/analytics/chart_data/stream/{self.stream_id}",
{
"chart_name": "messages_sent_by_stream",
},
subdomain="zephyr",
)
self.assert_json_error(result, "Invalid channel ID")
def test_include_empty_subgroups(self) -> None:
FillState.objects.create(
property="realm_active_humans::day",
@@ -314,7 +262,8 @@ class TestGetChartData(ZulipTestCase):
state=FillState.DONE,
)
result = self.client_get("/json/analytics/chart_data", {"chart_name": "number_of_humans"})
data = self.assert_json_success(result)
self.assert_json_success(result)
data = result.json()
self.assertEqual(data["everyone"], {"_1day": [0], "_15day": [0], "all_time": [0]})
self.assertFalse("user" in data)
@@ -326,7 +275,8 @@ class TestGetChartData(ZulipTestCase):
result = self.client_get(
"/json/analytics/chart_data", {"chart_name": "messages_sent_over_time"}
)
data = self.assert_json_success(result)
self.assert_json_success(result)
data = result.json()
self.assertEqual(data["everyone"], {"human": [0], "bot": [0]})
self.assertEqual(data["user"], {"human": [0], "bot": [0]})
@@ -338,23 +288,24 @@ class TestGetChartData(ZulipTestCase):
result = self.client_get(
"/json/analytics/chart_data", {"chart_name": "messages_sent_by_message_type"}
)
data = self.assert_json_success(result)
self.assert_json_success(result)
data = result.json()
self.assertEqual(
data["everyone"],
{
"Public channels": [0],
"Private channels": [0],
"Direct messages": [0],
"Group direct messages": [0],
"Public streams": [0],
"Private streams": [0],
"Private messages": [0],
"Group private messages": [0],
},
)
self.assertEqual(
data["user"],
{
"Public channels": [0],
"Private channels": [0],
"Direct messages": [0],
"Group direct messages": [0],
"Public streams": [0],
"Private streams": [0],
"Private messages": [0],
"Group private messages": [0],
},
)
@@ -366,7 +317,8 @@ class TestGetChartData(ZulipTestCase):
result = self.client_get(
"/json/analytics/chart_data", {"chart_name": "messages_sent_by_client"}
)
data = self.assert_json_success(result)
self.assert_json_success(result)
data = result.json()
self.assertEqual(data["everyone"], {})
self.assertEqual(data["user"], {})
@@ -388,7 +340,8 @@ class TestGetChartData(ZulipTestCase):
"end": end_time_timestamps[2],
},
)
data = self.assert_json_success(result)
self.assert_json_success(result)
data = result.json()
self.assertEqual(data["end_times"], end_time_timestamps[1:3])
self.assertEqual(
data["everyone"], {"_1day": [0, 100], "_15day": [0, 100], "all_time": [0, 100]}
@@ -416,7 +369,8 @@ class TestGetChartData(ZulipTestCase):
result = self.client_get(
"/json/analytics/chart_data", {"chart_name": "number_of_humans", "min_length": 2}
)
data = self.assert_json_success(result)
self.assert_json_success(result)
data = result.json()
self.assertEqual(
data["end_times"], [datetime_to_timestamp(dt) for dt in self.end_times_day]
)
@@ -428,7 +382,8 @@ class TestGetChartData(ZulipTestCase):
result = self.client_get(
"/json/analytics/chart_data", {"chart_name": "number_of_humans", "min_length": 5}
)
data = self.assert_json_success(result)
self.assert_json_success(result)
data = result.json()
end_times = [
ceiling_to_day(self.realm.date_created) + timedelta(days=i) for i in range(-1, 4)
]
@@ -604,7 +559,7 @@ class TestGetChartData(ZulipTestCase):
class TestGetChartDataHelpers(ZulipTestCase):
def test_sort_by_totals(self) -> None:
empty: list[int] = []
empty: List[int] = []
value_arrays = {"c": [0, 1], "a": [9], "b": [1, 1, 1], "d": empty}
self.assertEqual(sort_by_totals(value_arrays), ["a", "b", "c", "d"])
@@ -660,30 +615,25 @@ class TestMapArrays(ZulipTestCase):
"website": [1, 2, 3],
"ZulipiOS": [1, 2, 3],
"ZulipElectron": [2, 5, 7],
"ZulipMobile": [1, 2, 3],
"ZulipMobile/flutter": [1, 1, 1],
"ZulipFlutter": [1, 1, 1],
"ZulipMobile": [1, 5, 7],
"ZulipPython": [1, 2, 3],
"API: Python": [1, 2, 3],
"SomethingRandom": [4, 5, 6],
"ZulipGitHubWebhook": [7, 7, 9],
"ZulipAndroid": [64, 63, 65],
"ZulipTerminal": [9, 10, 11],
}
result = rewrite_client_arrays(a)
self.assertEqual(
result,
{
"Old desktop app": [32, 36, 39],
"Ancient iOS app": [1, 2, 3],
"Old iOS app": [1, 2, 3],
"Desktop app": [2, 5, 7],
"Old mobile app (React Native)": [1, 2, 3],
"Mobile app (Flutter)": [2, 2, 2],
"Web app": [1, 2, 3],
"Mobile app": [1, 5, 7],
"Website": [1, 2, 3],
"Python API": [2, 4, 6],
"SomethingRandom": [4, 5, 6],
"GitHub webhook": [7, 7, 9],
"Ancient Android app": [64, 63, 65],
"Terminal app": [9, 10, 11],
"Old Android app": [64, 63, 65],
},
)
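rewrite_client_arrays, exercised above, maps raw Client names onto display labels and merges the series of names that collapse to the same label (note how ZulipPython and API: Python sum into Python API). A hedged sketch of that shape, not Zulip's actual implementation, assuming all series share a length:

```python
from typing import Dict, List

# Partial mapping for illustration; the real table is larger.
CLIENT_DISPLAY_NAMES = {
    "website": "Website",
    "ZulipPython": "Python API",
    "API: Python": "Python API",
}


def rewrite_client_arrays_sketch(arrays: Dict[str, List[int]]) -> Dict[str, List[int]]:
    mapped: Dict[str, List[int]] = {}
    for client, series in arrays.items():
        label = CLIENT_DISPLAY_NAMES.get(client, client)
        if label in mapped:
            # Two raw names collapsed to one label: sum element-wise.
            mapped[label] = [a + b for a, b in zip(mapped[label], series)]
        else:
            mapped[label] = list(series)
    return mapped
```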


@@ -0,0 +1,629 @@
from datetime import datetime, timedelta, timezone
from unittest import mock
import orjson
from django.http import HttpResponse
from django.utils.timezone import now as timezone_now
from corporate.lib.stripe import add_months, update_sponsorship_status
from corporate.models import Customer, CustomerPlan, LicenseLedger, get_customer_by_realm
from zerver.actions.invites import do_create_multiuse_invite_link
from zerver.actions.realm_settings import do_send_realm_reactivation_email, do_set_realm_property
from zerver.lib.test_classes import ZulipTestCase
from zerver.lib.test_helpers import reset_emails_in_zulip_realm
from zerver.models import (
MultiuseInvite,
PreregistrationUser,
Realm,
UserMessage,
UserProfile,
get_org_type_display_name,
get_realm,
)
class TestSupportEndpoint(ZulipTestCase):
def test_search(self) -> None:
reset_emails_in_zulip_realm()
def assert_user_details_in_html_response(
html_response: HttpResponse, full_name: str, email: str, role: str
) -> None:
self.assert_in_success_response(
[
'<span class="label">user</span>\n',
f"<h3>{full_name}</h3>",
f"<b>Email</b>: {email}",
"<b>Is active</b>: True<br />",
f"<b>Role</b>: {role}<br />",
],
html_response,
)
def check_hamlet_user_query_result(result: HttpResponse) -> None:
assert_user_details_in_html_response(
result, "King Hamlet", self.example_email("hamlet"), "Member"
)
self.assert_in_success_response(
[
f"<b>Admins</b>: {self.example_email('iago')}\n",
f"<b>Owners</b>: {self.example_email('desdemona')}\n",
'class="copy-button" data-copytext="{}">'.format(self.example_email("iago")),
'class="copy-button" data-copytext="{}">'.format(
self.example_email("desdemona")
),
],
result,
)
def check_othello_user_query_result(result: HttpResponse) -> None:
assert_user_details_in_html_response(
result, "Othello, the Moor of Venice", self.example_email("othello"), "Member"
)
def check_polonius_user_query_result(result: HttpResponse) -> None:
assert_user_details_in_html_response(
result, "Polonius", self.example_email("polonius"), "Guest"
)
def check_zulip_realm_query_result(result: HttpResponse) -> None:
zulip_realm = get_realm("zulip")
first_human_user = zulip_realm.get_first_human_user()
assert first_human_user is not None
self.assert_in_success_response(
[
f"<b>First human user</b>: {first_human_user.delivery_email}\n",
f'<input type="hidden" name="realm_id" value="{zulip_realm.id}"',
"Zulip Dev</h3>",
'<option value="1" selected>Self-hosted</option>',
'<option value="2" >Limited</option>',
'input type="number" name="discount" value="None"',
'<option value="active" selected>Active</option>',
'<option value="deactivated" >Deactivated</option>',
f'<option value="{zulip_realm.org_type}" selected>',
'scrub-realm-button">',
'data-string-id="zulip"',
],
result,
)
def check_lear_realm_query_result(result: HttpResponse) -> None:
lear_realm = get_realm("lear")
self.assert_in_success_response(
[
f'<input type="hidden" name="realm_id" value="{lear_realm.id}"',
"Lear &amp; Co.</h3>",
'<option value="1" selected>Self-hosted</option>',
'<option value="2" >Limited</option>',
'input type="number" name="discount" value="None"',
'<option value="active" selected>Active</option>',
'<option value="deactivated" >Deactivated</option>',
'scrub-realm-button">',
'data-string-id="lear"',
"<b>Name</b>: Zulip Cloud Standard",
"<b>Status</b>: Active",
"<b>Billing schedule</b>: Annual",
"<b>Licenses</b>: 2/10 (Manual)",
"<b>Price per license</b>: $80.0",
"<b>Next invoice date</b>: 02 January 2017",
'<option value="send_invoice" selected>',
'<option value="charge_automatically" >',
],
result,
)
def check_preregistration_user_query_result(
result: HttpResponse, email: str, invite: bool = False
) -> None:
self.assert_in_success_response(
[
'<span class="label">preregistration user</span>\n',
f"<b>Email</b>: {email}",
],
result,
)
if invite:
self.assert_in_success_response(['<span class="label">invite</span>'], result)
self.assert_in_success_response(
[
"<b>Expires in</b>: 1\xa0week, 3\xa0days",
"<b>Status</b>: Link has never been clicked",
],
result,
)
self.assert_in_success_response([], result)
else:
self.assert_not_in_success_response(['<span class="label">invite</span>'], result)
self.assert_in_success_response(
[
"<b>Expires in</b>: 1\xa0day",
"<b>Status</b>: Link has never been clicked",
],
result,
)
def check_realm_creation_query_result(result: HttpResponse, email: str) -> None:
self.assert_in_success_response(
[
'<span class="label">preregistration user</span>\n',
'<span class="label">realm creation</span>\n',
"<b>Link</b>: http://testserver/accounts/do_confirm/",
"<b>Expires in</b>: 1\xa0day",
],
result,
)
def check_multiuse_invite_link_query_result(result: HttpResponse) -> None:
self.assert_in_success_response(
[
'<span class="label">multiuse invite</span>\n',
"<b>Link</b>: http://zulip.testserver/join/",
"<b>Expires in</b>: 1\xa0week, 3\xa0days",
],
result,
)
def check_realm_reactivation_link_query_result(result: HttpResponse) -> None:
self.assert_in_success_response(
[
'<span class="label">realm reactivation</span>\n',
"<b>Link</b>: http://zulip.testserver/reactivate/",
"<b>Expires in</b>: 1\xa0day",
],
result,
)
self.login("cordelia")
result = self.client_get("/activity/support")
self.assertEqual(result.status_code, 302)
self.assertEqual(result["Location"], "/login/")
self.login("iago")
do_set_realm_property(
get_realm("zulip"),
"email_address_visibility",
Realm.EMAIL_ADDRESS_VISIBILITY_NOBODY,
acting_user=None,
)
customer = Customer.objects.create(realm=get_realm("lear"), stripe_customer_id="cus_123")
now = datetime(2016, 1, 2, tzinfo=timezone.utc)
plan = CustomerPlan.objects.create(
customer=customer,
billing_cycle_anchor=now,
billing_schedule=CustomerPlan.ANNUAL,
tier=CustomerPlan.STANDARD,
price_per_license=8000,
next_invoice_date=add_months(now, 12),
)
LicenseLedger.objects.create(
licenses=10,
licenses_at_next_renewal=10,
event_time=timezone_now(),
is_renewal=True,
plan=plan,
)
result = self.client_get("/activity/support")
self.assert_in_success_response(
['<input type="text" name="q" class="input-xxlarge search-query"'], result
)
result = self.client_get("/activity/support", {"q": self.example_email("hamlet")})
check_hamlet_user_query_result(result)
check_zulip_realm_query_result(result)
result = self.client_get("/activity/support", {"q": self.example_email("polonius")})
check_polonius_user_query_result(result)
check_zulip_realm_query_result(result)
result = self.client_get("/activity/support", {"q": "lear"})
check_lear_realm_query_result(result)
result = self.client_get("/activity/support", {"q": "http://lear.testserver"})
check_lear_realm_query_result(result)
with self.settings(REALM_HOSTS={"zulip": "localhost"}):
result = self.client_get("/activity/support", {"q": "http://localhost"})
check_zulip_realm_query_result(result)
result = self.client_get("/activity/support", {"q": "hamlet@zulip.com, lear"})
check_hamlet_user_query_result(result)
check_zulip_realm_query_result(result)
check_lear_realm_query_result(result)
result = self.client_get("/activity/support", {"q": "King hamlet,lear"})
check_hamlet_user_query_result(result)
check_zulip_realm_query_result(result)
check_lear_realm_query_result(result)
result = self.client_get("/activity/support", {"q": "Othello, the Moor of Venice"})
check_othello_user_query_result(result)
check_zulip_realm_query_result(result)
result = self.client_get("/activity/support", {"q": "lear, Hamlet <hamlet@zulip.com>"})
check_hamlet_user_query_result(result)
check_zulip_realm_query_result(result)
check_lear_realm_query_result(result)
with mock.patch(
"analytics.views.support.timezone_now",
return_value=timezone_now() - timedelta(minutes=50),
):
self.client_post("/accounts/home/", {"email": self.nonreg_email("test")})
self.login("iago")
result = self.client_get("/activity/support", {"q": self.nonreg_email("test")})
check_preregistration_user_query_result(result, self.nonreg_email("test"))
check_zulip_realm_query_result(result)
invite_expires_in_days = 10
stream_ids = [self.get_stream_id("Denmark")]
invitee_emails = [self.nonreg_email("test1")]
self.client_post(
"/json/invites",
{
"invitee_emails": invitee_emails,
"stream_ids": orjson.dumps(stream_ids).decode(),
"invite_expires_in_days": invite_expires_in_days,
"invite_as": PreregistrationUser.INVITE_AS["MEMBER"],
},
)
result = self.client_get("/activity/support", {"q": self.nonreg_email("test1")})
check_preregistration_user_query_result(result, self.nonreg_email("test1"), invite=True)
check_zulip_realm_query_result(result)
email = self.nonreg_email("alice")
self.client_post("/new/", {"email": email})
result = self.client_get("/activity/support", {"q": email})
check_realm_creation_query_result(result, email)
do_create_multiuse_invite_link(
self.example_user("hamlet"),
invited_as=1,
invite_expires_in_days=invite_expires_in_days,
)
result = self.client_get("/activity/support", {"q": "zulip"})
check_multiuse_invite_link_query_result(result)
check_zulip_realm_query_result(result)
MultiuseInvite.objects.all().delete()
do_send_realm_reactivation_email(get_realm("zulip"), acting_user=None)
result = self.client_get("/activity/support", {"q": "zulip"})
check_realm_reactivation_link_query_result(result)
check_zulip_realm_query_result(result)
def test_get_org_type_display_name(self) -> None:
self.assertEqual(get_org_type_display_name(Realm.ORG_TYPES["business"]["id"]), "Business")
self.assertEqual(get_org_type_display_name(883), "")
@mock.patch("analytics.views.support.update_billing_method_of_current_plan")
def test_change_billing_method(self, m: mock.Mock) -> None:
cordelia = self.example_user("cordelia")
self.login_user(cordelia)
result = self.client_post(
"/activity/support", {"realm_id": f"{cordelia.realm_id}", "plan_type": "2"}
)
self.assertEqual(result.status_code, 302)
self.assertEqual(result["Location"], "/login/")
iago = self.example_user("iago")
self.login_user(iago)
result = self.client_post(
"/activity/support",
{"realm_id": f"{iago.realm_id}", "billing_method": "charge_automatically"},
)
m.assert_called_once_with(get_realm("zulip"), charge_automatically=True, acting_user=iago)
self.assert_in_success_response(
["Billing method of zulip updated to charge automatically"], result
)
m.reset_mock()
result = self.client_post(
"/activity/support", {"realm_id": f"{iago.realm_id}", "billing_method": "send_invoice"}
)
m.assert_called_once_with(get_realm("zulip"), charge_automatically=False, acting_user=iago)
self.assert_in_success_response(
["Billing method of zulip updated to pay by invoice"], result
)
def test_change_realm_plan_type(self) -> None:
cordelia = self.example_user("cordelia")
self.login_user(cordelia)
result = self.client_post(
"/activity/support", {"realm_id": f"{cordelia.realm_id}", "plan_type": "2"}
)
self.assertEqual(result.status_code, 302)
self.assertEqual(result["Location"], "/login/")
iago = self.example_user("iago")
self.login_user(iago)
with mock.patch("analytics.views.support.do_change_realm_plan_type") as m:
result = self.client_post(
"/activity/support", {"realm_id": f"{iago.realm_id}", "plan_type": "2"}
)
m.assert_called_once_with(get_realm("zulip"), 2, acting_user=iago)
self.assert_in_success_response(
["Plan type of zulip changed from self-hosted to limited"], result
)
with mock.patch("analytics.views.support.do_change_realm_plan_type") as m:
result = self.client_post(
"/activity/support", {"realm_id": f"{iago.realm_id}", "plan_type": "10"}
)
m.assert_called_once_with(get_realm("zulip"), 10, acting_user=iago)
self.assert_in_success_response(
["Plan type of zulip changed from self-hosted to plus"], result
)
def test_change_org_type(self) -> None:
cordelia = self.example_user("cordelia")
self.login_user(cordelia)
result = self.client_post(
"/activity/support", {"realm_id": f"{cordelia.realm_id}", "org_type": "70"}
)
self.assertEqual(result.status_code, 302)
self.assertEqual(result["Location"], "/login/")
iago = self.example_user("iago")
self.login_user(iago)
with mock.patch("analytics.views.support.do_change_realm_org_type") as m:
result = self.client_post(
"/activity/support", {"realm_id": f"{iago.realm_id}", "org_type": "70"}
)
m.assert_called_once_with(get_realm("zulip"), 70, acting_user=iago)
self.assert_in_success_response(
["Org type of zulip changed from Business to Government"], result
)
def test_attach_discount(self) -> None:
cordelia = self.example_user("cordelia")
lear_realm = get_realm("lear")
self.login_user(cordelia)
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "discount": "25"}
)
self.assertEqual(result.status_code, 302)
self.assertEqual(result["Location"], "/login/")
iago = self.example_user("iago")
self.login("iago")
with mock.patch("analytics.views.support.attach_discount_to_realm") as m:
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "discount": "25"}
)
m.assert_called_once_with(get_realm("lear"), 25, acting_user=iago)
self.assert_in_success_response(["Discount of lear changed to 25% from 0%"], result)
def test_change_sponsorship_status(self) -> None:
lear_realm = get_realm("lear")
self.assertIsNone(get_customer_by_realm(lear_realm))
cordelia = self.example_user("cordelia")
self.login_user(cordelia)
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "sponsorship_pending": "true"}
)
self.assertEqual(result.status_code, 302)
self.assertEqual(result["Location"], "/login/")
iago = self.example_user("iago")
self.login_user(iago)
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "sponsorship_pending": "true"}
)
self.assert_in_success_response(["lear marked as pending sponsorship."], result)
customer = get_customer_by_realm(lear_realm)
assert customer is not None
self.assertTrue(customer.sponsorship_pending)
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "sponsorship_pending": "false"}
)
self.assert_in_success_response(["lear is no longer pending sponsorship."], result)
customer = get_customer_by_realm(lear_realm)
assert customer is not None
self.assertFalse(customer.sponsorship_pending)
def test_approve_sponsorship(self) -> None:
lear_realm = get_realm("lear")
update_sponsorship_status(lear_realm, True, acting_user=None)
king_user = self.lear_user("king")
king_user.role = UserProfile.ROLE_REALM_OWNER
king_user.save()
cordelia = self.example_user("cordelia")
self.login_user(cordelia)
result = self.client_post(
"/activity/support",
{"realm_id": f"{lear_realm.id}", "approve_sponsorship": "true"},
)
self.assertEqual(result.status_code, 302)
self.assertEqual(result["Location"], "/login/")
iago = self.example_user("iago")
self.login_user(iago)
result = self.client_post(
"/activity/support",
{"realm_id": f"{lear_realm.id}", "approve_sponsorship": "true"},
)
self.assert_in_success_response(["Sponsorship approved for lear"], result)
lear_realm.refresh_from_db()
self.assertEqual(lear_realm.plan_type, Realm.PLAN_TYPE_STANDARD_FREE)
customer = get_customer_by_realm(lear_realm)
assert customer is not None
self.assertFalse(customer.sponsorship_pending)
messages = UserMessage.objects.filter(user_profile=king_user)
self.assertIn(
"request for sponsored hosting has been approved", messages[0].message.content
)
self.assert_length(messages, 1)
def test_activate_or_deactivate_realm(self) -> None:
cordelia = self.example_user("cordelia")
lear_realm = get_realm("lear")
self.login_user(cordelia)
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "status": "deactivated"}
)
self.assertEqual(result.status_code, 302)
self.assertEqual(result["Location"], "/login/")
self.login("iago")
with mock.patch("analytics.views.support.do_deactivate_realm") as m:
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "status": "deactivated"}
)
m.assert_called_once_with(lear_realm, acting_user=self.example_user("iago"))
self.assert_in_success_response(["lear deactivated"], result)
with mock.patch("analytics.views.support.do_send_realm_reactivation_email") as m:
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "status": "active"}
)
m.assert_called_once_with(lear_realm, acting_user=self.example_user("iago"))
self.assert_in_success_response(
["Realm reactivation email sent to admins of lear"], result
)
def test_change_subdomain(self) -> None:
cordelia = self.example_user("cordelia")
lear_realm = get_realm("lear")
self.login_user(cordelia)
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "new_subdomain": "new_name"}
)
self.assertEqual(result.status_code, 302)
self.assertEqual(result["Location"], "/login/")
self.login("iago")
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "new_subdomain": "new-name"}
)
self.assertEqual(result.status_code, 302)
self.assertEqual(result["Location"], "/activity/support?q=new-name")
realm_id = lear_realm.id
lear_realm = get_realm("new-name")
self.assertEqual(lear_realm.id, realm_id)
self.assertTrue(Realm.objects.filter(string_id="lear").exists())
self.assertTrue(Realm.objects.filter(string_id="lear")[0].deactivated)
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "new_subdomain": "new-name"}
)
self.assert_in_success_response(
["Subdomain unavailable. Please choose a different one."], result
)
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "new_subdomain": "zulip"}
)
self.assert_in_success_response(
["Subdomain unavailable. Please choose a different one."], result
)
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "new_subdomain": "lear"}
)
self.assert_in_success_response(
["Subdomain unavailable. Please choose a different one."], result
)
def test_downgrade_realm(self) -> None:
cordelia = self.example_user("cordelia")
self.login_user(cordelia)
result = self.client_post(
"/activity/support", {"realm_id": f"{cordelia.realm_id}", "plan_type": "2"}
)
self.assertEqual(result.status_code, 302)
self.assertEqual(result["Location"], "/login/")
iago = self.example_user("iago")
self.login_user(iago)
with mock.patch("analytics.views.support.downgrade_at_the_end_of_billing_cycle") as m:
result = self.client_post(
"/activity/support",
{
"realm_id": f"{iago.realm_id}",
"downgrade_method": "downgrade_at_billing_cycle_end",
},
)
m.assert_called_once_with(get_realm("zulip"))
self.assert_in_success_response(
["zulip marked for downgrade at the end of billing cycle"], result
)
with mock.patch(
"analytics.views.support.downgrade_now_without_creating_additional_invoices"
) as m:
result = self.client_post(
"/activity/support",
{
"realm_id": f"{iago.realm_id}",
"downgrade_method": "downgrade_now_without_additional_licenses",
},
)
m.assert_called_once_with(get_realm("zulip"))
self.assert_in_success_response(
["zulip downgraded without creating additional invoices"], result
)
with mock.patch(
"analytics.views.support.downgrade_now_without_creating_additional_invoices"
) as m1:
with mock.patch("analytics.views.support.void_all_open_invoices", return_value=1) as m2:
result = self.client_post(
"/activity/support",
{
"realm_id": f"{iago.realm_id}",
"downgrade_method": "downgrade_now_void_open_invoices",
},
)
m1.assert_called_once_with(get_realm("zulip"))
m2.assert_called_once_with(get_realm("zulip"))
self.assert_in_success_response(
["zulip downgraded and voided 1 open invoices"], result
)
def test_scrub_realm(self) -> None:
cordelia = self.example_user("cordelia")
lear_realm = get_realm("lear")
self.login_user(cordelia)
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "discount": "25"}
)
self.assertEqual(result.status_code, 302)
self.assertEqual(result["Location"], "/login/")
self.login("iago")
with mock.patch("analytics.views.support.do_scrub_realm") as m:
result = self.client_post(
"/activity/support", {"realm_id": f"{lear_realm.id}", "scrub_realm": "true"}
)
m.assert_called_once_with(lear_realm, acting_user=self.example_user("iago"))
self.assert_in_success_response(["lear scrubbed"], result)
with mock.patch("analytics.views.support.do_scrub_realm") as m:
result = self.client_post("/activity/support", {"realm_id": f"{lear_realm.id}"})
self.assert_json_error(result, "Invalid parameters")
m.assert_not_called()
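Every action test in this class repeats one structure: an ordinary member is redirected to /login/, then a privileged user's POST is verified by patching the underlying action. A sketch of that skeleton (helper and class names invented):

```python
from unittest import mock

from zerver.lib.test_classes import ZulipTestCase


class SupportActionSketch(ZulipTestCase):
    def assert_staff_only_action(self, params: dict, dotted_action: str) -> None:
        self.login("cordelia")  # ordinary member: bounced to the login page
        result = self.client_post("/activity/support", params)
        self.assertEqual(result.status_code, 302)
        self.assertEqual(result["Location"], "/login/")

        self.login("iago")  # privileged fixture user used by these tests
        with mock.patch(dotted_action) as m:
            self.client_post("/activity/support", params)
        m.assert_called_once()
```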


@@ -1,38 +1,43 @@
from django.conf import settings
from typing import List, Union
from django.conf.urls import include
from django.urls import path
from django.urls.resolvers import URLPattern, URLResolver
from analytics.views.installation_activity import get_installation_activity
from analytics.views.realm_activity import get_realm_activity
from analytics.views.stats import (
get_chart_data,
get_chart_data_for_installation,
get_chart_data_for_realm,
get_chart_data_for_stream,
get_chart_data_for_remote_installation,
get_chart_data_for_remote_realm,
stats,
stats_for_installation,
stats_for_realm,
stats_for_remote_installation,
stats_for_remote_realm,
)
from analytics.views.support import support
from analytics.views.user_activity import get_user_activity
from zerver.lib.rest import rest_path
i18n_urlpatterns: list[URLPattern | URLResolver] = [
i18n_urlpatterns: List[Union[URLPattern, URLResolver]] = [
# Server admin (user_profile.is_staff) visible stats pages
path("activity", get_installation_activity),
path("activity/support", support, name="support"),
path("realm_activity/<realm_str>/", get_realm_activity),
path("user_activity/<user_profile_id>/", get_user_activity),
path("stats/realm/<realm_str>/", stats_for_realm),
path("stats/installation", stats_for_installation),
path("stats/remote/<int:remote_server_id>/installation", stats_for_remote_installation),
path(
"stats/remote/<int:remote_server_id>/realm/<int:remote_realm_id>/", stats_for_remote_realm
),
# User-visible stats page
path("stats", stats, name="stats"),
]
if settings.ZILENCER_ENABLED:
from analytics.views.stats import stats_for_remote_installation, stats_for_remote_realm
i18n_urlpatterns += [
path("stats/remote/<int:remote_server_id>/installation", stats_for_remote_installation),
path(
"stats/remote/<int:remote_server_id>/realm/<int:remote_realm_id>/",
stats_for_remote_realm,
),
]
# These endpoints are a part of the API (V1), which uses:
# * REST verbs
# * Basic auth (username:password is email:apiKey)
@@ -44,28 +49,18 @@ if settings.ZILENCER_ENABLED:
v1_api_and_json_patterns = [
# get data for the graphs at /stats
rest_path("analytics/chart_data", GET=get_chart_data),
rest_path("analytics/chart_data/stream/<stream_id>", GET=get_chart_data_for_stream),
rest_path("analytics/chart_data/realm/<realm_str>", GET=get_chart_data_for_realm),
rest_path("analytics/chart_data/installation", GET=get_chart_data_for_installation),
rest_path(
"analytics/chart_data/remote/<int:remote_server_id>/installation",
GET=get_chart_data_for_remote_installation,
),
rest_path(
"analytics/chart_data/remote/<int:remote_server_id>/realm/<int:remote_realm_id>",
GET=get_chart_data_for_remote_realm,
),
]
if settings.ZILENCER_ENABLED:
from analytics.views.stats import (
get_chart_data_for_remote_installation,
get_chart_data_for_remote_realm,
)
v1_api_and_json_patterns += [
rest_path(
"analytics/chart_data/remote/<int:remote_server_id>/installation",
GET=get_chart_data_for_remote_installation,
),
rest_path(
"analytics/chart_data/remote/<int:remote_server_id>/realm/<int:remote_realm_id>",
GET=get_chart_data_for_remote_realm,
),
]
i18n_urlpatterns += [
path("api/v1/", include(v1_api_and_json_patterns)),
path("json/", include(v1_api_and_json_patterns)),


@@ -0,0 +1,137 @@
import re
from datetime import datetime
from html import escape
from typing import Any, Dict, List, Optional, Sequence
import pytz
from django.conf import settings
from django.db.backends.utils import CursorWrapper
from django.db.models.query import QuerySet
from django.template import loader
from django.urls import reverse
from markupsafe import Markup as mark_safe
eastern_tz = pytz.timezone("US/Eastern")
if settings.BILLING_ENABLED:
pass
def make_table(
title: str, cols: Sequence[str], rows: Sequence[Any], has_row_class: bool = False
) -> str:
if not has_row_class:
def fix_row(row: Any) -> Dict[str, Any]:
return dict(cells=row, row_class=None)
rows = list(map(fix_row, rows))
data = dict(title=title, cols=cols, rows=rows)
content = loader.render_to_string(
"analytics/ad_hoc_query.html",
dict(data=data),
)
return content
def dictfetchall(cursor: CursorWrapper) -> List[Dict[str, Any]]:
"Returns all rows from a cursor as a dict"
desc = cursor.description
return [dict(zip((col[0] for col in desc), row)) for row in cursor.fetchall()]
def format_date_for_activity_reports(date: Optional[datetime]) -> str:
if date:
return date.astimezone(eastern_tz).strftime("%Y-%m-%d %H:%M")
else:
return ""
def user_activity_link(email: str, user_profile_id: int) -> mark_safe:
from analytics.views.user_activity import get_user_activity
url = reverse(get_user_activity, kwargs=dict(user_profile_id=user_profile_id))
email_link = f'<a href="{escape(url)}">{escape(email)}</a>'
return mark_safe(email_link)
def realm_activity_link(realm_str: str) -> mark_safe:
from analytics.views.realm_activity import get_realm_activity
url = reverse(get_realm_activity, kwargs=dict(realm_str=realm_str))
realm_link = f'<a href="{escape(url)}">{escape(realm_str)}</a>'
return mark_safe(realm_link)
def realm_stats_link(realm_str: str) -> mark_safe:
from analytics.views.stats import stats_for_realm
url = reverse(stats_for_realm, kwargs=dict(realm_str=realm_str))
stats_link = f'<a href="{escape(url)}"><i class="fa fa-pie-chart"></i>{escape(realm_str)}</a>'
return mark_safe(stats_link)
def remote_installation_stats_link(server_id: int, hostname: str) -> mark_safe:
from analytics.views.stats import stats_for_remote_installation
url = reverse(stats_for_remote_installation, kwargs=dict(remote_server_id=server_id))
stats_link = f'<a href="{escape(url)}"><i class="fa fa-pie-chart"></i>{escape(hostname)}</a>'
return mark_safe(stats_link)
def get_user_activity_summary(records: List[QuerySet]) -> Dict[str, Any]:
#: The type annotation used above is clearly overly permissive.
#: We should perhaps use TypedDict to clearly lay out the schema
#: for the user activity summary.
summary: Dict[str, Any] = {}
def update(action: str, record: QuerySet) -> None:
if action not in summary:
summary[action] = dict(
count=record.count,
last_visit=record.last_visit,
)
else:
summary[action]["count"] += record.count
summary[action]["last_visit"] = max(
summary[action]["last_visit"],
record.last_visit,
)
if records:
summary["name"] = records[0].user_profile.full_name
summary["user_profile_id"] = records[0].user_profile.id
for record in records:
client = record.client.name
query = str(record.query)
update("use", record)
if client == "API":
m = re.match("/api/.*/external/(.*)", query)
if m:
client = m.group(1)
update(client, record)
if client.startswith("desktop"):
update("desktop", record)
if client == "website":
update("website", record)
if ("send_message" in query) or re.search("/api/.*/external/.*", query):
update("send", record)
if query in [
"/json/update_pointer",
"/json/users/me/pointer",
"/api/v1/update_pointer",
"update_pointer_backend",
]:
update("pointer", record)
update(client, record)
return summary
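dictfetchall above zips cursor.description column names with each fetched row, so raw-SQL results come back as dicts rather than positional tuples; the activity views below lean on this. Usage sketch (query illustrative):

```python
from django.db import connection

from analytics.views.activity_common import dictfetchall

with connection.cursor() as cursor:
    cursor.execute("SELECT id, name FROM zerver_client")
    rows = dictfetchall(cursor)
# rows is now e.g. [{"id": 1, "name": "website"}, ...]
```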


@@ -0,0 +1,622 @@
import itertools
import time
from collections import defaultdict
from datetime import datetime, timedelta
from typing import Callable, Dict, List, Optional, Sequence, Tuple, Union
from django.conf import settings
from django.db import connection
from django.http import HttpRequest, HttpResponse
from django.shortcuts import render
from django.template import loader
from django.utils.timezone import now as timezone_now
from markupsafe import Markup as mark_safe
from psycopg2.sql import SQL, Composable, Literal
from analytics.lib.counts import COUNT_STATS
from analytics.views.activity_common import (
dictfetchall,
format_date_for_activity_reports,
make_table,
realm_activity_link,
realm_stats_link,
remote_installation_stats_link,
)
from analytics.views.support import get_plan_name
from zerver.decorator import require_server_admin
from zerver.lib.request import has_request_variables
from zerver.lib.timestamp import timestamp_to_datetime
from zerver.models import Realm, UserActivityInterval, UserProfile, get_org_type_display_name
if settings.BILLING_ENABLED:
from corporate.lib.stripe import (
estimate_annual_recurring_revenue_by_realm,
get_realms_to_default_discount_dict,
)
def get_realm_day_counts() -> Dict[str, Dict[str, str]]:
query = SQL(
"""
select
r.string_id,
(now()::date - date_sent::date) age,
count(*) cnt
from zerver_message m
join zerver_userprofile up on up.id = m.sender_id
join zerver_realm r on r.id = up.realm_id
join zerver_client c on c.id = m.sending_client_id
where
(not up.is_bot)
and
date_sent > now()::date - interval '8 day'
and
c.name not in ('zephyr_mirror', 'ZulipMonitoring')
group by
r.string_id,
age
order by
r.string_id,
age
"""
)
cursor = connection.cursor()
cursor.execute(query)
rows = dictfetchall(cursor)
cursor.close()
counts: Dict[str, Dict[int, int]] = defaultdict(dict)
for row in rows:
counts[row["string_id"]][row["age"]] = row["cnt"]
result = {}
for string_id in counts:
raw_cnts = [counts[string_id].get(age, 0) for age in range(8)]
min_cnt = min(raw_cnts[1:])
max_cnt = max(raw_cnts[1:])
def format_count(cnt: int, style: Optional[str] = None) -> str:
if style is not None:
good_bad = style
elif cnt == min_cnt:
good_bad = "bad"
elif cnt == max_cnt:
good_bad = "good"
else:
good_bad = "neutral"
return f'<td class="number {good_bad}">{cnt}</td>'
cnts = format_count(raw_cnts[0], "neutral") + "".join(map(format_count, raw_cnts[1:]))
result[string_id] = dict(cnts=cnts)
return result
def realm_summary_table(realm_minutes: Dict[str, float]) -> str:
now = timezone_now()
query = SQL(
"""
SELECT
realm.string_id,
realm.date_created,
realm.plan_type,
realm.org_type,
coalesce(wau_table.value, 0) wau_count,
coalesce(dau_table.value, 0) dau_count,
coalesce(user_count_table.value, 0) user_profile_count,
coalesce(bot_count_table.value, 0) bot_count
FROM
zerver_realm as realm
LEFT OUTER JOIN (
SELECT
value _14day_active_humans,
realm_id
from
analytics_realmcount
WHERE
property = 'realm_active_humans::day'
AND end_time = %(realm_active_humans_end_time)s
) as _14day_active_humans_table ON realm.id = _14day_active_humans_table.realm_id
LEFT OUTER JOIN (
SELECT
value,
realm_id
from
analytics_realmcount
WHERE
property = '7day_actives::day'
AND end_time = %(seven_day_actives_end_time)s
) as wau_table ON realm.id = wau_table.realm_id
LEFT OUTER JOIN (
SELECT
value,
realm_id
from
analytics_realmcount
WHERE
property = '1day_actives::day'
AND end_time = %(one_day_actives_end_time)s
) as dau_table ON realm.id = dau_table.realm_id
LEFT OUTER JOIN (
SELECT
value,
realm_id
from
analytics_realmcount
WHERE
property = 'active_users_audit:is_bot:day'
AND subgroup = 'false'
AND end_time = %(active_users_audit_end_time)s
) as user_count_table ON realm.id = user_count_table.realm_id
LEFT OUTER JOIN (
SELECT
value,
realm_id
from
analytics_realmcount
WHERE
property = 'active_users_audit:is_bot:day'
AND subgroup = 'true'
AND end_time = %(active_users_audit_end_time)s
) as bot_count_table ON realm.id = bot_count_table.realm_id
WHERE
_14day_active_humans IS NOT NULL
or realm.plan_type = 3
ORDER BY
dau_count DESC,
string_id ASC
"""
)
cursor = connection.cursor()
cursor.execute(
query,
{
"realm_active_humans_end_time": COUNT_STATS[
"realm_active_humans::day"
].last_successful_fill(),
"seven_day_actives_end_time": COUNT_STATS["7day_actives::day"].last_successful_fill(),
"one_day_actives_end_time": COUNT_STATS["1day_actives::day"].last_successful_fill(),
"active_users_audit_end_time": COUNT_STATS[
"active_users_audit:is_bot:day"
].last_successful_fill(),
},
)
rows = dictfetchall(cursor)
cursor.close()
# Fetch all the realm administrator users
realm_owners: Dict[str, List[str]] = defaultdict(list)
for up in UserProfile.objects.select_related("realm").filter(
role=UserProfile.ROLE_REALM_OWNER,
is_active=True,
):
realm_owners[up.realm.string_id].append(up.delivery_email)
for row in rows:
row["date_created_day"] = row["date_created"].strftime("%Y-%m-%d")
row["age_days"] = int((now - row["date_created"]).total_seconds() / 86400)
row["is_new"] = row["age_days"] < 12 * 7
row["realm_owner_emails"] = ", ".join(realm_owners[row["string_id"]])
# get messages sent per day
counts = get_realm_day_counts()
for row in rows:
try:
row["history"] = counts[row["string_id"]]["cnts"]
except Exception:
row["history"] = ""
# estimate annual subscription revenue
total_arr = 0
if settings.BILLING_ENABLED:
estimated_arrs = estimate_annual_recurring_revenue_by_realm()
realms_to_default_discount = get_realms_to_default_discount_dict()
for row in rows:
row["plan_type_string"] = get_plan_name(row["plan_type"])
string_id = row["string_id"]
if string_id in estimated_arrs:
row["arr"] = estimated_arrs[string_id]
if row["plan_type"] in [Realm.PLAN_TYPE_STANDARD, Realm.PLAN_TYPE_PLUS]:
row["effective_rate"] = 100 - int(realms_to_default_discount.get(string_id, 0))
elif row["plan_type"] == Realm.PLAN_TYPE_STANDARD_FREE:
row["effective_rate"] = 0
elif (
row["plan_type"] == Realm.PLAN_TYPE_LIMITED
and string_id in realms_to_default_discount
):
row["effective_rate"] = 100 - int(realms_to_default_discount[string_id])
else:
row["effective_rate"] = ""
total_arr += sum(estimated_arrs.values())
for row in rows:
row["org_type_string"] = get_org_type_display_name(row["org_type"])
# augment data with realm_minutes
total_hours = 0.0
for row in rows:
string_id = row["string_id"]
minutes = realm_minutes.get(string_id, 0.0)
hours = minutes / 60.0
total_hours += hours
row["hours"] = str(int(hours))
try:
row["hours_per_user"] = "{:.1f}".format(hours / row["dau_count"])
except Exception:
pass
# formatting
for row in rows:
row["stats_link"] = realm_stats_link(row["string_id"])
row["string_id"] = realm_activity_link(row["string_id"])
# Count active sites
def meets_goal(row: Dict[str, int]) -> bool:
return row["dau_count"] >= 5
num_active_sites = len(list(filter(meets_goal, rows)))
# create totals
total_dau_count = 0
total_user_profile_count = 0
total_bot_count = 0
total_wau_count = 0
for row in rows:
total_dau_count += int(row["dau_count"])
total_user_profile_count += int(row["user_profile_count"])
total_bot_count += int(row["bot_count"])
total_wau_count += int(row["wau_count"])
total_row = dict(
string_id="Total",
plan_type_string="",
org_type_string="",
effective_rate="",
arr=total_arr,
stats_link="",
date_created_day="",
realm_owner_emails="",
dau_count=total_dau_count,
user_profile_count=total_user_profile_count,
bot_count=total_bot_count,
hours=int(total_hours),
wau_count=total_wau_count,
)
rows.insert(0, total_row)
content = loader.render_to_string(
"analytics/realm_summary_table.html",
dict(
rows=rows,
num_active_sites=num_active_sites,
utctime=now.strftime("%Y-%m-%d %H:%MZ"),
billing_enabled=settings.BILLING_ENABLED,
),
)
return content
def user_activity_intervals() -> Tuple[mark_safe, Dict[str, float]]:
day_end = timestamp_to_datetime(time.time())
day_start = day_end - timedelta(hours=24)
output = "Per-user online duration for the last 24 hours:\n"
total_duration = timedelta(0)
all_intervals = (
UserActivityInterval.objects.filter(
end__gte=day_start,
start__lte=day_end,
)
.select_related(
"user_profile",
"user_profile__realm",
)
.only(
"start",
"end",
"user_profile__delivery_email",
"user_profile__realm__string_id",
)
.order_by(
"user_profile__realm__string_id",
"user_profile__delivery_email",
)
)
by_string_id = lambda row: row.user_profile.realm.string_id
by_email = lambda row: row.user_profile.delivery_email
realm_minutes = {}
for string_id, realm_intervals in itertools.groupby(all_intervals, by_string_id):
realm_duration = timedelta(0)
output += f"<hr>{string_id}\n"
for email, intervals in itertools.groupby(realm_intervals, by_email):
duration = timedelta(0)
for interval in intervals:
start = max(day_start, interval.start)
end = min(day_end, interval.end)
duration += end - start
total_duration += duration
realm_duration += duration
output += f" {email:<37}{duration}\n"
realm_minutes[string_id] = realm_duration.total_seconds() / 60
output += f"\nTotal duration: {total_duration}\n"
output += f"\nTotal duration in minutes: {total_duration.total_seconds() / 60.}\n"
output += f"Total duration amortized to a month: {total_duration.total_seconds() * 30. / 60.}"
content = mark_safe("<pre>" + output + "</pre>")
return content, realm_minutes
def ad_hoc_queries() -> List[Dict[str, str]]:
def get_page(
query: Composable, cols: Sequence[str], title: str, totals_columns: Sequence[int] = []
) -> Dict[str, str]:
cursor = connection.cursor()
cursor.execute(query)
rows = cursor.fetchall()
rows = list(map(list, rows))
cursor.close()
def fix_rows(
i: int, fixup_func: Union[Callable[[str], mark_safe], Callable[[datetime], str]]
) -> None:
for row in rows:
row[i] = fixup_func(row[i])
total_row = []
for i, col in enumerate(cols):
if col == "Realm":
fix_rows(i, realm_activity_link)
elif col in ["Last time", "Last visit"]:
fix_rows(i, format_date_for_activity_reports)
elif col == "Hostname":
for row in rows:
row[i] = remote_installation_stats_link(row[0], row[i])
if len(totals_columns) > 0:
if i == 0:
total_row.append("Total")
elif i in totals_columns:
total_row.append(str(sum(row[i] for row in rows if row[i] is not None)))
else:
total_row.append("")
if len(totals_columns) > 0:
rows.insert(0, total_row)
content = make_table(title, cols, rows)
return dict(
content=content,
title=title,
)
pages = []
###
for mobile_type in ["Android", "ZulipiOS"]:
title = f"{mobile_type} usage"
query = SQL(
"""
select
realm.string_id,
up.id user_id,
client.name,
sum(count) as hits,
max(last_visit) as last_time
from zerver_useractivity ua
join zerver_client client on client.id = ua.client_id
join zerver_userprofile up on up.id = ua.user_profile_id
join zerver_realm realm on realm.id = up.realm_id
where
client.name like {mobile_type}
group by string_id, up.id, client.name
having max(last_visit) > now() - interval '2 week'
order by string_id, up.id, client.name
"""
).format(
mobile_type=Literal(mobile_type),
)
cols = [
"Realm",
"User id",
"Name",
"Hits",
"Last time",
]
pages.append(get_page(query, cols, title))
###
title = "Desktop users"
query = SQL(
"""
select
realm.string_id,
client.name,
sum(count) as hits,
max(last_visit) as last_time
from zerver_useractivity ua
join zerver_client client on client.id = ua.client_id
join zerver_userprofile up on up.id = ua.user_profile_id
join zerver_realm realm on realm.id = up.realm_id
where
client.name like 'desktop%%'
group by string_id, client.name
having max(last_visit) > now() - interval '2 week'
order by string_id, client.name
"""
)
cols = [
"Realm",
"Client",
"Hits",
"Last time",
]
pages.append(get_page(query, cols, title))
###
title = "Integrations by realm"
query = SQL(
"""
select
realm.string_id,
case
when query like '%%external%%' then split_part(query, '/', 5)
else client.name
end client_name,
sum(count) as hits,
max(last_visit) as last_time
from zerver_useractivity ua
join zerver_client client on client.id = ua.client_id
join zerver_userprofile up on up.id = ua.user_profile_id
join zerver_realm realm on realm.id = up.realm_id
where
(query in ('send_message_backend', '/api/v1/send_message')
and client.name not in ('Android', 'ZulipiOS')
and client.name not like 'test: Zulip%%'
)
or
query like '%%external%%'
group by string_id, client_name
having max(last_visit) > now() - interval '2 week'
order by string_id, client_name
"""
)
cols = [
"Realm",
"Client",
"Hits",
"Last time",
]
pages.append(get_page(query, cols, title))
###
title = "Integrations by client"
query = SQL(
"""
select
case
when query like '%%external%%' then split_part(query, '/', 5)
else client.name
end client_name,
realm.string_id,
sum(count) as hits,
max(last_visit) as last_time
from zerver_useractivity ua
join zerver_client client on client.id = ua.client_id
join zerver_userprofile up on up.id = ua.user_profile_id
join zerver_realm realm on realm.id = up.realm_id
where
(query in ('send_message_backend', '/api/v1/send_message')
and client.name not in ('Android', 'ZulipiOS')
and client.name not like 'test: Zulip%%'
)
or
query like '%%external%%'
group by client_name, string_id
having max(last_visit) > now() - interval '2 week'
order by client_name, string_id
"""
)
cols = [
"Client",
"Realm",
"Hits",
"Last time",
]
pages.append(get_page(query, cols, title))
title = "Remote Zulip servers"
query = SQL(
"""
with icount as (
select
server_id,
max(value) as max_value,
max(end_time) as max_end_time
from zilencer_remoteinstallationcount
where
property='active_users:is_bot:day'
and subgroup='false'
group by server_id
),
remote_push_devices as (
select server_id, count(distinct(user_id)) as push_user_count from zilencer_remotepushdevicetoken
group by server_id
)
select
rserver.id,
rserver.hostname,
rserver.contact_email,
max_value,
push_user_count,
max_end_time
from zilencer_remotezulipserver rserver
left join icount on icount.server_id = rserver.id
left join remote_push_devices on remote_push_devices.server_id = rserver.id
order by max_value DESC NULLS LAST, push_user_count DESC NULLS LAST
"""
)
cols = [
"ID",
"Hostname",
"Contact email",
"Analytics users",
"Mobile users",
"Last update time",
]
pages.append(get_page(query, cols, title, totals_columns=[3, 4]))
return pages
@require_server_admin
@has_request_variables
def get_installation_activity(request: HttpRequest) -> HttpResponse:
duration_content, realm_minutes = user_activity_intervals()
counts_content: str = realm_summary_table(realm_minutes)
data = [
("Counts", counts_content),
("Durations", duration_content),
]
for page in ad_hoc_queries():
data.append((page["title"], page["content"]))
title = "Activity"
return render(
request,
"analytics/activity.html",
context=dict(data=data, title=title, is_home=True),
)


@@ -0,0 +1,259 @@
import itertools
from datetime import datetime
from typing import Any, Dict, List, Optional, Set, Tuple
from django.db import connection
from django.db.models.query import QuerySet
from django.http import HttpRequest, HttpResponse, HttpResponseNotFound
from django.shortcuts import render
from django.utils.timezone import now as timezone_now
from psycopg2.sql import SQL
from analytics.views.activity_common import (
format_date_for_activity_reports,
get_user_activity_summary,
make_table,
user_activity_link,
)
from zerver.decorator import require_server_admin
from zerver.models import Realm, UserActivity
def get_user_activity_records_for_realm(realm: str, is_bot: bool) -> QuerySet:
fields = [
"user_profile__full_name",
"user_profile__delivery_email",
"query",
"client__name",
"count",
"last_visit",
]
records = UserActivity.objects.filter(
user_profile__realm__string_id=realm,
user_profile__is_active=True,
user_profile__is_bot=is_bot,
)
records = records.order_by("user_profile__delivery_email", "-last_visit")
records = records.select_related("user_profile", "client").only(*fields)
return records
def realm_user_summary_table(
all_records: List[UserActivity], admin_emails: Set[str]
) -> Tuple[Dict[str, Any], str]:
user_records = {}
def by_email(record: UserActivity) -> str:
return record.user_profile.delivery_email
for email, records in itertools.groupby(all_records, by_email):
user_records[email] = get_user_activity_summary(list(records))
def get_last_visit(user_summary: Dict[str, Dict[str, datetime]], k: str) -> Optional[datetime]:
if k in user_summary:
return user_summary[k]["last_visit"]
else:
return None
def get_count(user_summary: Dict[str, Dict[str, str]], k: str) -> str:
if k in user_summary:
return user_summary[k]["count"]
else:
return ""
def is_recent(val: datetime) -> bool:
age = timezone_now() - val
return age.total_seconds() < 5 * 60
rows = []
for email, user_summary in user_records.items():
email_link = user_activity_link(email, user_summary["user_profile_id"])
sent_count = get_count(user_summary, "send")
cells = [user_summary["name"], email_link, sent_count]
row_class = ""
for field in ["use", "send", "pointer", "desktop", "ZulipiOS", "Android"]:
visit = get_last_visit(user_summary, field)
if field == "use":
if visit and is_recent(visit):
row_class += " recently_active"
if email in admin_emails:
row_class += " admin"
val = format_date_for_activity_reports(visit)
cells.append(val)
row = dict(cells=cells, row_class=row_class)
rows.append(row)
def by_used_time(row: Dict[str, Any]) -> str:
return row["cells"][3]
rows = sorted(rows, key=by_used_time, reverse=True)
cols = [
"Name",
"Email",
"Total sent",
"Heard from",
"Message sent",
"Pointer motion",
"Desktop",
"ZulipiOS",
"Android",
]
title = "Summary"
content = make_table(title, cols, rows, has_row_class=True)
return user_records, content
def realm_client_table(user_summaries: Dict[str, Dict[str, Any]]) -> str:
exclude_keys = [
"internal",
"name",
"user_profile_id",
"use",
"send",
"pointer",
"website",
"desktop",
]
rows = []
for email, user_summary in user_summaries.items():
email_link = user_activity_link(email, user_summary["user_profile_id"])
name = user_summary["name"]
for k, v in user_summary.items():
if k in exclude_keys:
continue
client = k
count = v["count"]
last_visit = v["last_visit"]
row = [
format_date_for_activity_reports(last_visit),
client,
name,
email_link,
count,
]
rows.append(row)
rows = sorted(rows, key=lambda r: r[0], reverse=True)
cols = [
"Last visit",
"Client",
"Name",
"Email",
"Count",
]
title = "Clients"
return make_table(title, cols, rows)
def sent_messages_report(realm: str) -> str:
title = "Recently sent messages for " + realm
cols = [
"Date",
"Humans",
"Bots",
]
query = SQL(
"""
select
series.day::date,
humans.cnt,
bots.cnt
from (
select generate_series(
(now()::date - interval '2 week'),
now()::date,
interval '1 day'
) as day
) as series
left join (
select
date_sent::date date_sent,
count(*) cnt
from zerver_message m
join zerver_userprofile up on up.id = m.sender_id
join zerver_realm r on r.id = up.realm_id
where
r.string_id = %s
and
(not up.is_bot)
and
date_sent > now() - interval '2 week'
group by
date_sent::date
order by
date_sent::date
) humans on
series.day = humans.date_sent
left join (
select
date_sent::date date_sent,
count(*) cnt
from zerver_message m
join zerver_userprofile up on up.id = m.sender_id
join zerver_realm r on r.id = up.realm_id
where
r.string_id = %s
and
up.is_bot
and
date_sent > now() - interval '2 week'
group by
date_sent::date
order by
date_sent::date
) bots on
series.day = bots.date_sent
"""
)
cursor = connection.cursor()
cursor.execute(query, [realm, realm])
rows = cursor.fetchall()
cursor.close()
return make_table(title, cols, rows)
@require_server_admin
def get_realm_activity(request: HttpRequest, realm_str: str) -> HttpResponse:
data: List[Tuple[str, str]] = []
all_user_records: Dict[str, Any] = {}
try:
admins = Realm.objects.get(string_id=realm_str).get_human_admin_users()
except Realm.DoesNotExist:
return HttpResponseNotFound()
admin_emails = {admin.delivery_email for admin in admins}
for is_bot, page_title in [(False, "Humans"), (True, "Bots")]:
all_records = list(get_user_activity_records_for_realm(realm_str, is_bot))
user_records, content = realm_user_summary_table(all_records, admin_emails)
all_user_records.update(user_records)
data += [(page_title, content)]
page_title = "Clients"
content = realm_client_table(all_user_records)
data += [(page_title, content)]
page_title = "History"
content = sent_messages_report(realm_str)
data += [(page_title, content)]
title = realm_str
return render(
request,
"analytics/activity.html",
context=dict(data=data, realm_link=None, title=title),
)


@@ -1,16 +1,15 @@
import logging
from collections import defaultdict
from datetime import datetime, timedelta, timezone
from typing import Annotated, Any, Optional, TypeAlias, TypeVar, cast
from typing import Any, Dict, List, Optional, Tuple, Type, Union, cast
from django.conf import settings
from django.db.models import QuerySet
from django.db.models.query import QuerySet
from django.http import HttpRequest, HttpResponse, HttpResponseNotFound
from django.shortcuts import render
from django.utils import translation
from django.utils.timezone import now as timezone_now
from django.utils.translation import gettext as _
from pydantic import BeforeValidator, Json, NonNegativeInt
from analytics.lib.counts import COUNT_STATS, CountStat
from analytics.lib.time_utils import time_range
@@ -31,12 +30,11 @@ from zerver.decorator import (
)
from zerver.lib.exceptions import JsonableError
from zerver.lib.i18n import get_and_set_request_language, get_language_translation_data
from zerver.lib.request import REQ, has_request_variables
from zerver.lib.response import json_success
from zerver.lib.streams import access_stream_by_id
from zerver.lib.timestamp import convert_to_UTC
from zerver.lib.typed_endpoint import PathOnly, typed_endpoint
from zerver.models import Client, Realm, Stream, UserProfile
from zerver.models.realms import get_realm
from zerver.lib.validator import to_non_negative_int
from zerver.models import Client, Realm, UserProfile, get_realm
if settings.ZILENCER_ENABLED:
from zilencer.models import RemoteInstallationCount, RemoteRealmCount, RemoteZulipServer
@@ -51,27 +49,17 @@ def is_analytics_ready(realm: Realm) -> bool:
def render_stats(
request: HttpRequest,
data_url_suffix: str,
realm: Realm | None,
*,
title: str | None = None,
target_name: str,
for_installation: bool = False,
remote: bool = False,
analytics_ready: bool = True,
) -> HttpResponse:
assert request.user.is_authenticated
if realm is not None:
# Same query to get guest user count as in get_seat_count in corporate/lib/stripe.py.
guest_users = UserProfile.objects.filter(
realm=realm, is_active=True, is_bot=False, role=UserProfile.ROLE_GUEST
).count()
space_used = realm.currently_used_upload_space_bytes()
if title:
pass
else:
title = realm.name or realm.string_id
else:
assert title
guest_users = None
space_used = None
page_params = dict(
data_url_suffix=data_url_suffix,
for_installation=for_installation,
remote=remote,
)
request_language = get_and_set_request_language(
request,
@@ -79,22 +67,13 @@ def render_stats(
translation.get_language_from_path(request.path_info),
)
# Sync this with stats_params_schema in base_page_params.ts.
page_params = dict(
page_type="stats",
data_url_suffix=data_url_suffix,
upload_space_used=space_used,
guest_users=guest_users,
translation_data=get_language_translation_data(request_language),
)
page_params["translation_data"] = get_language_translation_data(request_language)
return render(
request,
"analytics/stats.html",
context=dict(
target_name=title,
page_params=page_params,
analytics_ready=analytics_ready,
target_name=target_name, page_params=page_params, analytics_ready=analytics_ready
),
)
@@ -107,12 +86,14 @@ def stats(request: HttpRequest) -> HttpResponse:
# TODO: Make @zulip_login_required pass the UserProfile so we
# can use @require_member_or_admin
raise JsonableError(_("Not allowed for guest users"))
return render_stats(request, "", realm, analytics_ready=is_analytics_ready(realm))
return render_stats(
request, "", realm.name or realm.string_id, analytics_ready=is_analytics_ready(realm)
)
@require_server_admin
@typed_endpoint
def stats_for_realm(request: HttpRequest, *, realm_str: PathOnly[str]) -> HttpResponse:
@has_request_variables
def stats_for_realm(request: HttpRequest, realm_str: str) -> HttpResponse:
try:
realm = get_realm(realm_str)
except Realm.DoesNotExist:
@@ -121,117 +102,62 @@ def stats_for_realm(request: HttpRequest, *, realm_str: PathOnly[str]) -> HttpRe
return render_stats(
request,
f"/realm/{realm_str}",
realm,
realm.name or realm.string_id,
analytics_ready=is_analytics_ready(realm),
)
@require_server_admin
@typed_endpoint
@has_request_variables
def stats_for_remote_realm(
request: HttpRequest, *, remote_server_id: PathOnly[int], remote_realm_id: PathOnly[int]
request: HttpRequest, remote_server_id: int, remote_realm_id: int
) -> HttpResponse:
assert settings.ZILENCER_ENABLED
server = RemoteZulipServer.objects.get(id=remote_server_id)
return render_stats(
request,
f"/remote/{server.id}/realm/{remote_realm_id}",
None,
title=f"Realm {remote_realm_id} on server {server.hostname}",
f"Realm {remote_realm_id} on server {server.hostname}",
)
@require_server_admin_api
@typed_endpoint
@has_request_variables
def get_chart_data_for_realm(
request: HttpRequest,
user_profile: UserProfile,
/,
*,
realm_str: PathOnly[str],
chart_name: str,
min_length: Json[NonNegativeInt] | None = None,
start: Annotated[datetime | None, BeforeValidator(to_utc_datetime)] = None,
end: Annotated[datetime | None, BeforeValidator(to_utc_datetime)] = None,
request: HttpRequest, user_profile: UserProfile, realm_str: str, **kwargs: Any
) -> HttpResponse:
try:
realm = get_realm(realm_str)
except Realm.DoesNotExist:
raise JsonableError(_("Invalid organization"))
return do_get_chart_data(
request,
user_profile,
realm=realm,
chart_name=chart_name,
min_length=min_length,
start=start,
end=end,
)
@require_non_guest_user
@typed_endpoint
def get_chart_data_for_stream(
request: HttpRequest,
user_profile: UserProfile,
*,
stream_id: PathOnly[int],
chart_name: str,
min_length: Json[NonNegativeInt] | None = None,
start: Annotated[datetime | None, BeforeValidator(to_utc_datetime)] = None,
end: Annotated[datetime | None, BeforeValidator(to_utc_datetime)] = None,
) -> HttpResponse:
stream, ignored_sub = access_stream_by_id(
user_profile,
stream_id,
require_content_access=False,
)
return do_get_chart_data(
request,
user_profile,
stream=stream,
chart_name=chart_name,
min_length=min_length,
start=start,
end=end,
)
return get_chart_data(request=request, user_profile=user_profile, realm=realm, **kwargs)
@require_server_admin_api
@typed_endpoint
@has_request_variables
def get_chart_data_for_remote_realm(
request: HttpRequest,
user_profile: UserProfile,
/,
*,
remote_server_id: PathOnly[int],
remote_realm_id: PathOnly[int],
chart_name: str,
min_length: Json[NonNegativeInt] | None = None,
start: Annotated[datetime | None, BeforeValidator(to_utc_datetime)] = None,
end: Annotated[datetime | None, BeforeValidator(to_utc_datetime)] = None,
remote_server_id: int,
remote_realm_id: int,
**kwargs: Any,
) -> HttpResponse:
assert settings.ZILENCER_ENABLED
server = RemoteZulipServer.objects.get(id=remote_server_id)
return do_get_chart_data(
request,
user_profile,
return get_chart_data(
request=request,
user_profile=user_profile,
server=server,
remote=True,
remote_realm_id=remote_realm_id,
chart_name=chart_name,
min_length=min_length,
start=start,
end=end,
remote_realm_id=int(remote_realm_id),
**kwargs,
)
@require_server_admin
def stats_for_installation(request: HttpRequest) -> HttpResponse:
assert request.user.is_authenticated
return render_stats(request, "/installation", None, title="installation")
return render_stats(request, "/installation", "installation", True)
@require_server_admin
@@ -241,108 +167,64 @@ def stats_for_remote_installation(request: HttpRequest, remote_server_id: int) -
return render_stats(
request,
f"/remote/{server.id}/installation",
None,
title=f"remote installation {server.hostname}",
f"remote installation {server.hostname}",
True,
True,
)
@require_server_admin_api
@typed_endpoint
@has_request_variables
def get_chart_data_for_installation(
request: HttpRequest,
user_profile: UserProfile,
/,
*,
chart_name: str,
min_length: Json[NonNegativeInt] | None = None,
start: Annotated[datetime | None, BeforeValidator(to_utc_datetime)] = None,
end: Annotated[datetime | None, BeforeValidator(to_utc_datetime)] = None,
request: HttpRequest, user_profile: UserProfile, chart_name: str = REQ(), **kwargs: Any
) -> HttpResponse:
return do_get_chart_data(
request,
user_profile,
for_installation=True,
chart_name=chart_name,
min_length=min_length,
start=start,
end=end,
return get_chart_data(
request=request, user_profile=user_profile, for_installation=True, **kwargs
)
@require_server_admin_api
@typed_endpoint
@has_request_variables
def get_chart_data_for_remote_installation(
request: HttpRequest,
user_profile: UserProfile,
/,
*,
remote_server_id: PathOnly[int],
chart_name: str,
min_length: Json[NonNegativeInt] | None = None,
start: Annotated[datetime | None, BeforeValidator(to_utc_datetime)] = None,
end: Annotated[datetime | None, BeforeValidator(to_utc_datetime)] = None,
remote_server_id: int,
chart_name: str = REQ(),
**kwargs: Any,
) -> HttpResponse:
assert settings.ZILENCER_ENABLED
server = RemoteZulipServer.objects.get(id=remote_server_id)
return do_get_chart_data(
request,
user_profile,
return get_chart_data(
request=request,
user_profile=user_profile,
for_installation=True,
remote=True,
server=server,
chart_name=chart_name,
min_length=min_length,
start=start,
end=end,
**kwargs,
)
@require_non_guest_user
@typed_endpoint
@has_request_variables
def get_chart_data(
request: HttpRequest,
user_profile: UserProfile,
*,
chart_name: str,
min_length: Json[NonNegativeInt] | None = None,
start: Annotated[datetime | None, BeforeValidator(to_utc_datetime)] = None,
end: Annotated[datetime | None, BeforeValidator(to_utc_datetime)] = None,
) -> HttpResponse:
return do_get_chart_data(
request,
user_profile,
chart_name=chart_name,
min_length=min_length,
start=start,
end=end,
)
@require_non_guest_user
def do_get_chart_data(
request: HttpRequest,
user_profile: UserProfile,
*,
# Common parameters supported by all stats endpoints.
chart_name: str,
min_length: NonNegativeInt | None = None,
start: datetime | None = None,
end: datetime | None = None,
# The following parameters are only used by wrapping functions for
# various contexts; the callers are responsible for validating them.
realm: Realm | None = None,
chart_name: str = REQ(),
min_length: Optional[int] = REQ(converter=to_non_negative_int, default=None),
start: Optional[datetime] = REQ(converter=to_utc_datetime, default=None),
end: Optional[datetime] = REQ(converter=to_utc_datetime, default=None),
realm: Optional[Realm] = None,
for_installation: bool = False,
remote: bool = False,
remote_realm_id: int | None = None,
remote_realm_id: Optional[int] = None,
server: Optional["RemoteZulipServer"] = None,
stream: Stream | None = None,
) -> HttpResponse:
TableType: TypeAlias = (
type["RemoteInstallationCount"]
| type[InstallationCount]
| type["RemoteRealmCount"]
| type[RealmCount]
)
TableType = Union[
Type["RemoteInstallationCount"],
Type[InstallationCount],
Type["RemoteRealmCount"],
Type[RealmCount],
]
if for_installation:
if remote:
assert settings.ZILENCER_ENABLED
@@ -359,9 +241,7 @@ def do_get_chart_data(
else:
aggregate_table = RealmCount
tables: (
tuple[TableType] | tuple[TableType, type[UserCount]] | tuple[TableType, type[StreamCount]]
)
tables: Union[Tuple[TableType], Tuple[TableType, Type[UserCount]]]
if chart_name == "number_of_humans":
stats = [
@@ -370,7 +250,7 @@ def do_get_chart_data(
COUNT_STATS["active_users_audit:is_bot:day"],
]
tables = (aggregate_table,)
subgroup_to_label: dict[CountStat, dict[str | None, str]] = {
subgroup_to_label: Dict[CountStat, Dict[Optional[str], str]] = {
stats[0]: {None: "_1day"},
stats[1]: {None: "_15day"},
stats[2]: {"false": "all_time"},
@@ -388,10 +268,10 @@ def do_get_chart_data(
tables = (aggregate_table, UserCount)
subgroup_to_label = {
stats[0]: {
"public_stream": _("Public channels"),
"private_stream": _("Private channels"),
"private_message": _("Direct messages"),
"huddle_message": _("Group direct messages"),
"public_stream": _("Public streams"),
"private_stream": _("Private streams"),
"private_message": _("Private messages"),
"huddle_message": _("Group private messages"),
}
}
labels_sort_function = lambda data: sort_by_totals(data["everyone"])
@@ -411,18 +291,8 @@ def do_get_chart_data(
subgroup_to_label = {stats[0]: {None: "read"}}
labels_sort_function = None
include_empty_subgroups = True
elif chart_name == "messages_sent_by_stream":
if stream is None:
raise JsonableError(
_("Missing channel for chart: {chart_name}").format(chart_name=chart_name)
)
stats = [COUNT_STATS["messages_in_stream:is_bot:day"]]
tables = (aggregate_table, StreamCount)
subgroup_to_label = {stats[0]: {"false": "human", "true": "bot"}}
labels_sort_function = None
include_empty_subgroups = True
else:
raise JsonableError(_("Unknown chart name: {chart_name}").format(chart_name=chart_name))
raise JsonableError(_("Unknown chart name: {}").format(chart_name))
# Most likely someone using our API endpoint. The /stats page does not
# pass a start or end in its requests.
@@ -450,20 +320,18 @@ def do_get_chart_data(
assert server is not None
assert aggregate_table is RemoteInstallationCount or aggregate_table is RemoteRealmCount
aggregate_table_remote = cast(
type[RemoteInstallationCount] | type[RemoteRealmCount], aggregate_table
Union[Type[RemoteInstallationCount], Type[RemoteRealmCount]], aggregate_table
) # https://stackoverflow.com/questions/68540528/mypy-assertions-on-the-types-of-types
if not aggregate_table_remote.objects.filter(server=server).exists():
raise JsonableError(
_("No analytics data available. Please contact your server administrator.")
)
if start is None:
first = (
aggregate_table_remote.objects.filter(server=server).order_by("remote_id").first()
)
first = aggregate_table_remote.objects.filter(server=server).first()
assert first is not None
start = first.end_time
if end is None:
last = aggregate_table_remote.objects.filter(server=server).order_by("remote_id").last()
last = aggregate_table_remote.objects.filter(server=server).last()
assert last is not None
end = last.end_time
else:
@@ -496,7 +364,7 @@ def do_get_chart_data(
assert len({stat.frequency for stat in stats}) == 1
end_times = time_range(start, end, stats[0].frequency, min_length)
data: dict[str, Any] = {
data: Dict[str, Any] = {
"end_times": [int(end_time.timestamp()) for end_time in end_times],
"frequency": stats[0].frequency,
}
@@ -505,7 +373,6 @@ def do_get_chart_data(
InstallationCount: "everyone",
RealmCount: "everyone",
UserCount: "user",
StreamCount: "everyone",
}
if settings.ZILENCER_ENABLED:
aggregation_level[RemoteInstallationCount] = "everyone"
@@ -517,9 +384,6 @@ def do_get_chart_data(
RealmCount: realm.id,
UserCount: user_profile.id,
}
if stream is not None:
id_value[StreamCount] = stream.id
if settings.ZILENCER_ENABLED:
if server is not None:
id_value[RemoteInstallationCount] = server.id
@@ -549,8 +413,9 @@ def do_get_chart_data(
return json_success(request, data=data)
def sort_by_totals(value_arrays: dict[str, list[int]]) -> list[str]:
totals = sorted(((sum(values), label) for label, values in value_arrays.items()), reverse=True)
def sort_by_totals(value_arrays: Dict[str, List[int]]) -> List[str]:
totals = [(sum(values), label) for label, values in value_arrays.items()]
totals.sort(reverse=True)
return [label for total, label in totals]
@@ -560,85 +425,80 @@ def sort_by_totals(value_arrays: dict[str, list[int]]) -> list[str]:
# understanding the realm's traffic and the user's traffic. This function
# tries to rank the clients so that taking the first N elements of the
# sorted list has a reasonable chance of doing so.
def sort_client_labels(data: dict[str, dict[str, list[int]]]) -> list[str]:
def sort_client_labels(data: Dict[str, Dict[str, List[int]]]) -> List[str]:
realm_order = sort_by_totals(data["everyone"])
user_order = sort_by_totals(data["user"])
label_sort_values: dict[str, float] = {label: i for i, label in enumerate(realm_order)}
label_sort_values: Dict[str, float] = {}
for i, label in enumerate(realm_order):
label_sort_values[label] = i
for i, label in enumerate(user_order):
label_sort_values[label] = min(i - 0.1, label_sort_values.get(label, i))
return [label for label, sort_value in sorted(label_sort_values.items(), key=lambda x: x[1])]
CountT = TypeVar("CountT", bound=BaseCount)
def table_filtered_to_id(table: type[CountT], key_id: int) -> QuerySet[CountT]:
def table_filtered_to_id(table: Type[BaseCount], key_id: int) -> QuerySet:
if table == RealmCount:
return table._default_manager.filter(realm_id=key_id)
return RealmCount.objects.filter(realm_id=key_id)
elif table == UserCount:
return table._default_manager.filter(user_id=key_id)
return UserCount.objects.filter(user_id=key_id)
elif table == StreamCount:
return table._default_manager.filter(stream_id=key_id)
return StreamCount.objects.filter(stream_id=key_id)
elif table == InstallationCount:
return table._default_manager.all()
return InstallationCount.objects.all()
elif settings.ZILENCER_ENABLED and table == RemoteInstallationCount:
return table._default_manager.filter(server_id=key_id)
return RemoteInstallationCount.objects.filter(server_id=key_id)
elif settings.ZILENCER_ENABLED and table == RemoteRealmCount:
return table._default_manager.filter(realm_id=key_id)
return RemoteRealmCount.objects.filter(realm_id=key_id)
else:
raise AssertionError(f"Unknown table: {table}")
def client_label_map(name: str) -> str:
if name == "website":
return "Web app"
return "Website"
if name.startswith("desktop app"):
return "Old desktop app"
if name == "ZulipElectron":
return "Desktop app"
if name == "ZulipTerminal":
return "Terminal app"
if name == "ZulipAndroid":
return "Ancient Android app"
return "Old Android app"
if name == "ZulipiOS":
return "Ancient iOS app"
return "Old iOS app"
if name == "ZulipMobile":
return "Old mobile app (React Native)"
if name in ["ZulipFlutter", "ZulipMobile/flutter"]:
return "Mobile app (Flutter)"
return "Mobile app"
if name in ["ZulipPython", "API: Python"]:
return "Python API"
if name.startswith("Zulip") and name.endswith("Webhook"):
return name.removeprefix("Zulip").removesuffix("Webhook") + " webhook"
return name[len("Zulip") : -len("Webhook")] + " webhook"
return name
def rewrite_client_arrays(value_arrays: dict[str, list[int]]) -> dict[str, list[int]]:
mapped_arrays: dict[str, list[int]] = {}
def rewrite_client_arrays(value_arrays: Dict[str, List[int]]) -> Dict[str, List[int]]:
mapped_arrays: Dict[str, List[int]] = {}
for label, array in value_arrays.items():
mapped_label = client_label_map(label)
if mapped_label in mapped_arrays:
for i in range(len(array)):
mapped_arrays[mapped_label][i] += array[i]
for i in range(0, len(array)):
mapped_arrays[mapped_label][i] += value_arrays[label][i]
else:
mapped_arrays[mapped_label] = array.copy()
mapped_arrays[mapped_label] = [value_arrays[label][i] for i in range(0, len(array))]
return mapped_arrays
def get_time_series_by_subgroup(
stat: CountStat,
table: type[BaseCount],
table: Type[BaseCount],
key_id: int,
end_times: list[datetime],
subgroup_to_label: dict[str | None, str],
end_times: List[datetime],
subgroup_to_label: Dict[Optional[str], str],
include_empty_subgroups: bool,
) -> dict[str, list[int]]:
) -> Dict[str, List[int]]:
queryset = (
table_filtered_to_id(table, key_id)
.filter(property=stat.property)
.values_list("subgroup", "end_time", "value")
)
value_dicts: dict[str | None, dict[datetime, int]] = defaultdict(lambda: defaultdict(int))
value_dicts: Dict[Optional[str], Dict[datetime, int]] = defaultdict(lambda: defaultdict(int))
for subgroup, end_time, value in queryset:
value_dicts[subgroup][end_time] = value
value_arrays = {}

analytics/views/support.py

@@ -0,0 +1,343 @@
import urllib
from datetime import timedelta
from decimal import Decimal
from typing import Any, Dict, List, Optional
from urllib.parse import urlencode
from django.conf import settings
from django.core.exceptions import ValidationError
from django.core.validators import URLValidator
from django.http import HttpRequest, HttpResponse, HttpResponseRedirect
from django.shortcuts import render
from django.urls import reverse
from django.utils.timesince import timesince
from django.utils.timezone import now as timezone_now
from django.utils.translation import gettext as _
from confirmation.models import Confirmation, confirmation_url
from confirmation.settings import STATUS_ACTIVE
from zerver.actions.create_realm import do_change_realm_subdomain
from zerver.actions.realm_settings import (
do_change_realm_org_type,
do_change_realm_plan_type,
do_deactivate_realm,
do_scrub_realm,
do_send_realm_reactivation_email,
)
from zerver.decorator import require_server_admin
from zerver.forms import check_subdomain_available
from zerver.lib.exceptions import JsonableError
from zerver.lib.realm_icon import realm_icon_url
from zerver.lib.request import REQ, has_request_variables
from zerver.lib.subdomains import get_subdomain_from_hostname
from zerver.lib.validator import check_bool, check_string_in, to_decimal, to_non_negative_int
from zerver.models import (
MultiuseInvite,
PreregistrationUser,
Realm,
UserProfile,
get_org_type_display_name,
get_realm,
)
from zerver.views.invite import get_invitee_emails_set
if settings.BILLING_ENABLED:
from corporate.lib.stripe import approve_sponsorship as do_approve_sponsorship
from corporate.lib.stripe import (
attach_discount_to_realm,
downgrade_at_the_end_of_billing_cycle,
downgrade_now_without_creating_additional_invoices,
get_discount_for_realm,
get_latest_seat_count,
make_end_of_cycle_updates_if_needed,
update_billing_method_of_current_plan,
update_sponsorship_status,
void_all_open_invoices,
)
from corporate.models import get_current_plan_by_realm, get_customer_by_realm
def get_plan_name(plan_type: int) -> str:
return {
Realm.PLAN_TYPE_SELF_HOSTED: "self-hosted",
Realm.PLAN_TYPE_LIMITED: "limited",
Realm.PLAN_TYPE_STANDARD: "standard",
Realm.PLAN_TYPE_STANDARD_FREE: "open source",
Realm.PLAN_TYPE_PLUS: "plus",
}[plan_type]
def get_confirmations(
types: List[int], object_ids: List[int], hostname: Optional[str] = None
) -> List[Dict[str, Any]]:
lowest_datetime = timezone_now() - timedelta(days=30)
confirmations = Confirmation.objects.filter(
type__in=types, object_id__in=object_ids, date_sent__gte=lowest_datetime
)
confirmation_dicts = []
for confirmation in confirmations:
realm = confirmation.realm
content_object = confirmation.content_object
type = confirmation.type
expiry_date = confirmation.expiry_date
assert content_object is not None
if hasattr(content_object, "status"):
if content_object.status == STATUS_ACTIVE:
link_status = "Link has been clicked"
else:
link_status = "Link has never been clicked"
else:
link_status = ""
now = timezone_now()
if expiry_date is None:
expires_in = "Never"
elif now < expiry_date:
expires_in = timesince(now, expiry_date)
else:
expires_in = "Expired"
url = confirmation_url(confirmation.confirmation_key, realm, type)
confirmation_dicts.append(
{
"object": confirmation.content_object,
"url": url,
"type": type,
"link_status": link_status,
"expires_in": expires_in,
}
)
return confirmation_dicts
VALID_DOWNGRADE_METHODS = [
"downgrade_at_billing_cycle_end",
"downgrade_now_without_additional_licenses",
"downgrade_now_void_open_invoices",
]
VALID_STATUS_VALUES = [
"active",
"deactivated",
]
VALID_BILLING_METHODS = [
"send_invoice",
"charge_automatically",
]
@require_server_admin
@has_request_variables
def support(
request: HttpRequest,
realm_id: Optional[int] = REQ(default=None, converter=to_non_negative_int),
plan_type: Optional[int] = REQ(default=None, converter=to_non_negative_int),
discount: Optional[Decimal] = REQ(default=None, converter=to_decimal),
new_subdomain: Optional[str] = REQ(default=None),
status: Optional[str] = REQ(default=None, str_validator=check_string_in(VALID_STATUS_VALUES)),
billing_method: Optional[str] = REQ(
default=None, str_validator=check_string_in(VALID_BILLING_METHODS)
),
sponsorship_pending: Optional[bool] = REQ(default=None, json_validator=check_bool),
approve_sponsorship: Optional[bool] = REQ(default=None, json_validator=check_bool),
downgrade_method: Optional[str] = REQ(
default=None, str_validator=check_string_in(VALID_DOWNGRADE_METHODS)
),
scrub_realm: Optional[bool] = REQ(default=None, json_validator=check_bool),
query: Optional[str] = REQ("q", default=None),
org_type: Optional[int] = REQ(default=None, converter=to_non_negative_int),
) -> HttpResponse:
context: Dict[str, Any] = {}
if "success_message" in request.session:
context["success_message"] = request.session["success_message"]
del request.session["success_message"]
if settings.BILLING_ENABLED and request.method == "POST":
# We check that request.POST only has two keys in it: The
# realm_id and a field to change.
keys = set(request.POST.keys())
if "csrfmiddlewaretoken" in keys:
keys.remove("csrfmiddlewaretoken")
if len(keys) != 2:
raise JsonableError(_("Invalid parameters"))
realm = Realm.objects.get(id=realm_id)
acting_user = request.user
assert isinstance(acting_user, UserProfile)
if plan_type is not None:
current_plan_type = realm.plan_type
do_change_realm_plan_type(realm, plan_type, acting_user=acting_user)
msg = f"Plan type of {realm.string_id} changed from {get_plan_name(current_plan_type)} to {get_plan_name(plan_type)} "
context["success_message"] = msg
elif org_type is not None:
current_realm_type = realm.org_type
do_change_realm_org_type(realm, org_type, acting_user=acting_user)
msg = f"Org type of {realm.string_id} changed from {get_org_type_display_name(current_realm_type)} to {get_org_type_display_name(org_type)} "
context["success_message"] = msg
elif discount is not None:
current_discount = get_discount_for_realm(realm) or 0
attach_discount_to_realm(realm, discount, acting_user=acting_user)
context[
"success_message"
] = f"Discount of {realm.string_id} changed to {discount}% from {current_discount}%."
elif new_subdomain is not None:
old_subdomain = realm.string_id
try:
check_subdomain_available(new_subdomain)
except ValidationError as error:
context["error_message"] = error.message
else:
do_change_realm_subdomain(realm, new_subdomain, acting_user=acting_user)
request.session[
"success_message"
] = f"Subdomain changed from {old_subdomain} to {new_subdomain}"
return HttpResponseRedirect(
reverse("support") + "?" + urlencode({"q": new_subdomain})
)
elif status is not None:
if status == "active":
do_send_realm_reactivation_email(realm, acting_user=acting_user)
context[
"success_message"
] = f"Realm reactivation email sent to admins of {realm.string_id}."
elif status == "deactivated":
do_deactivate_realm(realm, acting_user=acting_user)
context["success_message"] = f"{realm.string_id} deactivated."
elif billing_method is not None:
if billing_method == "send_invoice":
update_billing_method_of_current_plan(
realm, charge_automatically=False, acting_user=acting_user
)
context[
"success_message"
] = f"Billing method of {realm.string_id} updated to pay by invoice."
elif billing_method == "charge_automatically":
update_billing_method_of_current_plan(
realm, charge_automatically=True, acting_user=acting_user
)
context[
"success_message"
] = f"Billing method of {realm.string_id} updated to charge automatically."
elif sponsorship_pending is not None:
if sponsorship_pending:
update_sponsorship_status(realm, True, acting_user=acting_user)
context["success_message"] = f"{realm.string_id} marked as pending sponsorship."
else:
update_sponsorship_status(realm, False, acting_user=acting_user)
context["success_message"] = f"{realm.string_id} is no longer pending sponsorship."
elif approve_sponsorship:
do_approve_sponsorship(realm, acting_user=acting_user)
context["success_message"] = f"Sponsorship approved for {realm.string_id}"
elif downgrade_method is not None:
if downgrade_method == "downgrade_at_billing_cycle_end":
downgrade_at_the_end_of_billing_cycle(realm)
context[
"success_message"
] = f"{realm.string_id} marked for downgrade at the end of billing cycle"
elif downgrade_method == "downgrade_now_without_additional_licenses":
downgrade_now_without_creating_additional_invoices(realm)
context[
"success_message"
] = f"{realm.string_id} downgraded without creating additional invoices"
elif downgrade_method == "downgrade_now_void_open_invoices":
downgrade_now_without_creating_additional_invoices(realm)
voided_invoices_count = void_all_open_invoices(realm)
context[
"success_message"
] = f"{realm.string_id} downgraded and voided {voided_invoices_count} open invoices"
elif scrub_realm:
do_scrub_realm(realm, acting_user=acting_user)
context["success_message"] = f"{realm.string_id} scrubbed."
if query:
key_words = get_invitee_emails_set(query)
users = set(UserProfile.objects.filter(delivery_email__in=key_words))
realms = set(Realm.objects.filter(string_id__in=key_words))
for key_word in key_words:
try:
URLValidator()(key_word)
parse_result = urllib.parse.urlparse(key_word)
hostname = parse_result.hostname
assert hostname is not None
if parse_result.port:
hostname = f"{hostname}:{parse_result.port}"
subdomain = get_subdomain_from_hostname(hostname)
try:
realms.add(get_realm(subdomain))
except Realm.DoesNotExist:
pass
except ValidationError:
users.update(UserProfile.objects.filter(full_name__iexact=key_word))
for realm in realms:
realm.customer = get_customer_by_realm(realm)
current_plan = get_current_plan_by_realm(realm)
if current_plan is not None:
new_plan, last_ledger_entry = make_end_of_cycle_updates_if_needed(
current_plan, timezone_now()
)
if last_ledger_entry is not None:
if new_plan is not None:
realm.current_plan = new_plan
else:
realm.current_plan = current_plan
realm.current_plan.licenses = last_ledger_entry.licenses
realm.current_plan.licenses_used = get_latest_seat_count(realm)
# full_names can have , in them
users.update(UserProfile.objects.filter(full_name__iexact=query))
context["users"] = users
context["realms"] = realms
confirmations: List[Dict[str, Any]] = []
preregistration_users = PreregistrationUser.objects.filter(email__in=key_words)
confirmations += get_confirmations(
[Confirmation.USER_REGISTRATION, Confirmation.INVITATION, Confirmation.REALM_CREATION],
preregistration_users,
hostname=request.get_host(),
)
multiuse_invites = MultiuseInvite.objects.filter(realm__in=realms)
confirmations += get_confirmations([Confirmation.MULTIUSE_INVITE], multiuse_invites)
confirmations += get_confirmations(
[Confirmation.REALM_REACTIVATION], [realm.id for realm in realms]
)
context["confirmations"] = confirmations
def get_realm_owner_emails_as_string(realm: Realm) -> str:
return ", ".join(
realm.get_human_owner_users()
.order_by("delivery_email")
.values_list("delivery_email", flat=True)
)
def get_realm_admin_emails_as_string(realm: Realm) -> str:
return ", ".join(
realm.get_human_admin_users(include_realm_owners=False)
.order_by("delivery_email")
.values_list("delivery_email", flat=True)
)
context["get_realm_owner_emails_as_string"] = get_realm_owner_emails_as_string
context["get_realm_admin_emails_as_string"] = get_realm_admin_emails_as_string
context["get_discount_for_realm"] = get_discount_for_realm
context["get_org_type_display_name"] = get_org_type_display_name
context["realm_icon_url"] = realm_icon_url
context["Confirmation"] = Confirmation
context["sorted_realm_types"] = sorted(
Realm.ORG_TYPES.values(), key=lambda d: d["display_order"]
)
return render(request, "analytics/support.html", context=context)


@@ -0,0 +1,104 @@
from typing import Any, Dict, List, Tuple
from django.conf import settings
from django.db.models.query import QuerySet
from django.http import HttpRequest, HttpResponse
from django.shortcuts import render
from analytics.views.activity_common import (
format_date_for_activity_reports,
get_user_activity_summary,
make_table,
)
from zerver.decorator import require_server_admin
from zerver.models import UserActivity, UserProfile, get_user_profile_by_id
if settings.BILLING_ENABLED:
pass
def get_user_activity_records(user_profile: UserProfile) -> QuerySet:
fields = [
"user_profile__full_name",
"query",
"client__name",
"count",
"last_visit",
]
records = UserActivity.objects.filter(
user_profile=user_profile,
)
records = records.order_by("-last_visit")
records = records.select_related("user_profile", "client").only(*fields)
return records
def raw_user_activity_table(records: QuerySet) -> str:
cols = [
"query",
"client",
"count",
"last_visit",
]
def row(record: UserActivity) -> List[Any]:
return [
record.query,
record.client.name,
record.count,
format_date_for_activity_reports(record.last_visit),
]
rows = list(map(row, records))
title = "Raw data"
return make_table(title, cols, rows)
def user_activity_summary_table(user_summary: Dict[str, Dict[str, Any]]) -> str:
rows = []
for k, v in user_summary.items():
if k == "name" or k == "user_profile_id":
continue
client = k
count = v["count"]
last_visit = v["last_visit"]
row = [
format_date_for_activity_reports(last_visit),
client,
count,
]
rows.append(row)
rows = sorted(rows, key=lambda r: r[0], reverse=True)
cols = [
"last_visit",
"client",
"count",
]
title = "User activity"
return make_table(title, cols, rows)
@require_server_admin
def get_user_activity(request: HttpRequest, user_profile_id: int) -> HttpResponse:
user_profile = get_user_profile_by_id(user_profile_id)
records = get_user_activity_records(user_profile)
data: List[Tuple[str, str]] = []
user_summary = get_user_activity_summary(records)
content = user_activity_summary_table(user_summary)
data += [("Summary", content)]
content = raw_user_activity_table(records)
data += [("Info", content)]
title = user_profile.delivery_email
return render(
request,
"analytics/activity.html",
context=dict(data=data, title=title),
)


@@ -1,89 +0,0 @@
# API keys
An **API key** is how a bot identifies itself to Zulip. For the official
clients, such as the Python bindings, we recommend [downloading a `zuliprc`
file](/api/configuring-python-bindings#download-a-zuliprc-file). This file
contains an API key and other necessary configuration values for using the
Zulip API with a specific account on a Zulip server.
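For example, here is a minimal sketch (with placeholder credentials and an illustrative server URL) of passing an API key directly to the Python bindings:

```python
import zulip

# Placeholder values; substitute the API key, email, and server URL
# for the account you want to use.
client = zulip.Client(
    email="my-bot@example.com",
    api_key="YOUR_API_KEY",
    site="https://example.zulipchat.com",
)

# Fetch the authenticated account's own profile to confirm the key works.
print(client.get_profile())
```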
## Get a bot's API key
{start_tabs}
{tab|desktop-web}
{settings_tab|your-bots}
1. Click **Active bots**.
1. Find your bot. The bot's API key is under **API KEY**.
{end_tabs}
!!! warn ""
Anyone with a bot's API key can impersonate the bot, so be careful with it!
## Get your API key
{start_tabs}
{tab|desktop-web}
{settings_tab|account-and-privacy}
1. Under **API key**, click **Manage your API key**.
1. Enter your password, and click **Get API key**. If you don't know your
password, click **reset it** and follow the instructions from there.
1. Copy your API key.
{end_tabs}
!!! warn ""
Anyone with your API key can impersonate you, so be doubly careful with it.
## Invalidate an API key
To invalidate an existing API key, you have to generate a new key.
### Invalidate a bot's API key
{start_tabs}
{tab|desktop-web}
{settings_tab|your-bots}
1. Click **Active bots**.
1. Find your bot.
1. Under **API KEY**, click the **refresh** (<i class="fa fa-refresh"></i>) icon
to the right of the bot's API key.
{end_tabs}
### Invalidate your API key
{start_tabs}
{tab|desktop-web}
{settings_tab|account-and-privacy}
1. Under **API key**, click **Manage your API key**.
1. Enter your password, and click **Get API key**. If you don't know your
password, click **reset it** and follow the instructions from there.
1. Click **Generate new API key**
{end_tabs}
## Related articles
* [Configuring the Python bindings](/api/configuring-python-bindings)

File diff suppressed because it is too large.


@@ -1,161 +0,0 @@
# Configuring the Python bindings
Zulip provides a set of tools that allows interacting with its API more
easily, called the [Python bindings](https://pypi.python.org/pypi/zulip/).
One of the most notable use cases for these bindings is bots developed
using Zulip's [bot framework](/api/writing-bots).
In order to use them, you need to configure them with your identity
(account, API key, and Zulip server URL). There are a few ways to
achieve that:
- Using a `zuliprc` file, referenced via the `--config-file` option or
the `config_file` option to the `zulip.Client` constructor
(recommended for bots).
- Using a `zuliprc` file in your home directory at `~/.zuliprc`
(recommended for your own API key).
- Using the [environment
variables](https://en.wikipedia.org/wiki/Environment_variable)
documented below.
- Using the `--api-key`, `--email`, and `--site` variables as command
line parameters.
- Using the `api_key`, `email`, and `site` parameters to the
`zulip.Client` constructor.
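As a rough sketch of the last two approaches (all values below are placeholders):

```python
import zulip

# Option 1: point the constructor at a specific zuliprc file.
client = zulip.Client(config_file="~/zuliprc-my-bot")

# Option 2: pass the identity directly as constructor parameters.
client = zulip.Client(
    email="my-bot@example.com",
    api_key="YOUR_API_KEY",
    site="https://example.zulipchat.com",
)
```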
## Download a `zuliprc` file
{start_tabs}
{tab|for-a-bot}
{settings_tab|your-bots}
1. Click the **download** (<i class="fa fa-download"></i>) icon on the profile
card of the desired bot to download the bot's `zuliprc` file.
!!! warn ""
Anyone with a bot's API key can impersonate the bot, so be careful with it!
{tab|for-yourself}
{settings_tab|account-and-privacy}
1. Under **API key**, click **Manage your API key**.
1. Enter your password, and click **Get API key**. If you don't know your
password, click **reset it** and follow the
instructions from there.
1. Click **Download zuliprc** to download your `zuliprc` file.
1. (optional) If you'd like your credentials to be used by default
when using the Zulip API on your computer, move the `zuliprc` file
to `~/.zuliprc` in your home directory.
!!! warn ""
Anyone with your API key can impersonate you, so be doubly careful with it.
{end_tabs}
## Configuration keys and environment variables
`zuliprc` is a configuration file written in the
[INI file format](https://en.wikipedia.org/wiki/INI_file),
which contains key-value pairs as shown in the following example:
```
[api]
key=<API key from the web interface>
email=<your email address>
site=<your Zulip server's URI>
...
```
The keys you can use in this file (and their equivalent environment variables)
can be found in the following table:
<table class="table">
<thead>
<tr>
<th><code>zuliprc</code> key</th>
<th>Environment variable</th>
<th>Required</th>
<th>Description</th>
</tr>
</thead>
<tr>
<td><code>key</code></td>
<td><code>ZULIP_API_KEY</code></td>
<td>Yes</td>
<td>
<a href="/api/api-keys">API key</a>, which you can get through
Zulip's web interface.
</td>
</tr>
<tr>
<td><code>email</code></td>
<td><code>ZULIP_EMAIL</code></td>
<td>Yes</td>
<td>
The email address of the user who owns the API key mentioned
above.
</td>
</tr>
<tr>
<td><code>site</code></td>
<td><code>ZULIP_SITE</code></td>
<td>No</td>
<td>
URL where your Zulip server is located.
</td>
</tr>
<tr>
<td><code>client_cert_key</code></td>
<td><code>ZULIP_CERT_KEY</code></td>
<td>No</td>
<td>
Path to the SSL/TLS private key that the binding should use to
connect to the server.
</td>
</tr>
<tr>
<td><code>client_cert</code></td>
<td><code>ZULIP_CERT</code></td>
<td>No*</td>
<td>
The public counterpart of <code>client_cert_key</code>/
<code>ZULIP_CERT_KEY</code>. <i>This setting is required if a cert
key has been set.</i>
</td>
</tr>
<tr>
<td><code>client_bundle</code></td>
<td><code>ZULIP_CERT_BUNDLE</code></td>
<td>No</td>
<td>
Path where the server's PEM-encoded certificate is located. CA
certificates are also accepted, in case those CAs have issued the
server's certificate. Defaults to the built-in CA bundle trusted
by Python.
</td>
</tr>
<tr>
<td><code>insecure</code></td>
<td><code>ZULIP_ALLOW_INSECURE</code></td>
<td>No</td>
<td>
Allows connecting to Zulip servers with an invalid SSL/TLS
certificate. Please note that enabling this will make the HTTPS
connection insecure. Defaults to <code>false</code>.
</td>
</tr>
</table>
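For instance, assuming the bindings fall back to these environment variables as described above, a configuration sketch (placeholder values) looks like:

```python
import os

import zulip

# Normally these would be exported in your shell rather than set in code.
os.environ["ZULIP_EMAIL"] = "my-bot@example.com"
os.environ["ZULIP_API_KEY"] = "YOUR_API_KEY"
os.environ["ZULIP_SITE"] = "https://example.zulipchat.com"

client = zulip.Client()
```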
## Related articles
* [Installation instructions](/api/installation-instructions)
* [API keys](/api/api-keys)
* [Running bots](/api/running-bots)
* [Deploying bots](/api/deploying-bots)


@@ -1,202 +0,0 @@
# Construct a narrow
A **narrow** is a set of filters for Zulip messages that can be based
on many different factors (like sender, channel, topic, search
keywords, etc.). Narrows are used in various places in the Zulip
API (most importantly, in the API for fetching messages).
It is simplest to explain the algorithm for encoding a search as a
narrow using a single example. Consider the following search query
(written as it would be entered in the Zulip web app's search box).
It filters for messages sent to channel `announce`, not sent by
`iago@zulip.com`, and containing the words `cool` and `sunglasses`:
```
channel:announce -sender:iago@zulip.com cool sunglasses
```
This query would be encoded for use in the Zulip API as a JSON list
of simple objects, as follows:
```json
[
{
"operator": "channel",
"operand": "announce"
},
{
"operator": "sender",
"operand": "iago@zulip.com",
"negated": true
},
{
"operator": "search",
"operand": "cool sunglasses"
}
]
```
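Using the Python bindings, the same narrow might be passed to the message-fetching endpoint roughly as follows (a sketch, assuming a configured `~/.zuliprc`):

```python
import zulip

client = zulip.Client(config_file="~/.zuliprc")

# Fetch the 100 newest messages matching the narrow shown above.
result = client.get_messages({
    "anchor": "newest",
    "num_before": 100,
    "num_after": 0,
    "narrow": [
        {"operator": "channel", "operand": "announce"},
        {"operator": "sender", "operand": "iago@zulip.com", "negated": True},
        {"operator": "search", "operand": "cool sunglasses"},
    ],
})
```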
The Zulip help center article on [searching for messages](/help/search-for-messages)
documents the majority of the search/narrow options supported by the
Zulip API.
Note that many narrows, including all that lack a `channel` or `channels`
operator, search the current user's personal message history. See
[searching shared history](/help/search-for-messages#search-shared-history)
for details.
Clients should note that the `is:unread` filter takes advantage of the
fact that there is a database index for unread messages, which can be an
important optimization when fetching messages in certain cases (e.g.,
when [adding the `read` flag to a user's personal
messages](/api/update-message-flags-for-narrow)).
Note: When the value of `realm_empty_topic_display_name` found in
the [POST /register](/api/register-queue) response is used as an operand
for the `"topic"` operator in the narrow, it is interpreted
as an empty string.
## Changes
* In Zulip 10.0 (feature level 366), support was added for a new
`is:muted` operator combination, matching messages in topics and
channels that the user has [muted](/help/mute-a-topic).
* Before Zulip 10.0 (feature level 334), empty string was not a valid
topic name for channel messages.
* In Zulip 9.0 (feature level 271), support was added for a new filter
operator, `with`, which uses a [message ID](#message-ids) for its
operand, and is designed for creating permanent links to topics.
* In Zulip 9.0 (feature level 265), support was added for a new
`is:followed` filter, matching messages in topics that the current
user is [following](/help/follow-a-topic).
* In Zulip 9.0 (feature level 250), support was added for two filters
related to stream messages: `channel` and `channels`. The `channel`
operator is an alias for the `stream` operator. The `channels`
operator is an alias for the `streams` operator. Both `channel` and
`channels` return the same exact results as `stream` and `streams`
respectively.
* In Zulip 9.0 (feature level 249), support was added for a new filter,
`has:reaction`, which returns messages that have at least one [emoji
reaction](/help/emoji-reactions).
* In Zulip 7.0 (feature level 177), support was added for three filters
related to direct messages: `is:dm`, `dm` and `dm-including`. The
`dm` operator replaced and deprecated the `pm-with` operator. The
`is:dm` filter replaced and deprecated the `is:private` filter. The
`dm-including` operator replaced and deprecated the `group-pm-with`
operator.
* The `dm-including` and `group-pm-with` operators return slightly
different results. For example, `dm-including:1234` returns all
direct messages (1-on-1 and group) that include the current user
and the user with the unique user ID of `1234`. On the other hand,
`group-pm-with:1234` returned only group direct messages that
included the current user and the user with the unique user ID of
`1234`.
* Both `dm` and `is:dm` are aliases of `pm-with` and `is:private`
respectively, and return the same exact results that the
deprecated filters did.
## Narrows that use IDs
### Message IDs
The `id` and `with` operators use message IDs for their operands. The
message ID operand for these two operators may be encoded as either a
number or a string.
* `id:12345`: Search for only the message with ID `12345`.
* `with:12345`: Search for the conversation that contains the message
with ID `12345`.
The `id` operator returns the message with the specified ID if it exists,
and if it can be accessed by the user.
The `with` operator is designed to be used for permanent links to
topics, which means they should continue to work when the topic is
[moved](/help/move-content-to-another-topic) or
[resolved](/help/resolve-a-topic). If the message with the specified
ID exists, and can be accessed by the user, then it will return
messages with the `channel`/`topic`/`dm` operators corresponding to
the current conversation containing that message, replacing any such
operators included in the original narrow query.
If no such message exists, or the message ID represents a message that
is inaccessible to the user, this operator will be ignored (rather
than throwing an error) if the remaining operators uniquely identify a
conversation (i.e., they contain `channel` and `topic` terms or `dm`
term). This behavior is intended to provide the best possible
experience for links to private channels with protected history.
The [help center](/help/search-for-messages#search-by-message-id) also
documents the `near` operator for searching for messages by ID, but
this narrow operator has no effect on filtering messages when sent to
the server. In practice, when the `near` operator is used to search for
messages, or is part of a URL fragment, the value of its operand should
instead be used for the value of the `anchor` parameter in endpoints
that also accept a `narrow` parameter; see
[GET /messages][anchor-get-messages] and
[POST /messages/flags/narrow][anchor-post-flags].
**Changes**: Prior to Zulip 8.0 (feature level 194), the message ID
operand for the `id` operator needed to be encoded as a string.
```json
[
{
"operator": "id",
"operand": 12345
}
]
```
### Channel and user IDs
There are a few additional narrow/search options (new in Zulip 2.1)
that use either channel IDs or user IDs that are not documented in the
help center because they are primarily useful to API clients:
* `channel:1234`: Search messages sent to the channel with ID `1234`.
* `sender:1234`: Search messages sent by user ID `1234`.
* `dm:1234`: Search the direct message conversation between
you and user ID `1234`.
* `dm:1234,5678`: Search the direct message conversation between
you, user ID `1234`, and user ID `5678`.
* `dm-including:1234`: Search all direct messages (1-on-1 and group)
that include you and user ID `1234`.
!!! tip ""
A user ID can be found by [viewing a user's profile][view-profile]
in the web or desktop apps. A channel ID can be found when [browsing
channels][browse-channels] in the web or desktop apps.
The operands for these search options must be encoded either as an
integer ID or a JSON list of integer IDs. For example, to query
messages sent by user 1234 to a direct message thread with yourself,
user 1234, and user 5678, the correct JSON-encoded query is:
```json
[
{
"operator": "dm",
"operand": [1234, 5678]
},
{
"operator": "sender",
"operand": 1234
}
]
```
[view-profile]: /help/view-someones-profile
[browse-channels]: /help/introduction-to-channels#browse-and-subscribe-to-channels
[anchor-get-messages]: /api/get-messages#parameter-anchor
[anchor-post-flags]: /api/update-message-flags-for-narrow#parameter-anchor


@@ -1,49 +0,0 @@
{generate_api_header(/scheduled_messages:post)}
## Usage examples
{start_tabs}
{generate_code_example(python)|/scheduled_messages:post|example}
{generate_code_example(javascript)|/scheduled_messages:post|example}
{tab|curl}
``` curl
# Create a scheduled channel message
curl -X POST {{ api_url }}/v1/scheduled_messages \
-u BOT_EMAIL_ADDRESS:BOT_API_KEY \
--data-urlencode type=stream \
--data-urlencode to=9 \
--data-urlencode topic=Hello \
--data-urlencode 'content=Nice to meet everyone!' \
--data-urlencode scheduled_delivery_timestamp=3165826990
# Create a scheduled direct message
curl -X POST {{ api_url }}/v1/scheduled_messages \
-u BOT_EMAIL_ADDRESS:BOT_API_KEY \
--data-urlencode type=direct \
--data-urlencode 'to=[9, 10]' \
--data-urlencode 'content=Can we meet on Monday?' \
--data-urlencode scheduled_delivery_timestamp=3165826990
```
{end_tabs}
## Parameters
{generate_api_arguments_table|zulip.yaml|/scheduled_messages:post}
{generate_parameter_description(/scheduled_messages:post)}
## Response
{generate_return_values_table|zulip.yaml|/scheduled_messages:post}
{generate_response_description(/scheduled_messages:post)}
#### Example response(s)
{generate_code_example|/scheduled_messages:post|fixture}


@@ -1,6 +0,0 @@
# Create a channel
You can create a channel using Zulip's REST API by submitting a
[subscribe](/api/subscribe) request with a channel name that
doesn't yet exist and passing appropriate parameters to define
the initial configuration of the new channel.
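A sketch of doing so with the Python bindings (the channel name and description are placeholders):

```python
import zulip

client = zulip.Client(config_file="~/.zuliprc")

# Subscribing to a channel name that does not exist yet creates the channel.
result = client.add_subscriptions(
    streams=[{"name": "new channel", "description": "A newly created channel."}],
)
```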


@@ -1,254 +0,0 @@
# Deploying bots in production
Usually, work on a bot starts on a laptop. At some point, you'll want
to deploy your bot in a production environment, so that it'll stay up
regardless of what's happening with your laptop. There are several
options for doing so:
* The simplest is running `zulip-run-bot` inside a `screen` session on
a server. This works, but if your server reboots, you'll need to
manually restart it, so we don't recommend it.
* Using `supervisord` or a similar tool for managing a production
process with `zulip-run-bot`. This consumes a bit of resources
(since you need a persistent process running), but otherwise works
great.
* Using the Zulip Botserver, which is a simple Flask server for
running a bot in production, and connecting that to Zulip's outgoing
webhooks feature. This can be deployed in environments like
Heroku's free tier without running a persistent process.
## Zulip Botserver
The Zulip Botserver is for people who want to
* run bots in production.
* run multiple bots at once.
The Zulip Botserver is a Python (Flask) server that implements Zulip's
outgoing webhooks API. You can of course write your own servers using
the outgoing webhooks API, but the Botserver is designed to make it
easy for a novice Python programmer to write a new bot and deploy it
in production.
### How Botserver works
Zulip Botserver starts a web server that listens to incoming messages
from your main Zulip server. The sequence of events in a successful
Botserver interaction is:
1. Your bot user is mentioned or receives a direct message:
```
@**My Bot User** hello world
```
1. The Zulip server sends a POST request to your Botserver endpoint URL:
```
{
"message":{
"content":"@**My Bot User** hello world",
},
"bot_email":"myuserbot-bot@example.com",
"trigger":"mention",
"token":"XXXX"
}
```
This URL is configured in the Zulip web-app in your Bot User's settings.
1. The Botserver searches for a bot to handle the message, and executes your
bot's `handle_message` code.
Your bot's code should work just like it does with `zulip-run-bot`.
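For reference, a bot module that works with `zulip-run-bot` (and therefore
with the Botserver) might look like this minimal sketch; the class name and
reply text are placeholders.
``` python
# A minimal bot module sketch compatible with zulip-run-bot and the
# Botserver; the class name and reply text are placeholders.
from typing import Any, Dict


class HelloWorldHandler:
    def usage(self) -> str:
        return "A trivial bot that replies to every message it handles."

    def handle_message(self, message: Dict[str, Any], bot_handler: Any) -> None:
        # send_reply sends the response to the channel or direct message
        # conversation that the incoming message came from.
        bot_handler.send_reply(message, "Hello, world!")


handler_class = HelloWorldHandler
```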
### Installing the Zulip Botserver
Install the `zulip_botserver` package:
```
pip3 install zulip_botserver
```
### Create a bot in your Zulip organization
{start_tabs}
1. Navigate to the **Bots** tab of the **Personal settings** menu, and click
**Add a new bot**.
1. Set the **Bot type** to **Outgoing webhook**.
1. Set the **endpoint URL** to `https://<host>:<port>` where `host` is the
hostname of the server you'll be running the Botserver on, and `port` is
the port number. The default port is `5002`.
1. Click **Create bot**. You should see the new bot user in the
**Active bots** panel.
{end_tabs}
### Running a bot using the Zulip Botserver
{start_tabs}
1. [Create your bot](#create-a-bot-in-your-zulip-organization) in your Zulip
organization.
1. Download the `zuliprc` file for the bot created above from the
**Bots** tab of the **Personal settings** menu, by clicking the download
(<i class="fa fa-download"></i>) icon under the bot's name.
1. Run the Botserver, where `helloworld` is the name of the bot you
want to run:
`zulip-botserver --config-file <path_to_zuliprc> --bot-name=helloworld`
You can specify the port number and various other options; run
`zulip-botserver --help` to see how to do this.
{end_tabs}
Congrats, everything is set up! Test your Botserver like you would
test a normal bot.
### Running multiple bots using the Zulip Botserver
The Zulip Botserver also supports running multiple bots from a single
Botserver process.
{start_tabs}
1. [Create your bots](#create-a-bot-in-your-zulip-organization)
in your Zulip organization.
1. Download the `botserverrc` file from the **Bots** tab of the
**Personal settings** menu, using the **Download config of all active
outgoing webhook bots in Zulip Botserver format** option.
1. Open the `botserverrc`. It should contain one or more sections that look
like this:
```
[helloworld]
email=foo-bot@hostname
key=dOHHlyqgpt5g0tVuVl6NHxDLlc9eFRX4
site=http://hostname
token=aQVQmSd6j6IHphJ9m1jhgHdbnhl5ZcsY
bot-config-file=~/path/to/helloworld.conf
```
Each section contains the configuration for an outgoing webhook bot.
1. For each bot, enter the name of the bot you want to run in the square
brackets `[]`, e.g., the above example applies to the `helloworld` bot.
To run an external bot, enter the path to the bot's Python file instead,
e.g., `[~/Documents/my_bot_script.py]`.
!!! tip ""
The `bot-config-file` setting is needed only for bots that
use a config file.
1. Run the Zulip Botserver by passing the `botserverrc` to it.
```
zulip-botserver --config-file <path-to-botserverrc> --hostname <address> --port <port>
```
If omitted, `hostname` defaults to `127.0.0.1` and `port` to `5002`.
{end_tabs}
### Running Zulip Botserver with supervisord
[supervisord](http://supervisord.org/) is a popular tool for running
services in production. It helps ensure the service starts on boot,
manages log files, restarts the service if it crashes, etc. This
section documents how to run the Zulip Botserver using *supervisord*.
Running the Zulip Botserver with *supervisord* works almost like
running it manually.
{start_tabs}
1. Install *supervisord* via your package manager; e.g., on Debian/Ubuntu:
```
sudo apt-get install supervisor
```
1. Configure *supervisord*. *supervisord* stores its configuration in
`/etc/supervisor/conf.d`.
* Do **one** of the following:
* Download the [sample config file][supervisord-config-file]
and store it in `/etc/supervisor/conf.d/zulip-botserver.conf`.
* Copy the following section into your existing supervisord config file.
```
[program:zulip-botserver]
command=zulip-botserver --config-file=<path/to/your/botserverrc>
  --hostname <address> --port <port>
startsecs=3
stdout_logfile=/var/log/zulip-botserver.log ; all output of your Botserver will be logged here
redirect_stderr=true
```
* Edit the `<>` sections according to your preferences.
[supervisord-config-file]: https://raw.githubusercontent.com/zulip/python-zulip-api/main/zulip_botserver/zulip-botserver-supervisord.conf
1. Update *supervisord* to read the configuration file:
```
supervisorctl reread
supervisorctl update
```
(You can instead run `/etc/init.d/supervisord restart`, but `reread` and
`update` are less disruptive if you're using *supervisord* for other
services as well.)
1. Test if your setup is successful:
```
supervisorctl status
```
The output should include a line similar to this:
> zulip-botserver RUNNING pid 28154, uptime 0:00:27
The standard output of the Botserver will be logged to the path in
your *supervisord* configuration.
{end_tabs}
If you are hosting the Botserver yourself (as opposed to using a
hosting service that provides SSL), we recommend securing your
Botserver with SSL using an `nginx` or `Apache` reverse proxy and
[Certbot](https://certbot.eff.org/).
### Troubleshooting
- Make sure the API key you're using is for an [outgoing webhook
bot](/api/outgoing-webhooks) and you've
correctly configured the URL for your Botserver.
- Your Botserver needs to be accessible from your Zulip server over
HTTP(S). Make sure any firewall allows the connection. We
recommend using [zulip-run-bot](running-bots) instead for
development/testing on a laptop or other non-server system.
If your Zulip server is self-hosted, you can test by running `curl
http://zulipbotserver.example.com:5002` from your Zulip server;
the output should be:
```
$ curl http://zulipbotserver.example.com:5002/
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>405 Method Not Allowed</title>
<h1>Method Not Allowed</h1>
<p>The method is not allowed for the requested URL.</p>
```
## Related articles
* [Non-webhook integrations](/api/non-webhook-integrations)
* [Running bots](/api/running-bots)
* [Writing bots](/api/writing-bots)

View File

@@ -1,122 +0,0 @@
# Group-setting values
Settings defining permissions in Zulip are increasingly represented
using [user groups](/help/user-groups), which offer much more flexible
configuration than the older [roles](/api/roles-and-permissions) system.
!!! warn ""
**Note**: Many group-valued settings are configured to require
a single system group for their value via
`server_supported_permission_settings`, pending web app UI
changes to fully support group-setting values.
**Changes**: Before Zulip 10.0 (feature level 309), only system
groups were permitted values for group-setting values in
production environments, regardless of the values in
`server_supported_permission_settings`.
In the API, these settings are represented using a **group-setting
value**, which can take two forms:
- An integer user group ID, which can be either a named user group
visible in the UI or a [role-based system group](#system-groups).
- An object with fields `direct_member_ids`, containing a list of
integer user IDs, and `direct_subgroup_ids`, containing a list of
integer group IDs. The setting's value is the union of the
identified collection of users and groups.
Group-setting values in the object form can be thought of as an
anonymous group. They function very much like a named user group
object, and remove the naming and UI overhead involved in creating
a visible user group just to store the value of a single setting.
The server will canonicalize an object with an empty `direct_member_ids`
list and a `direct_subgroup_ids` list that contains just a single group
ID to the integer format.
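As an illustration, the two forms described above look like this; all IDs
below are placeholders.
``` python
# The two group-setting value forms described above; all IDs are placeholders.

# Form 1: an integer user group ID (a named group or a system group).
setting_value = 12

# Form 2: an "anonymous group" object combining users and subgroups.
setting_value = {
    "direct_member_ids": [10, 11],    # individual user IDs
    "direct_subgroup_ids": [12, 13],  # user group IDs
}
```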
## System groups
The Zulip server maintains a collection of system groups that
correspond to the users with a given role; this makes it convenient to
store concepts like "all administrators" in a group-setting
value. These use a special naming convention and can be recognized by
the `is_system_group` property on their group object.
The following system groups are maintained by the Zulip server:
- `role:internet`: Everyone on the Internet has this permission; this
is used to configure the [public access
option](/help/public-access-option).
- `role:everyone`: All users, including guests.
- `role:members`: All users, excluding guests.
- `role:fullmembers`: All [full
members](https://zulip.com/api/roles-and-permissions#determining-if-a-user-is-a-full-member)
of the organization.
- `role:moderators`: All users with at least the moderator role.
- `role:administrators`: All users with at least the administrator
role.
- `role:owners`: All users with the owner role.
- `role:nobody`: The formal empty group. Used in the API to represent
disabling a feature.
Client UI for setting a permission or displaying a group (when
silently mentioned, for example) is encouraged to display system
groups using their description, rather than using their `role:`-prefixed
names, which are chosen to be unique and clear in the API.
System groups should generally not be displayed in UI for
administering an organization's user groups, since they are not
directly mutable.
## Updating group-setting values
The Zulip API uses a special format for modifying an existing setting
using a group-setting value.
A **group-setting update** is an object with a `new` field and an
optional `old` field, each containing a group-setting value. The
setting's value will be set to the membership expressed by the `new`
field.
The `old` field expresses the client's understanding of the current
value of the setting. If the `old` field is present and does not match
the actual current value of the setting, then the request will fail
with error code `EXPECTATION_MISMATCH` and no changes will be applied.
When a user edits the setting in a UI, the resulting API request
should generally always include the `old` field, giving the value
the list had when the user started editing. This accurately expresses
the user's intent, and if two users edit the same list around the
same time, it prevents a situation where the second change
accidentally reverts the first one without either user noticing.
Omitting `old` is appropriate where the intent really is a new complete
list rather than an edit, for example in an integration that syncs the
list from an external source of truth.
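For example, a group-setting update for an interactive edit might look like
this sketch; the IDs are placeholders.
``` python
# A group-setting update object, as described above; IDs are placeholders.
setting_update = {
    # The value the client saw when the user started editing.
    "old": {"direct_member_ids": [10], "direct_subgroup_ids": [12]},
    # The desired new value.
    "new": {"direct_member_ids": [10, 11], "direct_subgroup_ids": [12]},
}
# If "old" no longer matches the setting's current value, the request
# fails with error code EXPECTATION_MISMATCH and no changes are applied.
```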
## Permitted values
Not every possible group-setting value is a valid configuration for a
given group-based setting. For example, as a security hardening
measure, some administrative permissions should never be exercised by
guest users, and the system group for all users, including guests,
should not be offered to users as an option for those settings.
Others have restrictions to only permit system groups due to UI
components not yet having been migrated to support a broader set of
values. In order to avoid this configuration ending up hardcoded in
clients, every permission setting using this framework has an entry in
the `server_supported_permission_settings` section of the [`POST
/register`](/api/register-queue) response.
Clients that support mutating group-setting values must parse that
part of the `register` payload in order to compute the set of
permitted values to offer to the user and avoid server-side errors
when trying to save a value.
Note specifically that the `allow_everyone_group` field, which
determines whether the setting can have the value of "all user
accounts, including guests", also controls whether guest users can
exercise the permission regardless of their membership in the
group-setting value.

View File

@@ -1,81 +0,0 @@
# HTTP headers
This page documents the HTTP headers used by the Zulip API.
The most important detail is that API clients authenticate to the server
using HTTP Basic authentication. If you're using the official [Python or
JavaScript bindings](/api/installation-instructions), this is taken
care of when you configure the bindings.
Otherwise, see the `curl` example on each endpoint's documentation
page, which details the request format.
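For illustration, here is a minimal sketch of Basic authentication using
Python's `requests` library directly; the server URL and credentials are
placeholders. The bot's email address is the username and its API key is the
password.
``` python
# A minimal sketch of HTTP Basic authentication against the Zulip API,
# using the requests library; URL and credentials are placeholders.
import requests

response = requests.get(
    "https://example.zulipchat.com/api/v1/users/me",
    auth=("my-bot@example.zulipchat.com", "BOT_API_KEY"),
)
print(response.json())
```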
Documented below are additional HTTP headers and header conventions
generally used by Zulip:
## The `User-Agent` header
Clients are not required to pass a `User-Agent` HTTP header, but we
highly recommend doing so when writing an integration. It's easy to do
and it can help save time when debugging issues related to an API
client.
If provided, the Zulip server will parse the `User-Agent` HTTP header
in order to identify specific clients and integrations. This
information is used by the server for logging, [usage
statistics](/help/analytics), and on rare occasions, for
backwards-compatibility logic to preserve support for older versions
of official clients.
Official Zulip clients and integrations use a `User-Agent` that starts
with something like `ZulipMobile/20.0.103 `, encoding the name of the
application and its version.
Zulip's official API bindings have reasonable defaults for
`User-Agent`. For example, the official Zulip Python bindings have a
default `User-Agent` starting with `ZulipPython/{version}`, where
`version` is the version of the library.
You can give your bot/integration its own name by passing the `client`
parameter when initializing the Python bindings. For example, the
official Zulip Nagios integration is initialized like this:
``` python
client = zulip.Client(
config_file=opts.config, client=f"ZulipNagios/{VERSION}"
)
```
If you are working on an integration that you plan to share outside
your organization, you can get help picking a good name in
[#integrations][integrations-channel] in the [Zulip development
community](https://zulip.com/development-community/).
## Rate-limiting response headers
To help clients avoid exceeding rate limits, Zulip sets the following
HTTP headers in all API responses:
* `X-RateLimit-Remaining`: The number of additional requests of this
type that the client can send before exceeding its limit.
* `X-RateLimit-Limit`: The limit that would be applicable to a client
that had not made any recent requests of this type. This is useful
for designing a client's burst behavior so as to avoid ever reaching
a rate limit.
* `X-RateLimit-Reset`: The time at which the client will no longer
have any rate limits applied to it (and thus could do a burst of
`X-RateLimit-Limit` requests).
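For example, a client might inspect these headers and pause before sending
further requests; here is a rough sketch using the `requests` library (the
URL and credentials are placeholders, and the sketch assumes
`X-RateLimit-Reset` is a UNIX timestamp).
``` python
# A rough sketch of honoring the rate-limit headers described above;
# the URL and credentials are placeholders.
import time

import requests

response = requests.get(
    "https://example.zulipchat.com/api/v1/users/me",
    auth=("my-bot@example.zulipchat.com", "BOT_API_KEY"),
)
if int(response.headers.get("X-RateLimit-Remaining", "1")) == 0:
    # Assumes X-RateLimit-Reset is a UNIX timestamp.
    reset = float(response.headers["X-RateLimit-Reset"])
    time.sleep(max(0.0, reset - time.time()))
```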
[Zulip's rate limiting rules are configurable][rate-limiting-rules],
and can vary by server and over time. The default configuration
currently limits:
* Every user is limited to 200 total API requests per minute.
* Separate, much lower limits apply to authentication/login attempts.
When the Zulip server has configured multiple rate limits that apply
to a given request, the values returned will be for the strictest
limit.
[rate-limiting-rules]: https://zulip.readthedocs.io/en/latest/production/security-model.html#rate-limiting
[integrations-channel]: https://chat.zulip.org/#narrow/channel/127-integrations/

View File

@@ -1,168 +0,0 @@
#### Messages
* [Send a message](/api/send-message)
* [Upload a file](/api/upload-file)
* [Edit a message](/api/update-message)
* [Delete a message](/api/delete-message)
* [Get messages](/api/get-messages)
* [Construct a narrow](/api/construct-narrow)
* [Add an emoji reaction](/api/add-reaction)
* [Remove an emoji reaction](/api/remove-reaction)
* [Render a message](/api/render-message)
* [Fetch a single message](/api/get-message)
* [Check if messages match a narrow](/api/check-messages-match-narrow)
* [Get a message's edit history](/api/get-message-history)
* [Update personal message flags](/api/update-message-flags)
* [Update personal message flags for narrow](/api/update-message-flags-for-narrow)
* [Mark all messages as read](/api/mark-all-as-read)
* [Mark messages in a channel as read](/api/mark-stream-as-read)
* [Mark messages in a topic as read](/api/mark-topic-as-read)
* [Get a message's read receipts](/api/get-read-receipts)
* [Report a message](/api/report-message)
#### Scheduled messages
* [Get scheduled messages](/api/get-scheduled-messages)
* [Create a scheduled message](/api/create-scheduled-message)
* [Edit a scheduled message](/api/update-scheduled-message)
* [Delete a scheduled message](/api/delete-scheduled-message)
#### Message reminders
* [Create a message reminder](/api/create-message-reminder)
* [Get reminders](/api/get-reminders)
* [Delete a reminder](/api/delete-reminder)
#### Drafts
* [Get drafts](/api/get-drafts)
* [Create drafts](/api/create-drafts)
* [Edit a draft](/api/edit-draft)
* [Delete a draft](/api/delete-draft)
* [Get all saved snippets](/api/get-saved-snippets)
* [Create a saved snippet](/api/create-saved-snippet)
* [Edit a saved snippet](/api/edit-saved-snippet)
* [Delete a saved snippet](/api/delete-saved-snippet)
#### Navigation views
* [Get all navigation views](/api/get-navigation-views)
* [Add a navigation view](/api/add-navigation-view)
* [Update the navigation view](/api/edit-navigation-view)
* [Remove a navigation view](/api/remove-navigation-view)
#### Channels
* [Get subscribed channels](/api/get-subscriptions)
* [Subscribe to a channel](/api/subscribe)
* [Unsubscribe from a channel](/api/unsubscribe)
* [Get subscription status](/api/get-subscription-status)
* [Get channel subscribers](/api/get-subscribers)
* [Update subscription settings](/api/update-subscription-settings)
* [Get all channels](/api/get-streams)
* [Get a channel by ID](/api/get-stream-by-id)
* [Get channel ID](/api/get-stream-id)
* [Create a channel](/api/create-channel)
* [Update a channel](/api/update-stream)
* [Archive a channel](/api/archive-stream)
* [Get channel's email address](/api/get-stream-email-address)
* [Get topics in a channel](/api/get-stream-topics)
* [Topic muting](/api/mute-topic)
* [Update personal preferences for a topic](/api/update-user-topic)
* [Delete a topic](/api/delete-topic)
* [Add a default channel](/api/add-default-stream)
* [Remove a default channel](/api/remove-default-stream)
* [Create a channel folder](/api/create-channel-folder)
* [Get channel folders](/api/get-channel-folders)
* [Reorder channel folders](/api/patch-channel-folders)
* [Update a channel folder](/api/update-channel-folder)
#### Users
* [Get a user](/api/get-user)
* [Get a user by email](/api/get-user-by-email)
* [Get own user](/api/get-own-user)
* [Get users](/api/get-users)
* [Create a user](/api/create-user)
* [Update a user](/api/update-user)
* [Update a user by email](/api/update-user-by-email)
* [Deactivate a user](/api/deactivate-user)
* [Deactivate own user](/api/deactivate-own-user)
* [Reactivate a user](/api/reactivate-user)
* [Get a user's status](/api/get-user-status)
* [Update your status](/api/update-status)
* [Update user status](/api/update-status-for-user)
* [Set "typing" status](/api/set-typing-status)
* [Set "typing" status for message editing](/api/set-typing-status-for-message-edit)
* [Get a user's presence](/api/get-user-presence)
* [Get presence of all users](/api/get-presence)
* [Update your presence](/api/update-presence)
* [Get attachments](/api/get-attachments)
* [Delete an attachment](/api/remove-attachment)
* [Update settings](/api/update-settings)
* [Get user groups](/api/get-user-groups)
* [Create a user group](/api/create-user-group)
* [Update a user group](/api/update-user-group)
* [Deactivate a user group](/api/deactivate-user-group)
* [Update user group members](/api/update-user-group-members)
* [Update subgroups of a user group](/api/update-user-group-subgroups)
* [Get user group membership status](/api/get-is-user-group-member)
* [Get user group members](/api/get-user-group-members)
* [Get subgroups of a user group](/api/get-user-group-subgroups)
* [Mute a user](/api/mute-user)
* [Unmute a user](/api/unmute-user)
* [Get all alert words](/api/get-alert-words)
* [Add alert words](/api/add-alert-words)
* [Remove alert words](/api/remove-alert-words)
#### Invitations
* [Get all invitations](/api/get-invites)
* [Send invitations](/api/send-invites)
* [Create a reusable invitation link](/api/create-invite-link)
* [Resend an email invitation](/api/resend-email-invite)
* [Revoke an email invitation](/api/revoke-email-invite)
* [Revoke a reusable invitation link](/api/revoke-invite-link)
#### Server & organizations
* [Get server settings](/api/get-server-settings)
* [Get linkifiers](/api/get-linkifiers)
* [Add a linkifier](/api/add-linkifier)
* [Update a linkifier](/api/update-linkifier)
* [Remove a linkifier](/api/remove-linkifier)
* [Reorder linkifiers](/api/reorder-linkifiers)
* [Add a code playground](/api/add-code-playground)
* [Remove a code playground](/api/remove-code-playground)
* [Get all custom emoji](/api/get-custom-emoji)
* [Upload custom emoji](/api/upload-custom-emoji)
* [Deactivate custom emoji](/api/deactivate-custom-emoji)
* [Get all custom profile fields](/api/get-custom-profile-fields)
* [Reorder custom profile fields](/api/reorder-custom-profile-fields)
* [Create a custom profile field](/api/create-custom-profile-field)
* [Update realm-level defaults of user settings](/api/update-realm-user-settings-defaults)
* [Get all data exports](/api/get-realm-exports)
* [Create a data export](/api/export-realm)
* [Get data export consent state](/api/get-realm-export-consents)
* [Test welcome bot custom message](/api/test-welcome-bot-custom-message)
#### Real-time events
* [Real time events API](/api/real-time-events)
* [Register an event queue](/api/register-queue)
* [Get events from an event queue](/api/get-events)
* [Delete an event queue](/api/delete-queue)
#### Specialty endpoints
* [Fetch an API key (production)](/api/fetch-api-key)
* [Fetch an API key (development only)](/api/dev-fetch-api-key)
* [Send an E2EE test notification to mobile device(s)](/api/e2ee-test-notify)
* [Register E2EE push device](/api/register-push-device)
* [Mobile notifications](/api/mobile-notifications)
* [Send a test notification to mobile device(s)](/api/test-notify)
* [Add an APNs device token](/api/add-apns-token)
* [Remove an APNs device token](/api/remove-apns-token)
* [Add an FCM registration token](/api/add-fcm-token)
* [Remove an FCM registration token](/api/remove-fcm-token)
* [Create BigBlueButton video call](/api/create-big-blue-button-video-call)

View File

@@ -1,223 +0,0 @@
# Incoming webhook integrations
An incoming webhook allows a third-party service to push data to Zulip when
something happens. There are several ways to set up an incoming webhook in
Zulip:
* Use our [REST API](/api/rest) endpoint for [sending
messages](/api/send-message). This works great for internal tools
or cases where the third-party tool wants to control the formatting
of the messages in Zulip.
* Use one of our supported [integration
frameworks](/integrations/meta-integration), such as the
[Slack-compatible incoming webhook](/integrations/doc/slack_incoming),
[Zapier integration](/integrations/doc/zapier), or
[IFTTT integration](/integrations/doc/ifttt).
* Implement an incoming webhook integration (detailed on this page),
where all the logic for formatting the Zulip messages lives in the
Zulip server. This is how most of [Zulip's official
integrations](/integrations/) work, because they enable Zulip to
support third-party services that just have an "outgoing webhook"
feature (without the third party needing to do any work specific to
Zulip).
In an incoming webhook integration, the third-party service's
"outgoing webhook" feature sends an `HTTP POST` to a special URL when
it has something for you, and then the Zulip "incoming webhook"
integration handles that incoming data to format and send a message in
Zulip.
New official Zulip webhook integrations can take just a few hours to
write, including tests and documentation, if you use the right
process.
## Quick guide
* Set up the
[Zulip development environment](https://zulip.readthedocs.io/en/latest/development/overview.html).
* Use [Zulip's JSON integration](/integrations/doc/json),
<https://webhook.site/>, or a similar site to capture an example
webhook payload from the third-party service. Create a
`zerver/webhooks/<mywebhook>/fixtures/` directory, and add the
captured JSON payload as a test fixture.
* Create an `Integration` object, and add it to the `WEBHOOK_INTEGRATIONS`
list in `zerver/lib/integrations.py`. Search for `WebhookIntegration` in that
file to find an existing one to copy.
* Write a draft webhook handler in `zerver/webhooks/<mywebhook>/view.py`. There
are a lot of examples in the `zerver/webhooks/` directory that you can copy.
We recommend templating from a short one, like `zendesk`.
* Write a test for your fixture in `zerver/webhooks/<mywebhook>/tests.py`.
Run the test for your integration like this:
```
tools/test-backend zerver/webhooks/<mywebhook>/
```
Iterate on debugging the test and webhooks handler until it all
works.
* Capture payloads for the other common types of `POST`s the third-party
service will make, and add tests for them; usually this part of the
process is pretty fast.
* Document the integration in `zerver/webhooks/<mywebhook>/doc.md` (required for
getting it merged into Zulip). You can use existing documentation, like
[this one](https://raw.githubusercontent.com/zulip/zulip/main/zerver/webhooks/github/doc.md),
as a template. This should not take more than 15 minutes, even if you don't speak English
as a first language (we'll clean up the text before merging).
## Hello world walkthrough
Check out the [detailed walkthrough](incoming-webhooks-walkthrough) for step-by-step
instructions.
## Checklist
### Files that need to be created
Select a name for your incoming webhook and use it consistently. The examples
below are for a webhook named `MyWebHook`.
* `zerver/webhooks/mywebhook/__init__.py`: Empty file that is an obligatory
part of every Python package. Remember to `git add` it.
* `zerver/webhooks/mywebhook/view.py`: The main webhook integration function,
called `api_mywebhook_webhook`, along with any necessary helper functions.
* `zerver/webhooks/mywebhook/fixtures/message_type.json`: Sample JSON payload data
used by tests. Add one fixture file per type of message supported by your
integration.
* `zerver/webhooks/mywebhook/tests.py`: Tests for your webhook.
* `zerver/webhooks/mywebhook/doc.md`: End-user documentation explaining
how to add the integration.
* `static/images/integrations/logos/mywebhook.svg`: A square logo for the
platform/server/product you are integrating. Used on the documentation
pages as well as the sender's avatar for messages sent by the integration.
* `static/images/integrations/mywebhook/001.png`: A screenshot of a message
sent by the integration, used on the documentation page. This can be
generated by running `tools/screenshots/generate-integration-docs-screenshot --integration mywebhook`.
* `static/images/integrations/bot_avatars/mywebhook.png`: A square logo for the
platform/server/product you are integrating, used as the bot avatar when
generating screenshots. This can be generated automatically from
`static/images/integrations/logos/mywebhook.svg` by running
`tools/setup/generate_integration_bots_avatars.py`.
### Files that need to be updated
* `zerver/lib/integrations.py`: Add your integration to
`WEBHOOK_INTEGRATIONS`. This will automatically register a
URL for the incoming webhook of the form `api/v1/external/mywebhook` and
associate it with the function called `api_mywebhook_webhook` in
`zerver/webhooks/mywebhook/view.py`. Also add your integration to
`DOC_SCREENSHOT_CONFIG`. This will allow you to automatically generate
a screenshot for the documentation by running
`tools/screenshots/generate-integration-docs-screenshot --integration mywebhook`.
## Common Helpers
* If your integration will receive a test webhook payload, you can use
`get_setup_webhook_message` to create our standard message for test payloads.
You can import this from `zerver/lib/webhooks/common.py`, and it will generate
a message like this: "GitHub webhook is successfully configured! 🎉"
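As a hedged sketch, a webhook view might use it for a test event like this;
the `handle_ping` helper and the idea that your integration has a distinct
test event are hypothetical.
``` python
# A hedged sketch of replying to a third-party "test" event with the
# standard setup message; handle_ping and its wiring are hypothetical.
from django.http import HttpRequest, HttpResponse
from zerver.lib.response import json_success
from zerver.lib.webhooks.common import (
    check_send_webhook_message,
    get_setup_webhook_message,
)
from zerver.models import UserProfile


def handle_ping(request: HttpRequest, user_profile: UserProfile) -> HttpResponse:
    body = get_setup_webhook_message("MyWebHook")
    check_send_webhook_message(request, user_profile, "MyWebHook", body)
    return json_success(request)
```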
## General advice
* Consider using our Zulip markup to make the output from your
integration especially attractive or useful (e.g., emoji, Markdown
emphasis, or @-mentions).
* Use topics effectively to ensure sequential messages about the same
thing are threaded together; this makes for much better consumption
by users. E.g., for a bug tracker integration, put the bug number in
the topic for all messages; for an integration like Nagios, put the
service in the topic.
* Integrations that don't match a team's workflow can often be
uselessly spammy. Give careful thought to providing options for
triggering Zulip messages only for certain message types or certain
projects, and for sending different messages to different channels/topics,
to make it easy for teams to configure the integration to support
their workflow.
* Consistently capitalize the name of the integration in the
documentation and the Client name the way the vendor does. It's OK
to use all-lower-case in the implementation.
* Sometimes it can be helpful to contact the vendor if it appears they
don't have an API or webhook we can use; sometimes the right API
is just not properly documented.
* A helpful tool for testing your integration is
[UltraHook](http://www.ultrahook.com/), which allows you to receive webhook
calls via your local Zulip development environment. This enables you to do end-to-end
testing with live data from the service you're integrating and can help you
spot why something isn't working or if the service is using custom HTTP
headers.
## URL specification
The base URL for an incoming webhook integration bot, where
`INTEGRATION_NAME` is the name of the specific webhook integration and
`API_KEY` is the API key of the bot created by the user for the
integration, is:
```
{{ api_url }}/v1/external/INTEGRATION_NAME?api_key=API_KEY
```
The list of existing webhook integrations can be found by browsing the
[Integrations documentation](/integrations/) or in
`zerver/lib/integrations.py` at `WEBHOOK_INTEGRATIONS`.
Parameters accepted in the URL include:
### api_key *(required)*
The API key of the bot created by the user for the integration. To get a
bot's API key, see the [API keys](/api/api-keys) documentation.
### stream
The channel for the integration to send notifications to. Can be either
the channel ID or the [URL-encoded][url-encoder] channel name. If not
specified, the integration will send direct messages to the bot's owner.
!!! tip ""
A channel ID can be found when [browsing channels][browse-channels]
in the web or desktop apps.
### topic
The topic in the specified channel for the integration to send
notifications to. The topic should also be [URL-encoded][url-encoder].
If not specified, the integration will use a default topic for its
channel messages.
### only_events, exclude_events
Some incoming webhook integrations support these parameters to filter
which events will trigger a notification. You can append either
`&only_events=["event_a","event_b"]` or
`&exclude_events=["event_a","event_b"]` (or both, with different events)
to the URL, with an arbitrary number of supported events.
You can use UNIX-style wildcards like `*` to include multiple events.
For example, `test*` matches every event that starts with `test`.
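The matching behavior can be illustrated with Python's `fnmatch` module;
this is only an illustration of the semantics described above, not the
server's actual implementation.
``` python
# Illustration only: UNIX-style wildcard matching as described above,
# using Python's fnmatch; not the server's actual implementation.
from fnmatch import fnmatch

only_events = ["test*", "build_finished"]
for event in ["test_started", "test_passed", "build_finished", "deploy"]:
    if any(fnmatch(event, pattern) for pattern in only_events):
        print(f"{event}: would trigger a notification")
```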
!!! tip ""
For a list of supported events, see a specific [integration's
documentation](/integrations) page.
[browse-channels]: /help/introduction-to-channels#browse-and-subscribe-to-channels
[add-bot]: /help/add-a-bot-or-integration
[url-encoder]: https://www.urlencoder.org/
## Related articles
* [Integrations overview](/api/integrations-overview)
* [Incoming webhook walkthrough](/api/incoming-webhooks-walkthrough)
* [Non-webhook integrations](/api/non-webhook-integrations)

View File

@@ -1,743 +0,0 @@
# Incoming webhook walkthrough
Below, we explain each part of a simple incoming webhook integration,
called **Hello World**. This integration sends a "hello" message to the `test`
channel and includes a link to the Wikipedia article of the day, which
it formats from JSON data it receives in the HTTP request.
Use this walkthrough to learn how to write your first webhook
integration.
## Step 0: Create fixtures
The first step in creating an incoming webhook is to examine the data that the
service you want to integrate will be sending to Zulip.
* Use [Zulip's JSON integration](/integrations/doc/json),
<https://webhook.site/>, or a similar tool to capture webhook
payload(s) from the service you are integrating. Examining this data
allows you to do two things:
1. Determine how you will need to structure your webhook code, including what
message types your integration should support and how.
2. Create fixtures for your webhook tests.
A test fixture is a small file containing test data, one for each test.
Fixtures enable the testing of webhook integration code without the need to
actually contact the service being integrated.
Because `Hello World` is a very simple integration that does one
thing, it requires only one fixture,
`zerver/webhooks/helloworld/fixtures/hello.json`:
```json
{
"featured_title":"Marilyn Monroe",
"featured_url":"https://en.wikipedia.org/wiki/Marilyn_Monroe",
}
```
When writing your own incoming webhook integration, you'll want to write a test function
for each distinct message condition your integration supports. You'll also need a
corresponding fixture for each of these tests. Depending on the type of data
the 3rd party service sends, your fixture may contain JSON, URL encoded text, or
some other kind of data. See [Step 5: Create automated tests](#step-5-create-automated-tests) or
[Testing](https://zulip.readthedocs.io/en/latest/testing/testing.html) for further details.
### HTTP Headers
Some third-party webhook APIs, such as GitHub's, don't encode all the
information about an event in the JSON request body. Instead, they
put key details like the event type in a separate HTTP header
(generally this is clear in their API documentation). In order to
test Zulip's handling of that integration, you will need to record
which HTTP headers are used with each fixture you capture.
Since this is integration-dependent, Zulip offers a simple API for
doing this, which is probably best explained by looking at the example
for GitHub: `zerver/webhooks/github/view.py`; basically, as part of
writing your integration, you'll write a special function in your
view.py file that maps the filename of the fixture to the set of HTTP
headers to use. This function must be named "fixture_to_headers". Most
integrations will use the same strategy as the GitHub integration:
encoding the third-party service's variable header data (usually just an
event type) in the fixture filename. In such a case, you won't need to
write the logic for this special function yourself; instead, you can
just use the same helper method that the GitHub integration uses.
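For a sense of the shape of such a function, here is a hand-written sketch,
assuming the event type is encoded in the fixture filename before a double
underscore; the header name and naming scheme are assumptions, not the
GitHub integration's exact code.
``` python
# A sketch of a hand-written fixture_to_headers function. It assumes
# fixture filenames like "push__1_commit", where the part before the
# double underscore is the event type; the header name is an assumption.
# Django's test client expects headers in the HTTP_ prefix form.
def fixture_to_headers(filename: str) -> dict[str, str]:
    if "__" in filename:
        event_type = filename.split("__")[0]
        return {"HTTP_X_EVENT_TYPE": event_type}
    return {}
```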
## Step 1: Initialize your webhook python package
In the `zerver/webhooks/` directory, create a new subdirectory that will
contain all of the corresponding code. In our example, it will be
`helloworld`. The new directory will be a python package, so you have
to create an empty `__init__.py` file in that directory via, for
example, `touch zerver/webhooks/helloworld/__init__.py`.
## Step 2: Create main webhook code
The majority of the code for your new integration will be in a single
Python file, `zerver/webhooks/mywebhook/view.py`.
The Hello World integration is in `zerver/webhooks/helloworld/view.py`:
```python
from django.http import HttpRequest, HttpResponse
from zerver.decorator import webhook_view
from zerver.lib.response import json_success
from zerver.lib.typed_endpoint import JsonBodyPayload, typed_endpoint
from zerver.lib.validator import WildValue, check_string
from zerver.lib.webhooks.common import check_send_webhook_message
from zerver.models import UserProfile
@webhook_view("HelloWorld")
@typed_endpoint
def api_helloworld_webhook(
request: HttpRequest,
user_profile: UserProfile,
*,
payload: JsonBodyPayload[WildValue],
) -> HttpResponse:
# construct the body of the message
body = "Hello! I am happy to be here! :smile:"
# try to add the Wikipedia article of the day
body_template = (
"\nThe Wikipedia featured article for today is **[{featured_title}]({featured_url})**"
)
body += body_template.format(
featured_title=payload["featured_title"].tame(check_string),
featured_url=payload["featured_url"].tame(check_string),
)
topic = "Hello World"
# send the message
check_send_webhook_message(request, user_profile, topic, body)
return json_success(request)
```
The above code imports the required functions and defines the main webhook
function `api_helloworld_webhook`, decorating it with `webhook_view` and
`typed_endpoint`. The `typed_endpoint` decorator allows you to
access request variables with `JsonBodyPayload()`. You can find more about `JsonBodyPayload` and request variables in [Writing views](
https://zulip.readthedocs.io/en/latest/tutorials/writing-views.html#request-variables).
You must pass the name of your integration to the
`webhook_view` decorator; that name will be used to
describe your integration in Zulip's analytics (e.g., the `/stats`
page). Here we have used `HelloWorld`. To be consistent with other
integrations, use the name of the product you are integrating in camel
case, spelled as the product spells its own name (except always first
letter upper-case).
The `webhook_view` decorator indicates that the 3rd party service will
send the authorization as an API key in the query parameters. If your service uses
HTTP basic authentication, you would instead use the `authenticated_rest_api_view`
decorator.
You should name your webhook function `api_webhookname_webhook`, where
`webhookname` is the name of your integration and is always lower-case.
At minimum, the webhook function must accept `request` (Django
[HttpRequest](https://docs.djangoproject.com/en/5.0/ref/request-response/#django.http.HttpRequest)
object), and `user_profile` (Zulip's user object). You may also want to
define additional parameters using the `typed_endpoint` decorator.
In the example above, we have defined `payload`, which is populated
from the body of the HTTP request. If your webhook uses a custom channel,
it must exist before a message can be created in it. (See
[Step 5: Create automated tests](#step-5-create-automated-tests) for how
to handle this in tests.)
The function's parameters carry mypy type annotations. See [this
page](https://zulip.readthedocs.io/en/latest/testing/mypy.html) for details about
how to properly annotate your webhook functions.
In the body of the function we define the body of the message as `Hello! I am
happy to be here! :smile:`. The `:smile:` indicates an emoji. Then we append a
link to the Wikipedia article of the day as provided by the json payload.
* Sometimes a JSON payload will not contain all the required keys your
integration checks for. In such a case, any `KeyError` thrown is handled by the server
backend and will create an appropriate response.
Then we send a message with `check_send_webhook_message`, which will
validate the message and do the following:
* Send a public (channel) message if the `stream` query parameter is
specified in the webhook URL.
* If the `stream` query parameter isn't specified, it will send a direct
message to the owner of the webhook bot.
Finally, we return a 200 http status with a JSON format success message via
`json_success(request)`.
## Step 3: Create an API endpoint for the webhook
In order for an incoming webhook to be externally available, it must be mapped
to a URL. This is done in `zerver/lib/integrations.py`.
Look for the lines beginning with:
```python
WEBHOOK_INTEGRATIONS: List[WebhookIntegration] = [
```
And you'll find the entry for Hello World:
```python
WebhookIntegration("helloworld", ["misc"], display_name="Hello World"),
```
This tells the Zulip API to call the `api_helloworld_webhook` function in
`zerver/webhooks/helloworld/view.py` when it receives a request at
`/api/v1/external/helloworld`.
This line also tells Zulip to generate an entry for Hello World on the Zulip
integrations page using `static/images/integrations/logos/helloworld.svg` as its
icon. The second positional argument defines a list of categories for the
integration.
At this point, if you're following along and/or writing your own Hello World
webhook, you have written enough code to test your integration. There are three
tools you can use to test your webhook: two command-line tools and a GUI.
### Webhooks requiring custom configuration
In cases where an incoming webhook integration supports optional URL parameters,
one can use the `url_options` feature. It's a field in the `WebhookIntegration`
class that is used when [generating a URL for an integration](/help/generate-integration-url)
in the web app, which encodes the user input for each URL parameter in the
incoming webhook's URL.
These URL options are declared as follows:
```python
WebhookIntegration(
'helloworld',
...
url_options=[
WebhookUrlOption(
name='ignore_private_repositories',
label='Exclude notifications from private repositories',
validator=check_string
),
],
)
```
`url_options` is a list describing the parameters the web app UI should offer when
generating the incoming webhook URL:
- `name`: The parameter name that is used to encode the user input in the
integration's webhook URL.
- `label`: A short descriptive label for this URL parameter in the web app UI.
- `validator`: A validator function, which is used to determine the input type
for this option in the UI, and to indicate how to validate the input.
Currently, the web app UI only supports these validators:
- `check_bool` for checkbox/select input.
- `check_string` for text input.
!!! warn ""
**Note**: To add support for other validators, you can update
`web/src/integration_url_modal.ts`. Common validators are available in
`zerver/lib/validator.py`.
In rare cases, it may be necessary for an incoming webhook to require
additional user configuration beyond what is specified in the POST
URL. A typical use case for this would be APIs that require clients
to do a callback to get details beyond an opaque object ID that one
would want to include in a Zulip notification message.
The `config_options` field in the `WebhookIntegration` class is reserved
for this use case.
### WebhookUrlOption presets
The `build_preset_config` method creates `WebhookUrlOption` objects with
pre-configured fields. These preset URL options primarily serve two
purposes:
- To construct common `WebhookUrlOption` objects that are used in various
incoming webhook integrations.
- To construct `WebhookUrlOption` objects with special UI in the web-app
for [generating incoming webhook URLs](/help/generate-integration-url).
Using a preset URL option with the `build_preset_config` method:
```python
# zerver/lib/integrations.py
from zerver.lib.webhooks.common import PresetUrlOption, WebhookUrlOption
# -- snip --
WebhookIntegration(
"github",
# -- snip --
url_options=[
WebhookUrlOption.build_preset_config(PresetUrlOption.BRANCHES),
],
),
```
Currently configured preset URL options:
- **`BRANCHES`**: This preset is intended to be used for [version control
integrations](/integrations/version-control), and adds UI for the user to
configure which branches of a project's repository will trigger Zulip
notification messages. When the user specifies which branches to receive
notifications from, the `branches` parameter will be added to the [generated
integration URL](/help/generate-integration-url). For example, if the user
input `main` and `dev` for the branches of their repository, then
`&branches=main%2Cdev` would be appended to the generated integration URL.
- **`IGNORE_PRIVATE_REPOSITORIES`**: This preset is intended to be used for
[version control integrations](/integrations/version-control), and adds UI
for the user to exclude private repositories from triggering Zulip
notification messages. When the user selects this option, the
`ignore_private_repositories` boolean parameter will be added to the
[generated integration URL](/help/generate-integration-url).
- **`MAPPING`**: This preset is intended to be used for [chat-app
integrations](/integrations/communication) (like Slack), and adds a
special option, **Matching Zulip channel**, to the UI for where to send
Zulip notification messages. This special option maps the notification
messages to Zulip channels that match the messages' original channel
name in the third-party app. When selected, this requires setting a
single topic for notification messages, and adds `&mapping=channels`
to the [generated integration URL](/help/generate-integration-url).
## Step 4: Manually testing the webhook
For either one of the command line tools, first, you'll need to get an
API key from the **Bots** section of your Zulip user's **Personal
settings**. To test the webhook, you'll need to [create a
bot](https://zulip.com/help/add-a-bot-or-integration) with the
**Incoming webhook** type. Replace `<api_key>` with your bot's API key
in the examples presented below! This is how Zulip knows that the
request was made by an authorized user.
### Curl
Using curl:
```bash
curl -X POST -H "Content-Type: application/json" -d '{ "featured_title":"Marilyn Monroe", "featured_url":"https://en.wikipedia.org/wiki/Marilyn_Monroe" }' http://localhost:9991/api/v1/external/helloworld\?api_key\=<api_key>
```
After running the above command, you should see something similar to:
```json
{"msg":"","result":"success"}
```
### Management command: send_webhook_fixture_message
Using `manage.py` from within the Zulip development environment:
```console
(zulip-server) vagrant@vagrant:/srv/zulip$
./manage.py send_webhook_fixture_message \
--fixture=zerver/webhooks/helloworld/fixtures/hello.json \
'--url=http://localhost:9991/api/v1/external/helloworld?api_key=<api_key>'
```
After running the above command, you should see something similar to:
```
2016-07-07 15:06:59,187 INFO 127.0.0.1 POST 200 143ms (mem: 6ms/13) (md: 43ms/1) (db: 20ms/9q) (+start: 147ms) /api/v1/external/helloworld (helloworld-bot@zulip.com via ZulipHelloWorldWebhook)
```
Some webhooks require custom HTTP headers, which can be passed using
`./manage.py send_webhook_fixture_message --custom-headers`. For
example:
```
--custom-headers='{"X-Custom-Header": "value"}'
```
The format is a JSON dictionary, so make sure that the header names do
not contain any spaces in them and that you use the precise quoting
approach shown above.
For more information about `manage.py` command-line tools in Zulip, see
the [management commands][management-commands] documentation.
[management-commands]: https://zulip.readthedocs.io/en/latest/production/management-commands.html
### Integrations Dev Panel
This is the GUI tool.
{start_tabs}
1. Run `./tools/run-dev` then go to http://localhost:9991/devtools/integrations/.
1. Set the following mandatory fields:
**Bot** - Any incoming webhook bot.
**Integration** - One of the integrations.
**Fixture** - Though not mandatory, it's recommended that you select one and then tweak it if necessary.
The remaining fields are optional, and the URL will automatically be generated.
1. Click **Send**!
{end_tabs}
By opening Zulip in one tab and then this tool in another, you can quickly tweak
your code and send sample messages for many different test fixtures.
Note: If you want to use custom HTTP headers, they must be entered as a JSON dictionary.
Feel free to use 4-space indentation if you'd like!
Your sample notification may look like:
<img class="screenshot" src="/static/images/api/helloworld-webhook.png" alt="screenshot" />
## Step 5: Create automated tests
Every webhook integration should have a corresponding test file:
`zerver/webhooks/mywebhook/tests.py`.
The Hello World integration's tests are in `zerver/webhooks/helloworld/tests.py`.
You should name the class `<WebhookName>HookTests` and have it inherit from
the base class `WebhookTestCase`. For our HelloWorld webhook, we name the test
class `HelloWorldHookTests`:
```python
class HelloWorldHookTests(WebhookTestCase):
CHANNEL_NAME = "test"
URL_TEMPLATE = "/api/v1/external/helloworld?&api_key={api_key}&stream={stream}"
DIRECT_MESSAGE_URL_TEMPLATE = "/api/v1/external/helloworld?&api_key={api_key}"
WEBHOOK_DIR_NAME = "helloworld"
# Note: Include a test function per each distinct message condition your integration supports
def test_hello_message(self) -> None:
expected_topic = "Hello World"
expected_message = "Hello! I am happy to be here! :smile:\nThe Wikipedia featured article for today is **[Marilyn Monroe](https://en.wikipedia.org/wiki/Marilyn_Monroe)**"
# use fixture named helloworld_hello
self.check_webhook(
"hello",
expected_topic,
expected_message,
content_type="application/x-www-form-urlencoded",
)
```
In the above example, `CHANNEL_NAME`, `URL_TEMPLATE`, and `WEBHOOK_DIR_NAME` refer
to class attributes from the base class, `WebhookTestCase`. These are needed by
the helper function `check_webhook` to determine how to execute
your test. `CHANNEL_NAME` should be set to your default channel. If it doesn't exist,
`check_webhook` will create it while executing your test.
If your test expects a channel name from a test fixture, the value in the fixture
and the value you set for `CHANNEL_NAME` must match. The test helpers use `CHANNEL_NAME`
to create the destination channel, and then create the message to send using the
value from the fixture. If these don't match, the test will fail.
`URL_TEMPLATE` defines how the test runner will call your incoming webhook, in the same way
you would provide a webhook URL to the 3rd party service. `api_key={api_key}` says
that an API key is expected.
When writing tests for your webhook, you'll want to include one test function
(and corresponding fixture) per each distinct message condition that your
integration supports.
If, for example, we added support for sending a goodbye message to our `Hello
World` webhook, we would add another test function to `HelloWorldHookTests`
class called something like `test_goodbye_message`:
```python
def test_goodbye_message(self) -> None:
expected_topic = "Hello World"
expected_message = "Hello! I am happy to be here! :smile:\nThe Wikipedia featured article for today is **[Goodbye](https://en.wikipedia.org/wiki/Goodbye)**"
# use fixture named helloworld_goodbye
self.check_webhook(
"goodbye",
expected_topic,
expected_message,
content_type="application/x-www-form-urlencoded",
)
```
As well as a new fixture `goodbye.json` in
`zerver/webhooks/helloworld/fixtures/`:
```json
{
"featured_title":"Goodbye",
"featured_url":"https://en.wikipedia.org/wiki/Goodbye",
}
```
Also consider if your integration should have negative tests, a test where the
data from the test fixture should result in an error. For details see
[Negative tests](#negative-tests), below.
Once you have written some tests, you can run just these new tests from within
the Zulip development environment with this command:
```console
(zulip-server) vagrant@vagrant:/srv/zulip$
./tools/test-backend zerver/webhooks/helloworld
```
(Note: You must run the tests from the top level of your development directory.
The standard location in a Vagrant environment is `/srv/zulip`. If you are not
using Vagrant, use the directory where you have your development environment.)
You will see some script output and if all the tests have passed, you will see:
```console
Running zerver.webhooks.helloworld.tests.HelloWorldHookTests.test_goodbye_message
Running zerver.webhooks.helloworld.tests.HelloWorldHookTests.test_hello_message
DONE!
```
## Step 6: Create documentation
Next, we add end-user documentation for our integration. You
can see the existing examples at <https://zulip.com/integrations>
or by accessing `/integrations` in your Zulip development environment.
There are two parts to the end-user documentation on this page.
The first is the lozenge in the grid of integrations, showing your
integration logo and name, which links to the full documentation.
This is generated automatically once you've registered the integration
in `WEBHOOK_INTEGRATIONS` in `zerver/lib/integrations.py`, and supports
some customization via options to the `WebhookIntegration` class.
Second, you need to write the actual documentation content in
`zerver/webhooks/mywebhook/doc.md`.
````md
Learn how Zulip integrations work with this simple Hello World example!
1. The Hello World webhook will use the `test` channel, which is created
by default in the Zulip development environment. If you are running
Zulip in production, you should make sure that this channel exists.
1. {!create-an-incoming-webhook.md!}
1. {!generate-webhook-url-basic.md!}
1. To trigger a notification using this example webhook, you can use
`send_webhook_fixture_message` from a [Zulip development
environment](https://zulip.readthedocs.io/en/latest/development/overview.html):
```
(zulip-server) vagrant@vagrant:/srv/zulip$
./manage.py send_webhook_fixture_message \
> --fixture=zerver/webhooks/helloworld/fixtures/hello.json \
> '--url=http://localhost:9991/api/v1/external/helloworld?api_key=abcdefgh&stream=stream%20name;'
```
Or, use curl:
```
curl -X POST -H "Content-Type: application/json" -d '{ "featured_title":"Marilyn Monroe", "featured_url":"https://en.wikipedia.org/wiki/Marilyn_Monroe" }' http://localhost:9991/api/v1/external/helloworld\?api_key=abcdefgh&stream=stream%20name;
```
{!congrats.md!}
![Hello World integration](/static/images/integrations/helloworld/001.png)
````
`{!create-an-incoming-webhook.md!}` and `{!congrats.md!}` are examples of
a Markdown macro. Zulip has a macro-based Markdown/Jinja2 framework that
includes macros for common instructions in Zulip's webhooks/integrations
documentation.
See
[our guide on documenting an integration][integration-docs-guide]
for further details, including how to easily create the message
screenshot. Mostly you should plan on templating off an existing guide, like
[this one](https://raw.githubusercontent.com/zulip/zulip/main/zerver/webhooks/github/doc.md).
[integration-docs-guide]: https://zulip.readthedocs.io/en/latest/documentation/integrations.html
## Step 7: Preparing a pull request to zulip/zulip
When you have finished your webhook integration, follow these guidelines before
pushing the code to your fork and submitting a pull request to zulip/zulip:
- Run tests including linters and ensure you have addressed any issues they
report. See [Testing](https://zulip.readthedocs.io/en/latest/testing/testing.html)
and [Linters](https://zulip.readthedocs.io/en/latest/testing/linters.html) for details.
- Read through [Code styles and conventions](
https://zulip.readthedocs.io/en/latest/contributing/code-style.html) and take a look
through your code to double-check that you've followed Zulip's guidelines.
- Take a look at your Git history to ensure your commits have been clear and
logical (see [Commit discipline](
https://zulip.readthedocs.io/en/latest/contributing/commit-discipline.html) for tips). If not,
consider revising them with `git rebase --interactive`. For most incoming webhooks,
you'll want to squash your changes into a single commit and include a good,
clear commit message.
If you would like feedback on your integration as you go, feel free to post a
message on the [public Zulip instance](https://chat.zulip.org/#narrow/channel/integrations).
You can also create a [draft pull request](
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests#draft-pull-requests) while you
are still working on your integration. See the
[Git guide](https://zulip.readthedocs.io/en/latest/git/pull-requests.html#create-a-pull-request)
for more on Zulip's pull request process.
## Advanced topics
More complex implementation or testing needs may require additional code, beyond
what the standard helper functions provide. This section discusses some of
these situations.
### Negative tests
A negative test is one that should result in an error, such as incorrect data.
The helper functions may interpret this as a test failure, when it should instead
be a successful test of an error condition. To correctly test these cases, you
must explicitly code your test's execution (using other helpers, as needed)
rather than call the usual helper function.
Here is an example from the WordPress integration:
```python
def test_unknown_action_no_data(self) -> None:
    # Mimic check_webhook() to manually execute a negative test.
    # Otherwise its call to send_webhook_payload() would assert on the non-success
    # we are testing. The value of result is the error message the webhook should
    # return if no params are sent. The fixture for this test is an empty file.

    # subscribe to the target channel
    self.subscribe(self.test_user, self.CHANNEL_NAME)

    # post to the webhook url
    post_params = {'stream_name': self.CHANNEL_NAME,
                   'content_type': 'application/x-www-form-urlencoded'}
    result = self.client_post(self.url, 'unknown_action', **post_params)

    # check that we got the expected error message
    self.assert_json_error(result, "Unknown WordPress webhook action: WordPress action")
```
In a normal test, `check_webhook` would handle all of the setup
and then check that the incoming webhook's response matches the expected result. If
the webhook returns an error, the test fails. Instead, this test explicitly does the
setup `check_webhook` would have done, and checks the result itself.
Here, `subscribe` is a test helper that uses `test_user` and
`CHANNEL_NAME` (attributes from the base class) to register the user to receive
messages in the given channel. If the channel doesn't exist, it creates it.
`client_post`, another helper, performs the HTTP POST that calls the incoming
webhook. As long as `self.url` is correct, you don't need to construct the webhook
URL yourself. (In most cases, it is.)
`assert_json_error` then checks if the result matches the expected error.
If you had used `check_webhook`, it would have called
`send_webhook_payload`, which checks the result with `assert_json_success`.
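Schematically, the happy-path flow that `check_webhook` automates looks
something like the sketch below. This is a simplified illustration built from
the helpers described above, not the actual implementation in
`zerver/lib/test_classes.py`; the final assertions in particular are
illustrative.

```python
def check_webhook_sketch(self, fixture_name: str, expected_topic: str,
                         expected_message: str, **kwargs: str) -> None:
    # Set up the channel, as in the negative test above.
    self.subscribe(self.test_user, self.CHANNEL_NAME)
    # Load the fixture and POST it to self.url; send_webhook_payload()
    # asserts that the response was a JSON success.
    payload = self.get_body(fixture_name)
    message = self.send_webhook_payload(self.test_user, self.url, payload, **kwargs)
    # Verify the message the webhook produced (illustrative assertions).
    assert message.topic_name() == expected_topic
    assert message.content == expected_message
```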
### Custom query parameters
Custom arguments passed in URL query parameters work as expected in the webhook
code, but require special handling in tests.
For example, here is the definition of a webhook function that gets both `stream`
and `topic` from the query parameters:
```python
@typed_endpoint
def api_querytest_webhook(
    request: HttpRequest,
    user_profile: UserProfile,
    *,
    payload: Annotated[str, ApiParamConfig(argument_type_is_body=True)],
    stream: str = "test",
    topic: str = "Default Alert",
) -> HttpResponse:
```
In actual use, you might configure the third-party service to call your Zulip
integration with a URL like this:
```
http://myhost/api/v1/external/querytest?api_key=abcdefgh&stream=alerts&topic=queries
```
It provides values for `stream` and `topic`, and the webhook can get those
using `@typed_endpoint` without any special handling. How does this work in a test?
The new attribute `TOPIC` exists only in our class so far. In order to
construct a URL with a query parameter for `topic`, you can pass the
attribute `TOPIC` as a keyword argument to `build_webhook_url`, like so:
```python
class QuerytestHookTests(WebhookTestCase):
    CHANNEL_NAME = 'querytest'
    TOPIC = "Default topic"
    URL_TEMPLATE = "/api/v1/external/querytest?api_key={api_key}&stream={stream}"
    FIXTURE_DIR_NAME = 'querytest'

    def test_querytest_test_one(self) -> None:
        # construct the URL used for this test
        self.TOPIC = "Query test"
        self.url = self.build_webhook_url(topic=self.TOPIC)

        # define the expected message contents
        expected_topic = "Query test"
        expected_message = "This is a test of custom query parameters."

        self.check_webhook('test_one', expected_topic, expected_message,
                           content_type="application/x-www-form-urlencoded")
```
You can also override `get_body` or `get_payload` if your test data
needs to be constructed in an unusual way.
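For example, a hypothetical `get_body` override might construct the POST body
programmatically instead of reading a fixture file from `FIXTURE_DIR_NAME`.
The encoding shown is illustrative, reusing the test class from the example
above:

```python
import json
from urllib.parse import urlencode

class QuerytestHookTests(WebhookTestCase):
    # ... attributes as in the example above ...

    def get_body(self, fixture_name: str) -> str:
        # Build a form-encoded body on the fly rather than reading a file.
        return urlencode({"payload": json.dumps({"event": fixture_name})})
```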
For more, see the definition for the base class, `WebhookTestCase`
in `zerver/lib/test_classes.py`, or just grep for examples.
### Custom HTTP event-type headers
Some third-party services set a custom HTTP header to indicate the event type that
generates a particular payload. To extract such headers, we recommend using the
`validate_extract_webhook_http_header` function in `zerver/lib/webhooks/common.py`,
like so:
```python
event = validate_extract_webhook_http_header(request, header, integration_name)
```
`request` is the `HttpRequest` object passed to your main webhook function. `header`
is the name of the custom header you'd like to extract, such as `X-Event-Key`, and
`integration_name` is the name of the third-party service in question, such as
`GitHub`.
Because such headers are how some integrations indicate the event types of their
payloads, the absence of such a header usually indicates a configuration
issue, where the user either entered the URL for a different integration, or is
running an older version of the integration that doesn't set that header.
If the requisite header is missing, this function sends a direct message to the
owner of the webhook bot, notifying them of the missing header.
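For instance, a webhook view might use the extracted event type to dispatch to
per-event logic. The header name follows the `X-Event-Key` example above; the
service and event names are illustrative:

```python
event = validate_extract_webhook_http_header(request, "X-Event-Key", "ExampleService")
if event == "issue:created":
    subject = "Issue created"  # hypothetical per-event handling
elif event == "issue:closed":
    subject = "Issue closed"
```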
### Handling unexpected webhook event types
Many third-party services have dozens of different event types. In
some cases, we may choose to explicitly ignore specific events. In
other cases, there may be events that are new or events that we don't
know about. In such cases, we recommend raising
`UnsupportedWebhookEventTypeError` (found in `zerver/lib/exceptions.py`),
with a string describing the unsupported event type, like so:
```python
raise UnsupportedWebhookEventTypeError(event_type)
```
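A common pattern (a sketch, with hypothetical handler names) is to keep a
mapping from known event types to handler functions, and raise for anything
unrecognized:

```python
# Hypothetical handlers; real ones would format a payload into a message body.
def get_opened_event_body(payload): ...
def get_closed_event_body(payload): ...

EVENT_FUNCTION_MAPPER = {
    "opened": get_opened_event_body,
    "closed": get_closed_event_body,
}

def get_event_handler(event_type: str):
    handler = EVENT_FUNCTION_MAPPER.get(event_type)
    if handler is None:
        # Unknown or new event type: raise so Zulip's framework can log it.
        raise UnsupportedWebhookEventTypeError(event_type)
    return handler
```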
## Related articles
* [Integrations overview](/api/integrations-overview)
* [Incoming webhook integrations](/api/incoming-webhooks-overview)


@@ -1,26 +0,0 @@
# The Zulip API
Zulip's APIs allow you to integrate other services with Zulip. This
guide should help you find the API you need:
* First, check if the tool you'd like to integrate with Zulip
[already has a native integration](/integrations/).
* Next, check if [Zapier](https://zapier.com/apps) or
[IFTTT](https://ifttt.com/search) has an integration.
[Zulip's Zapier integration](/integrations/doc/zapier) and
[Zulip's IFTTT integration](/integrations/doc/ifttt) often allow
integrating a new service with Zulip without writing any code.
* If you'd like to send content into Zulip, you can
[write a native incoming webhook integration](/api/incoming-webhooks-overview)
or use [Zulip's API for sending messages](/api/send-message).
* If you're building an interactive bot that reacts to activity inside
Zulip, you'll want to look at Zulip's
[Python framework for interactive bots](/api/running-bots) or
[Zulip's real-time events API](/api/get-events).
And if you still need to build your own integration with Zulip, check out
the full [REST API](/api/rest), generally starting with
[installing the API client bindings](/api/installation-instructions).
In case you already know how you want to build your integration and you're
just looking for an API key, we've got you covered [here](/api/api-keys).


@@ -1,133 +0,0 @@
# Integrations overview
Integrations let you connect Zulip with other products. For example, you can get
notification messages in Zulip when an issue in your tracker is updated, or for
alerts from your monitoring tool.
Zulip offers [over 120 native integrations](/integrations/), which take
advantage of Zulip's [topics](/help/introduction-to-topics) to organize
notification messages. Additionally, thousands of integrations are available
through [Zapier](https://zapier.com/apps) and [IFTTT](https://ifttt.com/search).
You can also [connect any webhook designed to work with
Slack](/integrations/doc/slack_incoming) to Zulip.
If you don't find an integration you need, you can:
- [Write your own integration](#write-your-own-integration). You can [submit a
pull
request](https://zulip.readthedocs.io/en/latest/contributing/reviewable-prs.html)
to get your integration merged into the main Zulip repository.
- [File an issue](https://github.com/zulip/zulip/issues/new/choose) to request
an integration (if it's a nice-to-have).
- [Contact Zulip Sales](mailto:sales@zulip.com) to inquire about a custom
development contract.
## Set up an integration
### Native integrations
{start_tabs}
1. [Search Zulip's integrations](/integrations/) for the product you'd like to
connect to Zulip.
1. Click on the card for the product, and follow the instructions on the page.
{end_tabs}
### Integrate via Zapier or IFTTT
If you don't see a native Zulip integration, you can access thousands of
additional integrations through [Zapier](https://zapier.com/apps) and
[IFTTT](https://ifttt.com/search).
{start_tabs}
1. Search [Zapier](https://zapier.com/apps) or [IFTTT](https://ifttt.com/search)
for the product you'd like to connect to Zulip.
1. Follow the integration instructions for [Zapier](/integrations/doc/zapier) or
[IFTTT](/integrations/doc/ifttt).
{end_tabs}
### Integrate via Slack-compatible webhook API
Zulip can process incoming webhook messages written to work with [Slack's
webhook API](https://api.slack.com/messaging/webhooks). This makes it easy to
quickly move your integrations when [migrating your
organization](/help/import-from-slack) from Slack to Zulip, or integrate with
Zulip any product that has a Slack webhook integration.
!!! warn ""
**Note:** In the long term, the recommended approach is to use
Zulip's native integrations, which take advantage of Zulip's topics.
There may also be some quirks when Slack's formatting system is
translated into Zulip's.
{start_tabs}
1. [Create a bot](/help/add-a-bot-or-integration) for the Slack-compatible
webhook. Make sure that you select **Incoming webhook** as the **Bot type**.
1. Decide where to send Slack-compatible webhook notifications, and [generate
the integration URL](https://zulip.com/help/generate-integration-url).
1. Use the generated URL anywhere you would use a Slack webhook.
{end_tabs}
### Integrate via email
If the product you'd like to integrate can send email notifications, you can
[send those emails to a Zulip channel](/help/message-a-channel-by-email). The
email subject will become the Zulip topic, and the email body will become the
Zulip message.
For example, you can configure your personal GitHub notifications to go to a
Zulip channel rather than your email inbox. Notifications for each issue or pull
request will be grouped into a single topic.
## Write your own integration
You can write your own Zulip integrations using the well-documented APIs below.
For example, if your company develops software, you can create a custom
integration to connect your product to Zulip.
If you need help, best-effort community support is available in the [Zulip
development community](https://zulip.com/development-community/). To inquire
about options for custom development, [contact Zulip
Sales](mailto:sales@zulip.com).
### Sending content into Zulip
* If the third-party service supports outgoing webhooks, you likely want to
build an [incoming webhook integration](/api/incoming-webhooks-overview).
* If it doesn't, you may want to write a
[script or plugin integration](/api/non-webhook-integrations).
* The [`zulip-send` tool](/api/send-message) makes it easy to send Zulip
messages from shell scripts.
* Finally, you can
[send messages using Zulip's API](/api/send-message), with bindings for
Python, JavaScript and [other languages](/api/client-libraries).
### Sending and receiving content
* To react to activity inside Zulip, look at Zulip's
[Python framework for interactive bots](/api/running-bots) or
[Zulip's real-time events API](/api/get-events).
* If what you want isn't covered by the above, check out the full
[REST API](/api/rest). The web, mobile, desktop, and terminal apps are
built on top of this API, so it can do anything a human user can do. Most
but not all of the endpoints are documented on this site; if you need
something that isn't there, check out Zulip's
[REST endpoints](https://github.com/zulip/zulip/blob/main/zproject/urls.py).
## Related articles
* [Bots overview](/help/bots-overview)
* [Set up integrations](/help/set-up-integrations)
* [Add a bot or integration](/help/add-a-bot-or-integration)
* [Generate integration URL](/help/generate-integration-url)
* [Request an integration](/help/request-an-integration)


@@ -1,447 +0,0 @@
# Message formatting
Zulip supports an extended version of Markdown for messages, as well as
some HTML-level special behavior. The Zulip help center article on [message
formatting](/help/format-your-message-using-markdown) is the primary
documentation for Zulip's markup features. This article is currently a
changelog for updates to these features.
The [render a message](/api/render-message) endpoint can be used to get
the current HTML version of any Markdown syntax for message content.
## Code blocks
**Changes**: As of Zulip 4.0 (feature level 33), [code blocks][help-code]
can have a `data-code-language` attribute attached to the outer HTML
`div` element, which records the programming language that was selected
for syntax highlighting. This field is used in the
[playgrounds][help-playgrounds] feature for code blocks.
## Global times
**Changes**: In Zulip 3.0 (feature level 8), added [global time
mentions][help-global-time] to supported Markdown message formatting
features.
## Links to channels, topics, and messages
Zulip's markup supports special readable Markdown syntax for [linking
to channels, topics, and messages](/help/link-to-a-message-or-conversation).
Sample HTML formats are as follows:
``` html
<!-- Syntax: #**announce** -->
<a class="stream" data-stream-id="9"
href="/#narrow/channel/9-announce">
#announce
</a>
<!-- Syntax: #**announce>Zulip updates** -->
<a class="stream-topic" data-stream-id="9"
href="/#narrow/channel/9-announce/topic/Zulip.20updates/with/214">
#announce &gt; Zulip updates
</a>
<!-- Syntax: #**announce>Zulip updates**
Generated only if topic had no messages or the link was rendered
before Zulip 10.0 (feature level 347) -->
<a class="stream-topic" data-stream-id="9"
href="/#narrow/channel/9-announce/topic/Zulip.20updates">
#announce &gt; Zulip updates
</a>
<!-- Syntax: #**announce>Zulip updates@214** -->
<a class="message-link"
href="/#narrow/channel/9-announce/topic/Zulip.20updates/near/214">
#announce &gt; Zulip updates @ 💬
</a>
```
The `near` and `with` operators are documented in more detail in the
[search and URL documentation](/api/construct-narrow). When rendering
topic links with the `with` operator, the code doing the rendering may
pick the ID arbitrarily among messages accessible to the client and/or
acting user at the time of rendering. Currently, the server chooses
the message ID to use for `with` operators as the latest message ID in
the topic accessible to the user who wrote the message.
The older stream/topic link elements include a `data-stream-id`, which
historically was used in order to display the current channel name if
the channel had been renamed. That field is **deprecated**, because
displaying an updated value for the most common forms of this syntax
requires parsing the URL to get the topic to use anyway.
When a topic is an empty string, it is replaced with
`realm_empty_topic_display_name` found in the [`POST /register`](/api/register-queue)
response and wrapped with the `<em>` tag.
Sample HTML formats with `"realm_empty_topic_display_name": "general chat"`
are as follows:
```html
<!-- Syntax: #**announce>** -->
<a class="stream-topic" data-stream-id="9"
href="/#narrow/channel/9-announce/topic/with/214">
#announce &gt; <em>general chat</em>
</a>
<!-- Syntax: #**announce>**
Generated only if topic had no messages or the link was rendered
before Zulip 10.0 (feature level 347) -->
<a class="stream-topic" data-stream-id="9"
href="/#narrow/channel/9-announce/topic/">
#announce &gt; <em>general chat</em>
</a>
<!-- Syntax: #**announce>@214** -->
<a class="message-link"
href="/#narrow/channel/9-announce/topic//near/214">
#announce &gt; <em>general chat</em> @ 💬
</a>
```
**Changes**: In Zulip 11.0 (feature level 400), the server switched
its strategy for `with` URL construction to choose the latest
accessible message ID in a topic. Previously, it used the oldest.
Before Zulip 10.0 (feature level 347), the `with` field
was never used in topic link URLs generated by the server; the markup
currently used only for empty topics was used for all topic links.
Before Zulip 10.0 (feature level 346), empty string
was not a valid topic name in syntaxes for linking to topics and
messages.
In Zulip 10.0 (feature level 319), added Markdown syntax
for linking to a specific message in a conversation. Declared the
`data-stream-id` field to be deprecated as detailed above.
In Zulip 11.0 (feature level 383), clients were given the ability to
decide which channel view `a.stream` channel link elements take you to --
i.e., the `href` encodes the default behavior of the link (alongside the
`data-stream-id` field), but clients can override that default based on
the `web_channel_default_view` setting.
## Image previews
When a Zulip message is sent linking to an uploaded image, Zulip will
generate an image preview element with the following format.
``` html
<div class="message_inline_image">
<a href="/user_uploads/path/to/image.png" title="image.png">
<img data-original-dimensions="1920x1080"
data-original-content-type="image/png"
src="/user_uploads/thumbnail/path/to/image.png/840x560.webp">
</a>
</div>
```
If the server has not yet generated thumbnails for the image at
the time the message is sent, the `img` element will be a temporary
loading indicator image and have the `image-loading-placeholder`
class, which clients can use to identify loading indicators and
replace them with a more native loading indicator element if
desired. For example:
``` html
<div class="message_inline_image">
<a href="/user_uploads/path/to/image.png" title="image.png">
<img class="image-loading-placeholder"
data-original-dimensions="1920x1080"
data-original-content-type="image/png"
src="/path/to/spinner.png">
</a>
</div>
```
Once the server has a working thumbnail, such messages will be updated
via an `update_message` event, with the `rendering_only: true` flag
(telling clients not to adjust message edit history), with appropriate
adjusted `rendered_content`. A client should process those events by
just using the updated rendering. If thumbnailing failed, the same
type of event will edit the message's rendered form to remove the
image preview element, so no special client-side logic should be
required to process such errors.
Note that in the uncommon situation that the thumbnailing system is
backlogged, an individual message containing multiple image previews
may be re-rendered multiple times as each image finishes thumbnailing
and triggers a message update.
Clients are recommended to do the following when processing image
previews:
- Clients that would like to use the image's aspect ratio to lay out
one or more images in the message feed may use the
`data-original-dimensions` attribute, which is present even if the
image is a placeholder spinner. This attribute encodes the
dimensions of the original image as `{width}x{height}`. These
dimensions are for the image as rendered, _after_ any EXIF rotation
and mirroring has been applied.
- If the client would like to control the thumbnail resolution used,
it can replace the final section of the URL (`840x560.webp` in the
example above) with the `name` of its preferred format from the set
of supported formats provided by the server in the
`server_thumbnail_formats` portion of the `register` response, as
shown in the sketch after this list. Clients should not make any
assumptions about what format the server will use as the "default"
thumbnail resolution, as it may change over time.
- Download button type elements should provide the original image
(encoded via the `href` of the containing `a` tag).
- The content-type of the original image is provided on a
`data-original-content-type` attribute, so clients can decide if
they are capable of rendering the original image.
- For images whose formats are not widely accepted by browsers
(e.g., HEIC and TIFF), the image may contain a
`data-transcoded-image` attribute, which specifies a high-resolution
thumbnail format which clients may use instead of the original
image.
- Lightbox elements for viewing an image should be designed to
immediately display any already-downloaded thumbnail while fetching
the original-quality image or an appropriate higher-quality
thumbnail from the server, to be transparently swapped in once it is
available. Clients that would like to size the lightbox based on the
size of the original image can use the `data-original-dimensions`
attribute, as described above.
- Animated images will have a `data-animated` attribute on the `img`
tag. As detailed in `server_thumbnail_formats`, both animated and
still images are available for clients to use, depending on their
preference. See, for example, the [web setting][help-previews]
to control whether animated images are autoplayed in the message
feed.
- Clients should not assume that the requested format is the format
that they will receive; in rare cases where the client has an
out-of-date list of `server_thumbnail_formats`, the server will
provide an approximation of the client's requested format. Because
of this, clients should not assume that the pixel dimensions or file
format match what they requested.
- No other processing of the URLs is recommended.
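As a concrete sketch of the format-replacement recommendation above: keep
everything up to the final path component of the thumbnail `src` and
substitute the preferred format's `name`. The format names below are
illustrative; a real client should take them from `server_thumbnail_formats`.

```python
def swap_thumbnail_format(src: str, format_name: str) -> str:
    # Replace the final path component (e.g., "840x560.webp") with the
    # name of the preferred format from server_thumbnail_formats.
    base, _, _ = src.rpartition("/")
    return f"{base}/{format_name}"

# Returns "/user_uploads/thumbnail/path/to/image.png/300x200.webp"
swap_thumbnail_format("/user_uploads/thumbnail/path/to/image.png/840x560.webp", "300x200.webp")
```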
**Changes**: In Zulip 10.0 (feature level 336), added
`data-original-content-type` attribute to convey the type of the
original image, and optional `data-transcoded-image` attribute for
images with formats which are not widely supported by browsers.
**Changes**: In Zulip 9.2 (feature levels 278-279, and 287+), added
`data-original-dimensions` to the `image-loading-placeholder` spinner
images, containing the dimensions of the original image.
In Zulip 9.0 (feature level 276), added `data-original-dimensions`
attribute to images that have been thumbnailed, containing the
dimensions of the full-size version of the image. Thumbnailing itself
was reintroduced at feature level 275.
Previously, with the exception of Zulip servers that used the beta
Thumbor-based implementation years ago, image previews in Zulip
messages were not thumbnailed; the `a` tag and the `img` tag would both
point to the original image.
Clients that correctly implement the current API should handle
Thumbor-based older thumbnails correctly, as long as they do not
assume that `data-original-dimensions` is present. Clients should not
assume that messages sent prior to the introduction of thumbnailing
have been re-rendered to use the new format or have thumbnails
available.
## Video embeddings and previews
When a Zulip message is sent linking to an uploaded video, Zulip may
generate a video preview element with the following format.
``` html
<div class="message_inline_image message_inline_video">
<a href="/user_uploads/path/to/video.mp4">
<video preload="metadata" src="/user_uploads/path/to/video.mp4">
</video>
</a>
</div>
```
## Audio players
When the Markdown media syntax is used with an uploaded file with an
audio `Content-Type`, Zulip will generate an HTML5 `<audio>` player
element. Supported MIME types are currently `audio/aac`, `audio/flac`,
`audio/mpeg`, and `audio/wav`.
For example, `[file.mp3](/user_uploads/path/to/file.mp3)` renders as:
``` html
<audio controls preload="metadata"
src="/user_uploads/path/to/file.mp3" title="file.mp3">
</audio>
```
If the Zulip server has rewritten the URL of the audio file, it will
provide the original URL in a `data-original-url` attribute. The Zulip
server does this for all audio URLs that are not uploaded files.
``` html
<audio controls preload="metadata"
data-original-url="https://example.com/path/to/original/file.mp3"
src="https://zulipcdn.example.com/path/to/playable/file.mp3" title="file.mp3">
</audio>
```
Clients that cannot render an audio player are recommended to convert
audio elements into a link to the original URL.
The Zulip server does not validate whether uploaded files with an
audio `Content-Type` are actually playable.
**Changes**: New in Zulip 11.0 (feature level 405).
## Mentions and silent mentions
Zulip markup supports [mentioning](/help/mention-a-user-or-group)
users, user groups, and a few special "wildcard" mentions (the three
spellings of a channel wildcard mention: `@**all**`, `@**everyone**`,
`@**channel**`, and the topic wildcard mention `@**topic**`).
Mentions result in a message being highlighted for the target user(s),
both in the UI and in notifications, and may also result in the target
user(s) following the conversation, [depending on their
settings](/help/follow-a-topic#follow-topics-where-you-are-mentioned).
Silent mentions of users or groups have none of those side effects,
but nonetheless uniquely identify the user or group in
question. (There's no such thing as a silent wildcard mention.)
Permissions for mentioning users work as follows:
- Any user can mention any other user, though mentions by [muted
users](/help/mute-a-user) are automatically marked as read and thus do
not trigger notifications or otherwise get highlighted like unread
mentions.
- Wildcard mentions are permitted except where [organization-level
restrictions](/help/restrict-wildcard-mentions) apply.
- User groups can be mentioned if and only if the acting user is in
the `can_mention_group` group for that group. All user groups can be
silently mentioned by any user.
- System groups, when (silently) mentioned, should be displayed using
their description, not their `role:nobody` style API names; see the
main [system group
documentation](/api/group-setting-values#system-groups) for
details. System groups can only be silently mentioned right now,
because they happen to all use the empty `Nobody` group for
`can_mention_group`; clients should just use `can_mention_group` to
determine which groups to offer in typeahead in similar contexts.
- Requests to send or edit a message that are impermissible due to
including a mention where the acting user does not have permission to
mention the target will return an error. Mention syntax that does not
correspond to a real user or group is ignored.
Sample markup for `@**Example User**`:
``` html
<span class="user-mention" data-user-id="31">@Example User</span>
```
Sample markup for `@_**Example User**`:
``` html
<span class="user-mention silent" data-user-id="31">Example User</span>
```
Sample markup for `@**topic**`:
``` html
<span class="topic-mention">@topic</span>
```
Sample markup for `@**channel**`:
``` html
<span class="user-mention channel-wildcard-mention"
data-user-id="*">@channel</span>
```
Sample markup for `@*support*`, assuming "support" is a valid group:
``` html
<span class="user-group-mention"
data-user-group-id="17">@support</span>
```
Sample markup for `@_*support*`, assuming "support" is a valid group:
``` html
<span class="user-group-mention silent"
data-user-group-id="17">support</span>
```
Sample markup for `@_*role:administrators*`:
``` html
<span class="user-group-mention silent"
data-user-group-id="5">Administrators</span>
```
When processing mentions, clients should look up the user or group
referenced by ID, and update the textual name for the mention to the
current name for the user or group with that ID. Note that for system
groups, this requires special logic to look up the user-facing name
for that group; see [system
groups](/api/group-setting-values#system-groups) for details.
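A minimal sketch of that lookup for user mentions, assuming the client
maintains a map from user ID to user data (group mentions and system groups
would need analogous logic):

```python
def mention_label(data_user_id: int, users_by_id: dict[int, dict]) -> str:
    # Re-resolve the display name at render time from the data-user-id
    # attribute; fall back only if the user is unknown to the client.
    user = users_by_id.get(data_user_id)
    return f"@{user['full_name']}" if user else "@(unknown user)"
```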
**Changes**: Prior to Zulip 10.0 (feature level 333), it was not
possible to silently mention [system
groups](/api/group-setting-values#system-groups).
In Zulip 9.0 (feature level 247), `channel` was added to the supported
[wildcard][help-mention-all] options used in the
[mentions][help-mentions] Markdown message formatting feature.
## Spoilers
**Changes**: In Zulip 3.0 (feature level 15), added
[spoilers][help-spoilers] to supported Markdown message formatting
features.
## Removed features
### Removed legacy Dropbox link preview markup
In Zulip 11.0 (feature level 395), the Zulip server stopped generating
legacy Dropbox link previews. Dropbox links are now previewed just
like standard Zulip image/link previews. However, some legacy Dropbox
previews may exist in existing messages.
Clients are recommended to prune these previews from message HTML;
since they always appear after the actual link, there is no loss of
information/functionality. They can be recognized via the classes
`message_inline_ref`, `message_inline_image_desc`, and
`message_inline_image_title`:
``` html
<div class="message_inline_ref">
<a href="https://www.dropbox.com/sh/cm39k9e04z7fhim/AAAII5NK-9daee3FcF41anEua?dl=" title="Saves">
<img src="/path/to/folder_dropbox.png">
</a>
<div><div class="message_inline_image_title">Saves</div>
<desc class="message_inline_image_desc"></desc>
</div>
</div>
```
### Removed legacy avatar markup
In Zulip 4.0 (feature level 24), the rarely used `!avatar()`
and `!gravatar()` markup syntaxes, which were never documented and had
inconsistent syntax, were removed.
## Related articles
* [Markdown formatting](/help/format-your-message-using-markdown)
* [Send a message](/api/send-message)
* [Render a message](/api/render-message)
[help-code]: /help/code-blocks
[help-playgrounds]: /help/code-blocks#code-playgrounds
[help-spoilers]: /help/spoilers
[help-global-time]: /help/global-times
[help-mentions]: /help/mention-a-user-or-group
[help-mention-all]: /help/mention-a-user-or-group#mention-everyone-on-a-channel
[help-previews]: /help/image-video-and-website-previews#configure-how-animated-images-are-played


@@ -1,130 +0,0 @@
# Mobile notifications
Zulip Server 11.0+ supports end-to-end encryption (E2EE) for mobile
push notifications. Mobile push notifications sent by all Zulip
servers go through Zulip's mobile push notifications service, which
then delivers the notifications through the appropriate
platform-specific push notification service (Google's FCM or Apple's
APNs). E2EE push notifications ensure that mobile notification message
content and metadata are not visible to intermediaries.
Mobile clients that have [registered an E2EE push
device](/api/register-push-device) will receive mobile notifications
end-to-end encrypted by their Zulip server.
This page documents the format of the encrypted JSON-format payloads
that the client will receive through this protocol. The same encrypted
payload formats are used for both Firebase Cloud Messaging (FCM) and
Apple Push Notification service (APNs).
## Payload examples
### New channel message
Sample JSON data that gets encrypted:
```json
{
"channel_id": 10,
"channel_name": "Denmark",
"content": "@test_user_group",
"mentioned_user_group_id": 41,
"mentioned_user_group_name": "test_user_group",
"message_id": 45,
"realm_name": "Zulip Dev",
"realm_url": "http://zulip.testserver",
"recipient_type": "channel",
"sender_avatar_url": "https://secure.gravatar.com/avatar/818c212b9f8830dfef491b3f7da99a14?d=identicon&version=1",
"sender_full_name": "aaron",
"sender_id": 6,
"time": 1754385395,
"topic": "test",
"type": "message",
"user_id": 10
}
```
- The `mentioned_user_group_id` and `mentioned_user_group_name` fields
are only present for messages that mention a group containing the
current user, and that triggered a mobile notification because of that
group mention. For example, for messages that mention both the user
directly and a group containing the user, these fields will not be
present in the payload, because the direct mention takes precedence.
**Changes**: New in Zulip 11.0 (feature level 413).
### New direct message
Sample JSON data that gets encrypted:
```json
{
"content": "test content",
"message_id": 46,
"pm_users": "6,10,12,15",
"realm_name": "Zulip Dev",
"realm_url": "http://zulip.testserver",
"recipient_type": "direct",
"sender_avatar_url": "https://secure.gravatar.com/avatar/818c212b9f8830dfef491b3f7da99a14?d=identicon&version=1",
"sender_full_name": "aaron",
"sender_id": 6,
"time": 1754385290,
"type": "message",
"user_id": 10
}
```
- **Group direct messages**: The `pm_users` string field is only
present for group direct messages, containing a sorted comma-separated
list of all user IDs in the group direct message conversation,
including both `user_id` and `sender_id`.
**Changes**: New in Zulip 11.0 (feature level 413).
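For example, a client could recover the participants of a group direct
message conversation from that field like this (a sketch, using the field
names documented above):

```python
user_ids = [int(uid) for uid in payload["pm_users"].split(",")]
# Everyone in the conversation other than the notified user:
other_participants = [uid for uid in user_ids if uid != payload["user_id"]]
```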
### Remove notifications
When messages that had previously been included in mobile
notifications are marked as read, are deleted, become inaccessible, or
otherwise should no longer be displayed to the user, a removal
notification is sent.
Sample JSON data that gets encrypted:
```json
{
"message_ids": [
31,
32
],
"realm_name": "Zulip Dev",
"realm_url": "http://zulip.testserver",
"type": "remove",
"user_id": 10
}
```
**Changes**: New in Zulip 11.0 (feature level 413).
### Test push notification
A user can trigger [sending an E2EE test push notification](/api/e2ee-test-notify)
to the user's selected mobile device or all of their mobile devices.
Sample JSON data that gets encrypted:
```json
{
"realm_name": "Zulip Dev",
"realm_url": "http://zulip.testserver",
"time": 1754577820,
"type": "test",
"user_id": 10
}
```
**Changes**: New in Zulip 11.0 (feature level 420).
## Future work
This page will eventually also document the formats of the APNs and
FCM payloads wrapping the encrypted content.


@@ -1,64 +0,0 @@
# Error handling
Zulip's API will always return a JSON format response.
The HTTP status code indicates whether the request was successful
(200 = success, 4xx = user error, 5xx = server error).
Every response, both success and error responses, will contain at least
two keys:
- `msg`: an internationalized, human-readable error message string.
- `result`: either `"error"` or `"success"`, which is redundant with the
HTTP status code, but is convenient when print debugging.
Every error response will also contain an additional key:
- `code`: a machine-readable error string, with a default value of
`"BAD_REQUEST"` for general errors.
Clients should always check `code`, rather than `msg`, when looking for
specific error conditions. The string values for `msg` are
internationalized (e.g., the server will send the error message
translated into French if the user has a French locale), so checking
those strings will result in buggy code.
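For example, a client might branch on `code` rather than `msg` like this.
This is a sketch; `RATE_LIMIT_HIT` is one possible `code` value, and the
handler functions are hypothetical:

```python
data = response.json()  # response from any Zulip REST API endpoint
if data["result"] == "error":
    # Match on the machine-readable code, never on the translated msg.
    if data.get("code", "BAD_REQUEST") == "RATE_LIMIT_HIT":
        schedule_retry()  # hypothetical client-side handler
    else:
        log_api_error(data["msg"])  # fine to log msg, not to match on it
```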
!!! tip ""
If a client needs information that is only present in the string value
of `msg` for a particular error response, then the developers
implementing the client should [start a conversation here][api-design]
in order to discuss getting a specific error `code` and/or relevant
additional key/value pairs for that error response.
In addition to the keys described above, some error responses will
contain other keys with further details that are useful for clients. The
specific keys present depend on the error `code`, and are documented at
the API endpoints where these particular errors appear.
**Changes**: Before Zulip 5.0 (feature level 76), error responses
did not always contain a `code` key; its absence indicated that no specific
error `code` had been allocated for that error.
## Common error responses
Documented below are some error responses that are common to many
endpoints:
{generate_code_example|/rest-error-handling:post|fixture}
## Ignored parameters
In JSON success responses, all Zulip REST API endpoints may return
an array of parameters sent in the request that are not supported
by that specific endpoint.
While this can be expected, e.g., when sending both current and legacy
names for a parameter to a Zulip server of unknown version, this often
indicates either a bug in the client implementation or an attempt to
configure a new feature while connected to an older Zulip server that
does not support said feature.
{generate_code_example|/settings:patch|fixture}
[api-design]: https://chat.zulip.org/#narrow/channel/378-api-design


@@ -1,126 +0,0 @@
# Roles and permissions
Zulip offers several levels of permissions based on a
[user's role](/help/user-roles) in a Zulip organization.
Here are some important details to note when working with these
roles and permissions in Zulip's API:
## A user's role
A user's account data include a `role` property, which contains the
user's role in the Zulip organization. These roles are encoded as:
* Organization owner: 100
* Organization administrator: 200
* Organization moderator: 300
* Member: 400
* Guest: 600
User account data also include these boolean properties that duplicate
the related roles above:
* `is_owner` specifying whether the user is an organization owner.
* `is_admin` specifying whether the user is an organization administrator.
* `is_guest` specifying whether the user is a guest user.
These are intended as conveniences for simple clients, and clients
should prefer using the `role` field, since only that one is updated
by the [events API](/api/get-events).
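For example, a client that needs to know whether a user is at least an
organization administrator can compare against the encodings listed above
(a minimal sketch):

```python
ROLE_OWNER, ROLE_ADMINISTRATOR, ROLE_MODERATOR = 100, 200, 300
ROLE_MEMBER, ROLE_GUEST = 400, 600

def is_admin_or_owner(user: dict) -> bool:
    # Prefer role over the is_owner/is_admin booleans, since only role
    # is kept current by the events API.
    return user["role"] <= ROLE_ADMINISTRATOR
```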
Note that [`POST /register`](/api/register-queue) also returns an
`is_moderator` boolean property specifying whether the current user is
at least an organization moderator. The property will be true for admins
and owners too.
Additionally, user account data include an `is_billing_admin` property
specifying whether the user is a billing administrator for the Zulip
organization, which is not related to one of the roles listed above,
but rather allows for specific permissions related to billing
administration in [paid Zulip Cloud plans](https://zulip.com/plans/).
### User account data in the API
Endpoints that return the user account data / properties mentioned
above are:
* [`GET /users`](/api/get-users)
* [`GET /users/{user_id}`](/api/get-user)
* [`GET /users/{email}`](/api/get-user-by-email)
* [`GET /users/me`](/api/get-own-user)
* [`GET /events`](/api/get-events)
* [`POST /register`](/api/register-queue)
Note that the [`POST /register` endpoint](/api/register-queue) returns
the above boolean properties to describe the role of the current user,
when `realm_user` is present in `fetch_event_types`.
Additionally, the specific events returned by the
[`GET /events` endpoint](/api/get-events) containing data related
to user accounts and roles are the [`realm_user` add
event](/api/get-events#realm_user-add), and the
[`realm_user` update event](/api/get-events#realm_user-update).
## Permission levels
Many areas of Zulip are customizable by the roles
above, such as (but not limited to) [restricting message editing and
deletion](/help/restrict-message-editing-and-deletion) and various
permissions for different [channel types](/help/channel-permissions).
The potential permission levels are:
* Everyone / Any user including Guests (least restrictive)
* Members
* Full members
* Moderators
* Administrators
* Owners
* Nobody (most restrictive)
These permission levels and policies in the API are designed to be
cutoffs in that users with the specified role and above have the
specified ability or access. For example, a permission level documented
as 'moderators only' includes organization moderators, administrators,
and owners.
Note that specific settings and policies in the Zulip API that use these
permission levels will likely support a subset of those listed above.
## Group-based permissions
Some settings have been migrated to a more flexible system based on
[user groups](/api/group-setting-values).
## Determining if a user is a full member
When a Zulip organization has set up a [waiting period before new members
turn into full members](/help/restrict-permissions-of-new-members),
clients will need to determine if a user's account has aged past the
organization's waiting period threshold.
The `realm_waiting_period_threshold`, which is the number of days until
a user's account is treated as a full member, is returned by the
[`POST /register` endpoint](/api/register-queue) when `realm` is present
in `fetch_event_types`.
Clients can compare the `realm_waiting_period_threshold` to a user
account's `date_joined` property, which is the time the user account
was created, to determine if a user has the permissions of a full
member or a new member.
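A sketch of that comparison, assuming `date_joined` has already been parsed
into a timezone-aware datetime:

```python
from datetime import datetime, timedelta, timezone

def is_full_member(date_joined: datetime, realm_waiting_period_threshold: int) -> bool:
    # The account must be older than the realm's waiting period (in days).
    waiting_period = timedelta(days=realm_waiting_period_threshold)
    return datetime.now(timezone.utc) - date_joined >= waiting_period
```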


@@ -1,74 +0,0 @@
# Interactive bots
Zulip's API has a powerful framework for interactive bots that react
to messages in Zulip.
## Running a bot
This guide will show you how to run an existing Zulip bot
found in [zulip_bots/bots](
https://github.com/zulip/python-zulip-api/tree/main/zulip_bots/zulip_bots/bots).
You'll need:
* An account in a Zulip organization
(e.g., [the Zulip development community](https://zulip.com/development-community/),
`{{ display_host }}`, or a Zulip organization on your own
[development](https://zulip.readthedocs.io/en/latest/development/overview.html) or
[production](https://zulip.readthedocs.io/en/latest/production/install.html) server).
* A computer to run the bot on.
**Note: Please be considerate when testing experimental bots on public servers such as chat.zulip.org.**
{start_tabs}
1. [Create a bot](/help/add-a-bot-or-integration), making sure to select
**Generic bot** as the **Bot type**.
1. [Download the bot's `zuliprc` file](/api/configuring-python-bindings#download-a-zuliprc-file).
1. Use the following command to install the
[`zulip_bots` Python package](https://pypi.org/project/zulip-bots/):
```
pip3 install zulip_bots
```
1. Use the following command to start the bot process *(replacing
`~/path/to/zuliprc` with the path to the `zuliprc` file you downloaded above)*:
```
zulip-run-bot <bot-name> --config-file ~/path/to/zuliprc
```
1. Check the output of the command above to make sure your bot is running.
It should include the following line:
```
INFO:root:starting message handling...
```
1. Test your setup by [starting a new direct message](/help/starting-a-new-direct-message)
with the bot or [mentioning](/help/mention-a-user-or-group) the bot on a channel.
!!! tip ""
To use the latest development version of the `zulip_bots` package, follow
[these steps](writing-bots#installing-a-development-version-of-the-zulip-bots-package).
{end_tabs}
You can now play around with the bot and get it configured the way you
like. Eventually, you'll probably want to run it in a production
environment where it'll stay up, by [deploying](/api/deploying-bots) it on a
server using the Zulip Botserver.
## Common problems
* My bot won't start
    * Ensure that your API config file is correct (download the config file from the server).
    * Ensure that your bot script is located in `zulip_bots/bots/<my-bot>/`.
    * Are you using your own Zulip development server? Ensure that you run your bot outside
      the Vagrant environment.
    * Some bots require Python 3. Try switching to a Python 3 environment before running
      your bot.
## Related articles
* [Non-webhook integrations](/api/non-webhook-integrations)
* [Deploying bots](/api/deploying-bots)
* [Writing bots](/api/writing-bots)


@@ -1,29 +0,0 @@
## Integrations
* [Overview](/api/integrations-overview)
* [Incoming webhook integrations](/api/incoming-webhooks-overview)
* [Hello world walkthrough](/api/incoming-webhooks-walkthrough)
* [Non-webhook integrations](/api/non-webhook-integrations)
## Interactive bots (beta)
* [Running bots](/api/running-bots)
* [Deploying bots](/api/deploying-bots)
* [Writing bots](/api/writing-bots)
* [Outgoing webhooks](/api/outgoing-webhooks)
## REST API
* [Overview](/api/rest)
* [Installation instructions](/api/installation-instructions)
* [API keys](/api/api-keys)
* [Configuring the Python bindings](/api/configuring-python-bindings)
* [HTTP headers](/api/http-headers)
* [Error handling](/api/rest-error-handling)
* [Roles and permissions](/api/roles-and-permissions)
* [Group-setting values](/api/group-setting-values)
* [Message formatting](/api/message-formatting)
* [Client libraries](/api/client-libraries)
* [API changelog](/api/changelog)
{!rest-endpoints.md!}

babel.config.js (new file)

@@ -0,0 +1,25 @@
"use strict";
module.exports = {
    plugins: [
        [
            "formatjs",
            {
                additionalFunctionNames: ["$t", "$t_html"],
                overrideIdFn: (id, defaultMessage) => defaultMessage,
            },
        ],
    ],
    presets: [
        [
            "@babel/preset-env",
            {
                corejs: "3.20",
                shippedProposals: true,
                useBuiltIns: "usage",
            },
        ],
        "@babel/typescript",
    ],
    sourceType: "unambiguous",
};


@@ -3,6 +3,7 @@ from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ("contenttypes", "0001_initial"),
    ]


@@ -1,90 +0,0 @@
# Generated by Django 5.0.7 on 2024-08-13 19:41
import django.db.models.deletion
import django.utils.timezone
from django.db import migrations, models
class Migration(migrations.Migration):
    replaces = [
        ("confirmation", "0001_initial"),
        ("confirmation", "0002_realmcreationkey"),
        ("confirmation", "0003_emailchangeconfirmation"),
        ("confirmation", "0004_remove_confirmationmanager"),
        ("confirmation", "0005_confirmation_realm"),
        ("confirmation", "0006_realmcreationkey_presume_email_valid"),
        ("confirmation", "0007_add_indexes"),
        ("confirmation", "0008_confirmation_expiry_date"),
        ("confirmation", "0009_confirmation_expiry_date_backfill"),
        ("confirmation", "0010_alter_confirmation_expiry_date"),
        ("confirmation", "0011_alter_confirmation_expiry_date"),
        ("confirmation", "0012_alter_confirmation_id"),
        ("confirmation", "0013_alter_realmcreationkey_id"),
        ("confirmation", "0014_confirmation_confirmatio_content_80155a_idx"),
    ]

    initial = True

    dependencies = [
        ("contenttypes", "0001_initial"),
        ("zerver", "0001_initial"),
    ]

    operations = [
        migrations.CreateModel(
            name="RealmCreationKey",
            fields=[
                (
                    "id",
                    models.BigAutoField(
                        auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
                    ),
                ),
                (
                    "creation_key",
                    models.CharField(db_index=True, max_length=40, verbose_name="activation key"),
                ),
                (
                    "date_created",
                    models.DateTimeField(default=django.utils.timezone.now, verbose_name="created"),
                ),
                ("presume_email_valid", models.BooleanField(default=False)),
            ],
        ),
        migrations.CreateModel(
            name="Confirmation",
            fields=[
                (
                    "id",
                    models.BigAutoField(
                        auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
                    ),
                ),
                ("object_id", models.PositiveIntegerField(db_index=True)),
                ("date_sent", models.DateTimeField(db_index=True)),
                ("confirmation_key", models.CharField(db_index=True, max_length=40)),
                (
                    "content_type",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE, to="contenttypes.contenttype"
                    ),
                ),
                ("type", models.PositiveSmallIntegerField()),
                (
                    "realm",
                    models.ForeignKey(
                        null=True, on_delete=django.db.models.deletion.CASCADE, to="zerver.realm"
                    ),
                ),
                ("expiry_date", models.DateTimeField(db_index=True, null=True)),
            ],
            options={
                "unique_together": {("type", "confirmation_key")},
                "indexes": [
                    models.Index(
                        fields=["content_type", "object_id"], name="confirmatio_content_80155a_idx"
                    )
                ],
            },
        ),
    ]


@@ -3,6 +3,7 @@ from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ("confirmation", "0001_initial"),
    ]


@@ -3,6 +3,7 @@ from django.db import migrations
class Migration(migrations.Migration):
    dependencies = [
        ("confirmation", "0002_realmcreationkey"),
    ]


@@ -3,6 +3,7 @@ from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("confirmation", "0003_emailchangeconfirmation"),
]

Some files were not shown because too many files have changed in this diff.