Compare commits


81 Commits
1.3.4 ... 1.3.7

Author SHA1 Message Date
Tim Abbott
54c964a332 Rewrite the email gateway integration instructions. 2015-10-19 10:10:20 -07:00
Tim Abbott
a6ddd28c9e Clarify the steps in the outgoing SMTP setup process. 2015-10-19 10:09:45 -07:00
Tim Abbott
494797ea0a Fix has_valid_realm logic following get_realm refactor. 2015-10-19 09:59:06 -07:00
Tim Abbott
3e1f4e611c Clarify on zulip.com signup form that we're not taking new teams. 2015-10-19 09:37:24 -07:00
Tim Abbott
758baca01a run-dev.py: Report a nice error if you run it as root.
Fixes #172.
2015-10-15 12:45:38 -04:00
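A minimal sketch of this kind of root guard (not the actual run-dev.py code; the error wording is an assumption):

```python
import os
import sys

def fail_if_root():
    # os.geteuid() exists only on POSIX systems, hence the hasattr check.
    if hasattr(os, "geteuid") and os.geteuid() == 0:
        # Hypothetical error text; the real script's message may differ.
        print("Error: run-dev.py should not be run as root.")
        print("Re-run it as an unprivileged user.")
        sys.exit(1)

fail_if_root()
```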
Tim Abbott
3167b64d1c Extend changelog with other unreleased improvements since 1.3.6. 2015-10-15 12:31:28 -04:00
Nicholas Bergson-Shilcock
89a2765553 Turn off desktop notifications by default for new users.
New users will no longer get desktop and audible notifications for all streams
by default.

This also updates the `day1` follow-up email to let users know they can
customize how and when Zulip notifies them of new messages.

Lastly, this adds a `changelog.md` file, following the conventions from
keepachangelog.com, to track changes for new releases.
2015-10-15 12:25:32 -04:00
Tim Abbott
e75ba630fb initialize-database: Make management command errors fatal again.
We accidentally made this non-fatal when we added the nice error
output telling users to run postgres-init-db.
2015-10-15 12:21:46 -04:00
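For context, in a Django management command the difference between a fatal and non-fatal error is whether you raise CommandError (nonzero exit status) or merely print a hint (exit 0). A sketch with a hypothetical setup step, not Zulip's actual command:

```python
from django.core.management.base import BaseCommand, CommandError

def do_database_setup():
    # Hypothetical placeholder for the real initialization steps.
    pass

class Command(BaseCommand):
    help = "Initialize the database (sketch only)."

    def handle(self, *args, **options):
        try:
            do_database_setup()
        except Exception as e:
            # Printing the friendly hint alone would leave the exit
            # status at 0; raising CommandError makes the failure fatal.
            self.stderr.write("Hint: you may need to re-run the postgres init scripts.")
            raise CommandError("Database initialization failed: %s" % (e,))
```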
Tim Abbott
bf694fa832 Flush memcached whenever we drop the databases.
This fixes some issues that we've had where commands will fail in
confusing ways after the database is rebuilt, because data from before
the database was dropped is still in the memcached cache.
2015-10-15 12:18:41 -04:00
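The idea, as a small sketch using the python-memcached client (the server address is an assumption):

```python
import memcache  # python-memcached

def flush_memcached(server="127.0.0.1:11211"):
    # After dropping and rebuilding the database, flush memcached so
    # that no cached rows from the old database survive the rebuild.
    cache = memcache.Client([server])
    cache.flush_all()

flush_memcached()
```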
Tim Abbott
32aea4c9dd Fix get_unique_open_realm always returning None in production.
Fixes #186.
2015-10-15 10:21:04 -04:00
Tim Abbott
5d22f5ee0a Improve LDAP_APPEND_DOMAIN default.
The documentation suggests the default is None; this change makes that
true.  Also make the actual code robust to this being set to "" instead.
2015-10-15 09:16:59 -04:00
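A sketch of the robustness point (not Zulip's actual code): a falsiness check treats None and "" identically.

```python
def append_ldap_domain(username, append_domain=None):
    # `if not append_domain` catches both None and "" equally.
    if not append_domain:
        return username
    return "%s@%s" % (username, append_domain)

assert append_ldap_domain("alice") == "alice"
assert append_ldap_domain("alice", "") == "alice"
assert append_ldap_domain("alice", "example.com") == "alice@example.com"
```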
Tim Abbott
71a06d58de Convert uses of Realm.objects.get() to get_realm().
get_realm is better in two key ways:
* It fetches the data via memcached and thus is faster.
* It does a case-insensitive query and thus is safer.
2015-10-15 09:16:58 -04:00
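The pattern, sketched with a plain dict standing in for memcached and a placeholder for the database query (hypothetical names, not Zulip's implementation):

```python
_realm_cache = {}  # stand-in for memcached in this sketch

def fetch_realm_from_db(domain):
    # Placeholder for the case-insensitive database query (in Django
    # terms, roughly Realm.objects.filter(domain__iexact=domain)).
    return {"domain": domain}

def get_realm(domain):
    # Normalize case so lookups are case-insensitive, and consult the
    # cache before hitting the database.
    key = domain.lower()
    if key not in _realm_cache:
        _realm_cache[key] = fetch_realm_from_db(key)
    return _realm_cache[key]

assert get_realm("Example.COM") is get_realm("example.com")
```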
Tim Abbott
51ed5028dc Remove unnecessary get_realm_name function. 2015-10-15 09:16:58 -04:00
Tim Abbott
419d31a007 Expand documentation for the LDAP auth integration.
Fixes #134, #173.
2015-10-15 09:16:58 -04:00
Tim Abbott
784ba7e066 Fix support for LDAP Authentication mechanism.
This addresses a few issues:
* The LDAP authentication integration now creates a new Zulip account
  if the user authenticated correctly but didn't already have one.
* The previous code didn't correctly disable the LDAP group
  permissions functionality.  We're not using the groups support from
  the Django LDAP extension, and leaving it enabled can cause errors
  when trying to fetch data from LDAP.

Huge thanks to @toaomatis for the initial implementation of this.

Fixes #72.
2015-10-15 09:16:58 -04:00
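A self-contained sketch of the first bullet (create the account on first successful LDAP login); Zulip's real backend, built on django-auth-ldap, is considerably more involved:

```python
LDAP_DIRECTORY = {"alice": "secret"}   # stand-in for the LDAP server
ZULIP_USERS = {}                       # stand-in for the Zulip database

def ldap_authenticate(username, password):
    if LDAP_DIRECTORY.get(username) != password:
        return None  # bad credentials
    if username not in ZULIP_USERS:
        # Authenticated correctly but no Zulip account yet: create one
        # instead of rejecting the login.
        ZULIP_USERS[username] = {"username": username}
    return ZULIP_USERS[username]

assert ldap_authenticate("alice", "secret") is not None
assert "alice" in ZULIP_USERS
```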
Tim Abbott
90e61d3b61 Call process_new_human_user consistently when creating new users.
Previously we only did this when new human users were created via the
login process, which meant the management command to create a user did
not add the user to default streams (for example), and any future code
that might want to register a new Zulip user (such as the LDAP
integration) would need to import views/__init__.py in order to
properly set this up.
2015-10-15 09:16:58 -04:00
Tim Abbott
355e1bbd94 Move process_new_human_user and helpers from views to actions.py. 2015-10-15 09:16:58 -04:00
Tim Abbott
792075ddeb sync_api_key: Don't throw a messy exception when user doesn't exist. 2015-10-15 09:16:58 -04:00
Tim Abbott
3e735d36d1 Rename tools/postgres-init-db to tools/postgres-init-dev-db.
The previous name was confusing because we also have
scripts/setup/postgres-init-db.
2015-10-15 09:14:21 -04:00
Allie Jones
99a2ba38b1 Expand new feature tutorial. 2015-10-15 09:12:22 -04:00
Tim Abbott
e8e38e911b Fix casperjs tests in Travis CI.
For reasons I don't understand, it appears that in Travis CI we're now
seeing errors using Casper that seem to correspond to a compatibility
issue introduced in PhantomJS 2, even though we're still using 1.9.8.

The solution for that compatibility issue -- patching casper's
bootstrap.js to get arguments from system.args at a slightly different
time than before -- seems to work in our setting as well, and that's
what this implements.

Probably the right long-term solution involves upgrading both
phantomjs and Casper to the latest versions.
2015-10-14 21:49:09 -04:00
Tim Abbott
eac6ea75dd emoji_dump: Exit with nonzero status when there are failures.
Previously, emoji_dump would happily exit successfully even if it
wasn't able to generate all the emoji.
2015-10-14 21:48:13 -04:00
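The shape of the fix, sketched with a hypothetical generate_emoji helper:

```python
import sys

def generate_emoji(name):
    # Hypothetical placeholder for the real PNG-generation step.
    pass

def dump_emoji(emoji_names):
    failed = 0
    for name in emoji_names:
        try:
            generate_emoji(name)
        except Exception as e:
            print("Failed to generate %s: %s" % (name, e))
            failed += 1
    # Exit nonzero if anything failed, instead of always exiting 0.
    sys.exit(1 if failed else 0)
```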
Tim Abbott
5ee50cdced Install libfreetype6-dev in the development environment.
This fixes a problem where the emoji_dump tool was not generating the
black-and-white emoji.  The issue is that Pillow compiled without
libfreetype cannot extract those emoji (and gives an error of the form
"The _imagingft C module is not installed"), and if libfreetype-dev
isn't installed, pip will happily build and install Pillow without
libfreetype.
2015-10-14 18:58:36 -04:00
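One way to check whether your Pillow build has freetype support is simply to try loading a truetype font; a build without libfreetype raises the ImportError quoted above. (The font path is an assumption; it's a common location on Debian/Ubuntu.)

```python
from PIL import ImageFont

try:
    ImageFont.truetype(
        "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", size=16)
    print("Pillow has freetype support")
except ImportError as e:
    # e.g. "The _imagingft C module is not installed"
    print("Pillow was built without freetype: %s" % (e,))
```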
Tim Abbott
1dc09f3abd Document need to restart run-dev.py when developing queue workers. 2015-10-14 16:11:04 -04:00
Tim Abbott
4309f92062 Pass worker errors through to the run-dev.py console. 2015-10-14 16:08:32 -04:00
Kara McNair
d72f75a7e1 email-mirror: Support missed message email token string format.
The do_send_missedmessage_events_reply_in_zulip function in the email
mirror didn't support an EMAIL_GATEWAY_PATTERN that wasn't of the form
%s@example.com (which resulted in replies to missed-message emails
failing to be parsed).
2015-10-14 16:02:15 -04:00
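The general fix can be sketched by escaping whatever surrounds the %s and compiling it into a regex that extracts the token (not Zulip's exact code):

```python
import re

def build_gateway_regex(email_gateway_pattern):
    # Works for any pattern containing a single %s, e.g.
    # "zulip+%s@example.com", not just "%s@example.com".
    prefix, suffix = email_gateway_pattern.split("%s")
    return re.compile(re.escape(prefix) + r"(?P<token>\S+)" + re.escape(suffix))

regex = build_gateway_regex("zulip+%s@example.com")
assert regex.match("zulip+abc123@example.com").group("token") == "abc123"
```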
Tim Abbott
0d85ab2062 README: Clean up Mac installation instructions for Virtualbox/Vagrant.
* Removes the hardcoding of an old version of Virtualbox (and doesn't
  specify the version to avoid getting stale again over time).
* Flips around the language to assume you don't have Vagrant already.
* Makes clear that the first-time installation is a lot slower than
  future runs will be.

Fixes #5.
2015-10-14 12:05:23 -04:00
Tim Abbott
c6761e8604 provision.py: Check whether git repository is present.
Fixes #148.
2015-10-14 10:18:59 -04:00
Tim Abbott
6569018de7 Disable apt caching in Travis CI configuration.
Apparently it isn't supposed to work reliably with the container-based
infrastructure that we're using and empirically it's causing build
failures.

Thanks to @mijime for tracking this down.
2015-10-14 10:04:27 -04:00
Andrew Drozdov
f6311478e6 Add docs on how to update default streams. 2015-10-13 11:09:37 -04:00
David Farrell
59dfec8f8b Add Fedora-specific instructions to the manual install guide. 2015-10-07 15:23:59 -04:00
Raphael
0608e32eeb Cause install to return 1 on failure.
This fixes issue #123. Namely, the script in scripts/setup/install was
returning 0 even when a step failed. Adding `set -e` and `set -o
pipefail` causes the install script to exit and return 1 if any part
fails, including piped output (`set -o pipefail` does this).
2015-10-07 08:46:16 -04:00
Darius Bacon
741e9d00d8 Install zulip_english.stop in 'By hand' install instructions.
This step was previously only present in provision.py.
2015-10-06 23:00:18 -04:00
Darren Worrall
77fad7a16e Add an api endpoint to fetch GOOGLE_CLIENT_ID
Further to #102, this provides an endpoint suitable for mobile apps to
consume the GOOGLE_CLIENT_ID if configured.
2015-10-06 23:28:08 +00:00
Raphael
ea65715ef8 manage.py: Give a nice error message if run as root on posix systems.
If the OS is POSIX, this checks whether the user is root and, if so,
alerts them to run the command as the zulip user.

Fixes #114.
2015-10-05 21:41:35 -04:00
David Farrell
e4cea98ccd Correct the filepath for rebuild test database script. 2015-10-05 21:28:54 -04:00
Luna Lunapiena
0ec99a0838 README: Add explicit instruction to clone Zulip repository. 2015-10-05 21:27:08 -04:00
David Farrell
411531ecaf Update script paths in by-hand instructions to execute as written. 2015-10-05 20:36:07 -04:00
Nicholas Bergson-Shilcock
759ab33981 Add note about installing Vagrant on Mac to README. 2015-10-05 20:34:27 -04:00
Justin Valentini
d490779307 Add a relative link to production README. 2015-10-04 16:28:36 +00:00
Luke Faraone
e014b68b84 Include license text in THIRDPARTY 2015-10-03 15:44:13 +00:00
Darren Worrall
14389145cd Make twitter settings validation test more explicit 2015-10-02 12:01:25 +01:00
Tim Abbott
a65656dd9d Fix backwards-compatibility for old python-requests .json property.
In b59b5cac35, we upgraded our Google
OAuth code to support the new python-requests, but because Ubuntu precise
still has old python-requests, this broke the codepath for older
systems.
2015-10-01 18:54:17 -07:00
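The compatibility shim amounts to checking whether .json is a method (new python-requests) or a property (old python-requests); a sketch:

```python
def extract_json(response):
    # New python-requests: response.json is a method, so call it.
    # Old python-requests: response.json is the already-parsed body.
    if callable(response.json):
        return response.json()
    return response.json
```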
Tim Abbott
8e0479e7a0 Clarify Zulip upgrade process instructions. 2015-10-01 12:56:18 -07:00
Tim Abbott
ea7e5527be README.prod.md: Expand documentation on filling in settings.py. 2015-10-01 12:42:36 -07:00
Tim Abbott
feca065dd8 README.prod.md: Improve instructions for downloading server tarball. 2015-10-01 12:42:35 -07:00
Liam Marshall
a822118dcb Fix a bunch of formatting issues in README.prod.md.
Removes:
* Several unused <hr>s

Fixes:
* Odd linebreaks
* Inconsistent headers
* URLs which should be links
* Headers which should be headers

Code-formats:
* envvars
* FQDNs
* commands and command options
* config options
* code
2015-10-01 12:42:16 -07:00
Allie Jones
cd1fa6a42e Fix notifications in Firefox by calling the constructor with 'new'. 2015-10-01 09:31:05 -07:00
Tim Abbott
ad75959b92 README.prod: Add documentation for how to create an SSL certificate. 2015-10-01 09:31:05 -07:00
mijime
2db2fcea18 Fix a puppet config to use SSO.
puppet::enterprise was renamed to puppet::voyager.
2015-10-01 09:31:05 -07:00
Darren Worrall
8b002040e0 Correct twitter library in requirements.
This also requires updating the required version of oauthlib; previously an
appropriate version was being installed only because it was a dependency of
the wrong twitter library.

This only affects development environments and/or hand-built
installations relying on the contents of requirements.txt.

To fix existing environments, the incorrect api needs to be explicitly
removed with `pip uninstall twitter`.

Fixes #86.
2015-09-30 13:49:33 -07:00
Ged Lawrenson
21b7048e54 install: Verify that the script has sufficient privileges. 2015-09-30 10:55:49 -07:00
Darren Worrall
bec3c0943a Fix validation that twitter creds are present.
They are looked up as secrets which initialize to `None`, but the code
was checking for empty strings.

This, along with #80, fixes #81.
2015-09-30 09:27:37 -07:00
Jason Michalski
7352f31c4b Add documentation for the pagerduty integrations
Adds pagerduty to the list of supported integrations and walks users
through the setup process.

Fixes #36
2015-09-30 09:24:00 -07:00
Jason Michalski
dafe69761e Use stock emoji in the pagerduty integration
The pagerduty integration was using realm emoji. Use stock replacements
in the open source release.
2015-09-30 09:23:59 -07:00
Guillaume Simon
956fd7c420 puppet: Ensure rabbitmq-server and epmd services are running.
[tabbott@mit.edu: Added a few comments]
2015-09-30 09:21:45 -07:00
Tim Abbott
f819c1e901 Update the Zulip development documentation.
Fixes a few major issues:
* Documents RAM requirements for running Zulip development
* Fixes missing steps in the "by hand" installation process
* Improves the emphasis in the section on how to run tests, focusing on the common case.
* Documents that you can use LXC on newer Ubuntu as well.
2015-09-30 09:04:16 -07:00
Tim Abbott
3b00029c52 Show the username/password form if ZulipLDAPAuthBackend is enabled. 2015-09-30 09:04:16 -07:00
Tim Abbott
1482a386c2 Fix documentation for how to enable ZulipLDAPAuthBackend. 2015-09-30 09:04:16 -07:00
Tim Abbott
92aebe595b Dramatically extend post-install documentation for production Zulip. 2015-09-30 09:04:14 -07:00
Tim Abbott
5ad84fd997 Improve documentation for the Zulip email integration.
* Document fix for the 'less insecure' email problem.
* Mention that general Django email documentation applies.
2015-09-29 18:58:27 -07:00
Tim Abbott
40ec59b93e install: Add nice error message for RabbitMQ not having started. 2015-09-29 18:41:31 -07:00
Tim Abbott
5bf66e04fc initialize-database: Print nice instructions for how to redo if fails.
Most of our installation process is idempotent, but this step in
particular is not, so it's important to provide a clear error message
about how to proceed.
2015-09-29 18:27:27 -07:00
Tim Abbott
3efdb7ebf3 Document how to setup the Zulip S3 integration. 2015-09-29 18:11:58 -07:00
Tim Abbott
80fa5006f8 Document the purpose of local_settings.py properly. 2015-09-29 18:05:04 -07:00
Tim Abbott
bda9d78092 Use settings.ZULIP_ADMINISTRATOR as contact list for deactivated users. 2015-09-29 17:59:47 -07:00
Waseem Daher
6bb9b129f7 Update Zulip support email to zulip-devel@googlegroups.com.
Ideally some of these templates should really point to the
local installation's support email address, but this is a
good start.

Exceptions:
* Where to report security incidents
* MIT Zephyr-related pages
* zulip.com terms and conditions
2015-09-29 17:59:47 -07:00
Thomas Butter
d93d4c7216 Fix settings documentation of twitter keys.
Twitter keys are stored in zulip-secrets.conf.
2015-09-29 17:45:05 -07:00
Tim Abbott
852ac66f8e Extend the Google oauth documentation in local_server_template.py. 2015-09-28 10:05:58 -07:00
Amanpreet Singh
e20bc9f9b3 Fix "by hand" installation instructions.
- Add missing `python-dev` in apt-get install command
2015-09-28 09:24:55 -07:00
Tim Abbott
1f2f497cab Unrevert run Zulip tests automatically using Travis CI.
This contains a fix written by nemeth from PR #63 for doing argument
parsing properly.
2015-09-28 09:18:51 -07:00
Luke Faraone
578f769f60 Revert "Run Zulip tests automatically using Travis CI."
Improper list access from `sys.argv` would result in an exception if no
arguments are passed.

This reverts commit d2f5937d89.
2015-09-28 14:33:13 +00:00
Ian Whitlock
54fd321941 Add Vagrant-caused permissions problem to Possible Issues. 2015-09-27 17:48:07 -07:00
Tim Abbott
b6c1f1d162 Fix incorrect name for email_password secret in settings template.
Fixes #49.
2015-09-27 17:06:03 -07:00
Tim Abbott
d2f5937d89 Run Zulip tests automatically using Travis CI.
This is a bit hackish in that ideally we'd use proper options parsing
in provision.py, but it works, and I even ran the tests 100x to check
for flakes and didn't get any, so it's definitely an improvement!

With this we'll be both testing the runtime and effectively the Dev VM
setup process, which is awesome; the additional thing I'd want to add
tests for is the production setup process...
2015-09-27 16:29:20 -07:00
Caleb Anderson
ed742fa847 small typo fix 2015-09-27 01:10:01 -06:00
Tim Abbott
a625ca49ec puppet: Move /var/lib/nagios_state creation to zulip::base.pp.
Previously, in Zulip voyager, the cron jobs would spew error emails
every time they ran, due to this directory not existing.

This also tightens the permissions for the folder and avoids needing
to create a nagios user for Zulip voyager; it should be writeable by
both root and the zulip user and world-readable (and thus readable by
the Nagios user on zulip.com systems).
2015-09-26 21:44:23 -07:00
Tim Abbott
96bd1c38dc install: Make sure python is installed before using it.
This is relevant for completely bare Ubuntu systems which might only
have python3 installed.

Fixes #40.
2015-09-26 21:34:36 -07:00
Tim Abbott
9748780192 Remove unnecessary puppet.conf configuration.
Fixes #23.
2015-09-26 21:34:19 -07:00
Tim Abbott
bc3f096918 Update redis config to be supported on Trusty.
Previously our redis config was built for precise.

Synced from redis-server 2:2.8.4-2, plus our one change (disabling
saving to disk), which is placed at the bottom for maximum obviousness.

I wish there were a better way to represent the fact that this is all
we're doing, since this will make life more difficult for running on
precise as well.

Fixes #28.
2015-09-26 21:33:55 -07:00
Tim Abbott
af4aac6836 settings: Document SMTP firewall issues in email configuration. 2015-09-26 21:32:47 -07:00
80 changed files with 2019 additions and 682 deletions

.travis.yml (new file)

@@ -0,0 +1,16 @@
install:
- pip install pbs
- python provision.py --travis
cache:
- apt: false
language: python
python:
- "2.7"
# command to run tests
script:
- source /srv/zulip-venv/bin/activate && env PATH=$PATH:/srv/zulip-venv/bin ./tools/test-all
sudo: required
services:
- docker
addons:
postgresql: "9.3"

README.md

@@ -42,25 +42,37 @@ Please report any security issues you discover to support@zulip.com.
Running Zulip in production
===========================
This is documented in https://zulip.org/server.html and README.prod.md.
This is documented in https://zulip.org/server.html and [README.prod.md](README.prod.md).
Installing the Zulip Development environment
============================================
You will need a machine with at least 2GB of RAM available (see
https://github.com/zulip/zulip/issues/32 for a plan for how to
dramatically reduce this requirement).
Start by cloning this repository: `git clone https://github.com/zulip/zulip.git`
Using Vagrant
-------------
This is the recommended approach, and is tested on OS X 10.10 as well as Ubuntu 14.04.
* If your host is OS X, download VirtualBox from
<http://download.virtualbox.org/virtualbox/4.3.30/VirtualBox-4.3.30-101610-OSX.dmg>
and install it.
* If your host is Ubuntu 14.04:
* The best performing way to run the Zulip development environment is
using an LXC container. If your host is Ubuntu 14.04 (or newer;
what matters is having support for LXC containers), you'll want to
install and configure the LXC Vagrant provider like this:
`sudo apt-get install vagrant lxc lxc-templates cgroup-lite redir && vagrant plugin install vagrant-lxc`
* If your host is OS X, [download VirtualBox](https://www.virtualbox.org/wiki/Downloads),
[download Vagrant](https://www.vagrantup.com/downloads.html), and install them both.
Once that's done, simply change to your zulip directory and run
`vagrant up` in your terminal. That will install the development
server inside a Vagrant guest.
`vagrant up` in your terminal to install the development server. This
will take a long time on the first run because Vagrant needs to
download the Ubuntu Trusty base image, but later you can run `vagrant
destroy` and then `vagrant up` again to rebuild the environment and it
will be much faster.
Once that finishes, you can run the development server as follows:
@@ -85,9 +97,32 @@ development server encounters. It runs on top of Django's "manage.py
runserver" tool, which will automatically restart the Zulip server
whenever you save changes to Python code.
However, the Zulip queue workers will not automatically restart when
you save changes, so you will need to ctrl-C and then restart
`run-dev.py` manually if you are testing changes to the queue workers
or if a queue worker has crashed.
Using provision.py without Vagrant
----------------------------------
If you'd like to install a Zulip development environment on a server
that's already running Ubuntu 14.04 Trusty, you can do that by just
running:
```
sudo apt-get update
sudo apt-get install -y python-pbs
python /srv/zulip/provision.py
cd /srv/zulip
source /srv/zulip-venv/bin/activate
./tools/run-dev.py
```
By hand
-------
If you really want to install everything by hand, the below
instructions should work.
Install the following non-Python dependencies:
* libffi-dev — needed for some Python extensions
@@ -98,11 +133,12 @@ Install the following non-Python dependencies:
* python-dev
* redis-server — rate limiting
* tsearch-extras — better text search
* libfreetype6-dev - needed before you pip install Pillow to properly generate emoji PNGs
On Debian or Ubuntu systems:
```
sudo apt-get install libffi-dev memcached rabbitmq-server libldap2-dev redis-server postgresql-server-dev-all libmemcached-dev
sudo apt-get install libffi-dev memcached rabbitmq-server libldap2-dev python-dev redis-server postgresql-server-dev-all libmemcached-dev libfreetype6-dev
# If on 12.04 or wheezy:
sudo apt-get install postgresql-9.1
@@ -118,13 +154,49 @@ sudo dpkg -i postgresql-9.3-tsearch-extras_0.1.2_amd64.deb
sudo apt-get install postgresql-9.4
wget https://dl.dropboxusercontent.com/u/283158365/zuliposs/postgresql-9.4-tsearch-extras_0.1_amd64.deb
sudo dpkg -i postgresql-9.4-tsearch-extras_0.1_amd64.deb
```
# Then, all versions:
Now continue with the "All systems" instructions below.
On Fedora 22 (experimental):
```
sudo dnf install libffi-devel memcached rabbitmq-server openldap-devel python-devel redis postgresql-server postgresql-devel postgresql libmemcached-devel freetype-devel
wget https://launchpad.net/~tabbott/+archive/ubuntu/zulip/+files/tsearch-extras_0.1.3.tar.gz
tar xvzf tsearch-extras_0.1.3.tar.gz
cd ts2
make
sudo make install
# Hack around missing dictionary files -- need to fix this to get
# the proper dictionaries from what in debian is the hunspell-en-us package.
sudo touch /usr/share/pgsql/tsearch_data/english.stop
sudo touch /usr/share/pgsql/tsearch_data/en_us.dict
sudo touch /usr/share/pgsql/tsearch_data/en_us.affix
# Edit the postgres settings:
sudo vi /var/lib/pgsql/data/pg_hba.conf
# Add this line before the first uncommented line to enable password auth:
host all all 127.0.0.1/32 md5
# Start the services
sudo systemctl start redis memcached rabbitmq-server postgresql
```
All Systems:
```
pip install -r requirements.txt
./scripts/setup/configure-rabbitmq
./tools/postgres-init-db
./tools/do-destroy-rebuild-database
./tools/download-zxcvbn
./tools/emoji_dump/build_emoji
./scripts/setup/generate_secrets.py -d
sudo cp ./puppet/zulip/files/postgresql/zulip_english.stop /usr/share/postgresql/9.3/tsearch_data/
./scripts/setup/configure-rabbitmq
./tools/postgres-init-dev-db
./tools/do-destroy-rebuild-database
./tools/postgres-init-test-db
./tools/do-destroy-rebuild-test-database
```
To start the development server:
@@ -133,19 +205,12 @@ To start the development server:
./tools/run-dev.py
```
… and hit http://localhost:9991/.
… and visit [http://localhost:9991/](http://localhost:9991/).
Running the test suite
======================
One-time setup of test databases:
```
./tools/postgres-init-test-db
./tools/do-destroy-rebuild-test-database
```
Run all tests:
```
@@ -161,22 +226,37 @@ individual tests, e.g.:
./tools/test-js-with-casper 10-navigation.js
```
The above instructions include the first-time setup of test databases,
but you may need to rebuild the test database occasionally if you're
working on new database migrations. To do this, run:
```
./tools/postgres-init-test-db
./tools/do-destroy-rebuild-test-database
```
Possible testing issues
=======================
The Casper tests are flaky on the Virtualbox environment (probably due
to some performance-sensitive races). Until this issue is debugged,
you may need to rerun them to get them to pass.
- The Casper tests are flaky on the Virtualbox environment (probably
due to some performance-sensitive races; they work reliably in
Travis CI). Until this issue is debugged, you may need to rerun
them to get them to pass.
When running the test suite, if you get an error like this:
- When running the test suite, if you get an error like this:
```
sqlalchemy.exc.ProgrammingError: (ProgrammingError) function ts_match_locs_array(unknown, text, tsquery) does not exist
LINE 2: ...ECT message_id, flags, subject, rendered_content, ts_match_l...
^
```
```
sqlalchemy.exc.ProgrammingError: (ProgrammingError) function ts_match_locs_array(unknown, text, tsquery) does not exist
LINE 2: ...ECT message_id, flags, subject, rendered_content, ts_match_l...
^
```
… then you need to install tsearch-extras, described above. Afterwards, re-run the `init*-db` and the `do-destroy-rebuild*-database` scripts.
… then you need to install tsearch-extras, described
above. Afterwards, re-run the `init*-db` and the
`do-destroy-rebuild*-database` scripts.
- When building the development environment using Vagrant and the LXC provider, if you encounter permissions errors, you may need to `chown -R 1000:$(whoami) /path/to/zulip` on the host before running `vagrant up` in order to ensure that the synced directory has the correct owner during provision. This issue will arise if you run `id username` on the host where `username` is the user running Vagrant and the output is anything but 1000.
This seems to be caused by Vagrant behavior; more information can be found in the vagrant-lxc FAQ: https://github.com/fgrehm/vagrant-lxc/wiki/FAQ#help-my-shared-folders-have-the-wrong-owner
License
=======

README.prod.md

@@ -1,62 +1,342 @@
Zulip in production
===================
This documents the process for installing Zulip in a production environment.
Note that if you just want to play around with Zulip and see what it
looks like, it is easier to install it in a development environment
following the instructions in README.dev, since then you don't need to
worry about setting up SSL certificates and an authentication mechanism.
Recommended requirements:
* Server running Ubuntu Precise or Debian Wheezy
* At least 2 CPUs for production use
* At least 4GB of RAM for production use
* At least 100GB of free disk for production use
* HTTP(S) access to the public Internet (for some features;
discuss with Zulip Support if this is an issue for you)
* At least 2 CPUs for production use with 100+ users
* At least 4GB of RAM for production use with 100+ users. We strongly
recommend against installing with less than 2GB of RAM, as you will
likely experience OOM issues. In the future we expect Zulip's RAM
requirements to decrease to support smaller installations (see
https://github.com/zulip/zulip/issues/32).
* At least 10GB of free disk for production use (more may be required
if you intend to store uploaded files locally rather than in S3
and your team uses that feature extensively)
* Outgoing HTTP(S) access to the public Internet.
* SSL Certificate for the host you're putting this on
(e.g. https://zulip.example.com)
* Email credentials for the service to send outgoing emails to users
(e.g. missed message notifications, password reminders if you're not
using SSO, etc.).
(e.g. zulip.example.com). If you just want to see what
Zulip looks like, we recommend installing the development
environment detailed in README.md as that is easier to setup.
* Email credentials Zulip can use to send outgoing emails to users
(e.g. email address confirmation emails during the signup process,
missed message notifications, password reminders if you're not using
SSO, etc.).
=======================================================================
How to install Zulip in production:
Installing Zulip in production
==============================
These instructions should be followed as root.
(1) Install the SSL certificates for your machine to
/etc/ssl/private/zulip.key
and
/etc/ssl/certs/zulip.combined-chain.crt
`/etc/ssl/private/zulip.key` and `/etc/ssl/certs/zulip.combined-chain.crt`.
If you don't know how to generate an SSL certificate, you can
do the following to generate a self-signed certificate:
(2) download zulip-server.tar.gz and unpack it to /root/zulip, e.g.
tar -xf zulip-server-1.1.3.tar.gz
mv zulip-server-1.1.3 /root/zulip
```
apt-get install openssl
openssl genrsa -des3 -passout pass:x -out server.pass.key 4096
openssl rsa -passin pass:x -in server.pass.key -out zulip.key
rm server.pass.key
openssl req -new -key zulip.key -out server.csr
openssl x509 -req -days 365 -in server.csr -signkey zulip.key -out zulip.combined-chain.crt
rm server.csr
cp zulip.key /etc/ssl/private/zulip.key
cp zulip.combined-chain.crt /etc/ssl/certs/zulip.combined-chain.crt
```
(3) run /root/zulip/scripts/setup/install
You will eventually want to get a properly signed certificate (and
note that at present the Zulip desktop app doesn't support
self-signed certificates), but this will let you finish the
installation process.
This may take a while to run, since it will install a large number of
packages via apt.
(2) Download [the latest built server tarball](https://www.zulip.com/dist/releases/zulip-server-latest.tar.gz)
and unpack it to `/root/zulip`, e.g.
```
wget https://www.zulip.com/dist/releases/zulip-server-latest.tar.gz
tar -xf zulip-server-latest.tar.gz
mv zulip-server-1.3.6 /root/zulip
```
(3) Run
```
/root/zulip/scripts/setup/install
```
This may take a while to run, since it will install a large number of
packages via apt.
(4) Configure the Zulip server instance by filling in the settings in
/etc/zulip/settings.py
`/etc/zulip/settings.py`. Be sure to fill in all the mandatory
settings, enable at least one authentication mechanism, and do the
configuration required for that authentication mechanism to work.
See the section on "Authentication" below for more detail on
configuring authentication mechanisms.
(5) su zulip -c /home/zulip/deployments/current/scripts/setup/initialize-database
(5) Run
```
su zulip -c /home/zulip/deployments/current/scripts/setup/initialize-database
```
This will report an error if you did not fill in all the mandatory
settings from `/etc/zulip/settings.py`. Once this completes
successfully, the main installation process will be complete, and if
you are planning on using password authentication, you should be able
to visit the URL for your server and register for an account.
This will report an error if you did not fill in all the mandatory
settings from /etc/zulip/settings.py. Once this completes
successfully, the main installation process will be complete, and if
you are planning on using password authentication, you should be able
to visit the URL for your server and register for an account.
(6) Subscribe to [the Zulip announcements Google Group](https://groups.google.com/forum/#!forum/zulip-announce)
to get announcements about new releases, security issues, etc.
(6) Subscribe to
https://groups.google.com/forum/#!forum/zulip-announce to get
announcements about new releases, security issues, etc.
=======================================================================
Authentication and logging into Zulip the first time
====================================================
Maintaining Zulip in production:
(As you read and follow the instructions in this section, if you run
into trouble, check out the troubleshooting advice in the next major
section.)
Once you've finished installing Zulip, configuring your settings.py
file, and initializing the database, it's time to log in to your new
installation. By default, initialize-database creates one realm that
you can join, the `ADMIN_DOMAIN` realm (defined in
`/etc/zulip/settings.py`).
The `ADMIN_DOMAIN` realm is by default configured with the following settings:
* `restricted_to_domain=True`: Only people with emails ending with @ADMIN_DOMAIN can join.
* `invite_required=False`: An invitation is not required to join the realm.
* `invite_by_admin_only=False`: You don't need to be an admin user to invite other users.
* `mandatory_topics=False`: Users are not required to specify a topic when sending messages.
If you would like to change these settings, you can do so using the
following process as the zulip user:
```
cd /home/zulip/deployments/current
./manage.py shell
from zerver.models import *
r = get_realm(settings.ADMIN_DOMAIN)
r.restricted_to_domain=False # Now anyone anywhere can login
r.save() # save to the database
```
If you realize you set `ADMIN_DOMAIN` wrong, in addition to fixing the
value in settings.py, you will also want to do a similar manage.py
process to set `r.domain = newexample.com`.
Depending what authentication backend you're planning to use, you will
need to do some additional setup documented in the `settings.py` template:
* For Google authentication, you need to follow the configuration
instructions around `GOOGLE_OAUTH2_CLIENT_ID` and `GOOGLE_CLIENT_ID`.
* For Email authentication, you will need to follow the configuration
instructions around outgoing SMTP from Django.
You should be able to log in now. If you get an error, check
`/var/log/zulip/errors.log` for a traceback, and consult the next
section for advice on how to debug. If you aren't able to figure it
out, email zulip-devel@googlegroups.com with the traceback and we'll
try to help you out!
You will likely want to make your own user account an admin user,
which you can do via the following management command:
```
./manage.py knight username@example.com -f
```
Now that you are an administrator, you will have a special
"Administration" tab linked to from the upper-right gear menu in the
Zulip app that lets you deactivate other users, manage streams, change
the Realm settings you may have edited using manage.py shell above,
etc.
You can also use `manage.py knight` with the
`--permission=api_super_user` argument to create API super users,
which are needed to mirror messages to streams from other users for
the IRC and Jabber mirroring integrations (see
`bots/irc-mirror.py` and `bots/jabber_mirror.py` for some detail on these).
There are a large number of useful management commands under
`zerver/management/commands/`; you can also see them listed using
`./manage.py` with no arguments.
One such command worth highlighting because it's a valuable feature
with no UI in the Administration page is `./manage.py realm_filters`,
which allows you to configure certain patterns in messages to be
automatically linkified, e.g. whenever someone mentions "T1234" it
could be auto-linkified to ticket 1234 in your team's Trac instance.
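As an illustration of the linkification idea only (this is not the realm_filters command itself, and the Trac URL is made up):

```python
import re

pattern = r"T(?P<id>[0-9]+)"
url_template = r'<a href="https://trac.example.com/ticket/\g<id>">T\g<id></a>'

print(re.sub(pattern, url_template, "Deployed the fix for T1234 today"))
```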
Checking Zulip is healthy and debugging the services it depends on
==================================================================
You can check if the zulip application is running using:
```
supervisorctl status
```
You can also check for errors in the Zulip error logs under
`/var/log/zulip/`, which contains one log file for each service, plus
`errors.log` (all errors), `server.log` (logs from the Django and
Tornado servers), and `workers.log` (combined logs from the queue
workers).
After you change configuration in `/etc/zulip/settings.py` or fix a
misconfiguration, you will often want to restart the Zulip application.
You can restart Zulip using:
```
supervisorctl restart all
```
Similarly, you can stop Zulip using:
```
supervisorctl stop all
```
The Zulip application uses several major services to store and cache
data, queue messages, and otherwise support the Zulip application:
* postgresql
* rabbitmq-server
* nginx
* redis
* memcached
If one of these services is not installed or functioning correctly,
Zulip will not work. Below we detail some common configuration
problems and how to resolve them:
* An AMQPConnectionError traceback or error running rabbitmqctl
usually means that RabbitMQ is not running; to fix this, try:
```
service rabbitmq-server restart
```
If RabbitMQ fails to start, the problem is often that you are using
a virtual machine with broken DNS configuration; you can often
correct this by configuring `/etc/hosts` properly.
* If your browser reports no webserver is running, that is likely
because nginx is not configured properly and thus failed to start.
nginx will fail to start if you configured SSL incorrectly or did
not provide SSL certificates. To fix this, configure them properly
and then run:
```
service nginx restart
```
If you run into additional problems, [please report them](https://github.com/zulip/zulip/issues) so that we can
update these lists!
Making your Zulip instance awesome
==================================
Once you've got Zulip set up, you'll likely want to configure it the
way you like. There are a few big things to focus on:
(1) Integrations. We recommend setting up integrations for the major
tools that your team works with. For example, if you're a software
development team, you may want to start with integrations for your
version control, issue tracker, CI system, and monitoring tools.
Spend time configuring these integrations to be how you like them --
if an integration is spammy, you may want to change it to not send
messages that nobody cares about (e.g. for the zulip.com trac
integration, some teams find they only want notifications when new
tickets are opened, commented on, or closed, and not every time
someone edits the metadata).
If Zulip doesn't have an integration you want, you can add your own!
Most integrations are very easy to write, and even more complex
integrations usually take less than a day's work to build. We very
much appreciate contributions of new integrations; there is a brief
draft integration writing guide [here](https://github.com/zulip/zulip/issues/70).
It can often be valuable to integrate your own internal processes to
send notifications into Zulip; e.g. notifications of new customer
signups, new error reports, or daily reports on the team's key
metrics; this can often spawn discussions in response to the data.
(2) Streams and Topics. If it feels like a stream has too much
traffic about a topic only of interest to some of the subscribers,
consider adding or renaming streams until you feel like your team is
working productively.
Second, most users are not used to topics. It can require a bit of
time for everyone to get used to topics and start benefitting from
them, but usually once a team is using them well, everyone ends up
enthusiastic about how much topics make life easier. Some tips on
using topics:
* When replying to an existing conversation thread, just click on the
message, or navigate to it with the arrow keys and hit "r" or
"enter" to reply on the same topic
* When you start a new conversation topic, even if it's related to the
previous conversation, type a new topic in the compose box
* You can edit topics to fix a thread that's already been started,
which can be helpful when onboarding new batches of users to the platform.
Third, setting default streams for new users is a great way to get
new users involved in conversations before they've accustomed
themselves to joining streams on their own. You can use the
[`set_default_streams`](https://github.com/zulip/zulip/blob/master/zerver/management/commands/set_default_streams.py)
command to set default streams for users within a realm:
```
python manage.py set_default_streams --domain=example.com --streams=foo,bar,...
```
(3) Notification settings. Zulip gives you a great deal of control
over which messages trigger desktop notifications; you can configure
these extensively in the `/#settings` page (get there from the gear
menu). If you find the desktop notifications annoying, consider
changing the settings to only trigger desktop notifications when you
receive a PM or are @-mentioned.
(4) The mobile and desktop apps. Currently, the Zulip Desktop app
only supports talking to servers with a properly signed SSL
certificate, so you may find that you get a blank screen when you
connect to a Zulip server using a self-signed certificate.
The Zulip iOS and Android apps in their respective stores don't yet
support talking to non-zulip.com servers; the iOS app is waiting on
Apple's app store review, while the Android app is waiting on someone
to do the small project of adding a field to specify what Zulip server
to talk to.
These issues will likely all be addressed in the coming weeks; make
sure to join the zulip-announce@googlegroups.com list so that you can
receive the announcements when these become available.
(5) All the other features: hotkeys, emoji, search filters,
@-mentions, etc. Zulip has lots of great features; make sure your
team knows they exist and how to use them effectively.
(6) Enjoy your Zulip installation! If you discover things that you
wish had been documented, please contribute documentation suggestions
either via a GitHub issue or pull request; we love even small
contributions, and we'd love to make the Zulip documentation cover
everything anyone might want to know about running Zulip in
production.
Maintaining Zulip in production
===============================
* To upgrade to a new version, download the appropriate release
tarball from https://www.zulip.org, and then run as root
/home/zulip/deployments/current/scripts/upgrade-zulip <tarball>
tarball from https://www.zulip.com/dist/releases/ to a path readable
by the zulip user (e.g. /home/zulip), and then run as root:
```
/home/zulip/deployments/current/scripts/upgrade-zulip zulip-server-VERSION.tar.gz
```
The upgrade process will shut down the service, run `apt-get
upgrade` and any database migrations, and then bring the service
@@ -65,73 +345,72 @@ Maintaining Zulip in production:
transition involved. Unless you have tested the upgrade in advance,
we recommend doing upgrades at off hours.
You can create your own release tarballs from a copy of this
You can create your own release tarballs from a copy of zulip.git
repository using `tools/build-release-tarball`.
* To update your settings, simply edit /etc/zulip/settings.py and then
run /home/zulip/deployments/current/scripts/restart-server to
* To update your settings, simply edit `/etc/zulip/settings.py` and then
run `/home/zulip/deployments/current/scripts/restart-server` to
restart the server
* You are responsible for running "apt-get upgrade" on your system on
* You are responsible for running `apt-get upgrade` on your system on
a regular basis to ensure that it is up to date with the latest
security patches.
* To use the Zulip API with your Zulip server, you will need to use the
API endpoint of e.g. "https://zulip.yourdomain.net/api". Our Python
API endpoint of e.g. `https://zulip.yourdomain.net/api`. Our Python
API example scripts support this via the
"--site=https://zulip.yourdomain.net" argument. The API bindings
support it via putting "site=https://zulip.yourdomain.net" in your
`--site=https://zulip.yourdomain.net` argument. The API bindings
support it via putting `site=https://zulip.yourdomain.net` in your
.zuliprc.
Every Zulip integration supports this sort of argument (or e.g. a
`ZULIP_SITE` variable in a zuliprc file or the environment), but this
is not yet documented for some of the integrations (the included
integration documentation on `/integrations` will properly document
how to do this for most integrations). Pull requests welcome to
document this for those integrations that don't discuss this!
* Similarly, you will need to instruct your users to specify the URL
for your Zulip server when using the Zulip desktop and mobile apps.
* As a measure to mitigate the impact of potential memory leaks in one
of the Zulip daemons, the service automatically restarts itself
every Sunday early morning. See /etc/cron.d/restart-zulip for the
every Sunday early morning. See `/etc/cron.d/restart-zulip` for the
precise configuration.
=======================================================================
SSO Authentication:
SSO Authentication
==================
Zulip supports integrating with a corporate Single-Sign-On solution.
There are a few ways to do it, but this section documents how to
configure Zulip to use an SSO solution that best supports Apache and
will set the REMOTE_USER variable:
will set the `REMOTE_USER` variable:
(0) Check that /etc/zulip/settings.py has
"zproject.backends.ZulipRemoteUserBackend" as the only enabled value
in the "AUTHENTICATION_BACKENDS" list, and that "SSO_APPEND_DOMAIN" is
(0) Check that `/etc/zulip/settings.py` has
`zproject.backends.ZulipRemoteUserBackend` as the only enabled value
in the `AUTHENTICATION_BACKENDS` list, and that `SSO_APPEND_DOMAIN` is
correctly set depending on whether your SSO system uses email addresses
or just usernames in REMOTE_USER.
or just usernames in `REMOTE_USER`.
Make sure that you've restarted the Zulip server since making this
configuration change.
(1) Edit /etc/zulip/zulip.conf and change the puppet_classes line to read:
(1) Edit `/etc/zulip/zulip.conf` and change the `puppet_classes` line to read:
puppet_classes = zulip::enterprise, zulip::apache_sso
(2) As root, run
/home/zulip/deployments/current/scripts/zulip-puppet-apply
```
puppet_classes = zulip::voyager, zulip::apache_sso
```
(2) As root, run `/home/zulip/deployments/current/scripts/zulip-puppet-apply`
to install our SSO integration.
(3) To configure our SSO integration, edit
/etc/apache2/sites-available/zulip-sso.example and fill in the
configuration required for your SSO service to set REMOTE_USER and
place your completed configuration file at
`/etc/apache2/sites-available/zulip-sso.example` and fill in the
configuration required for your SSO service to set `REMOTE_USER` and
place your completed configuration file at `/etc/apache2/sites-available/zulip-sso`
/etc/apache2/sites-available/zulip-sso
(4) Run `a2ensite zulip-sso` to enable the Apache integration site.
(4) Run
a2ensite zulip-sso
To enable the Apache integration site.
Now you should be able to visit https://zulip.yourdomain.net/ and
Now you should be able to visit `https://zulip.yourdomain.net/` and
log in via the SSO solution.

THIRDPARTY

@@ -2,7 +2,7 @@ Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: Zulip
Upstream-Contact: Zulip Development Discussion <zulip-devel@googlegroups.com>
Source: https://zulip.org/
Comment:
Unless otherwise noted, the Zulip software is distributed under the Apache
License, Version 2.0. The software includes some works released by third
parties under other free and open source licenses. Those works are
@@ -55,14 +55,14 @@ Comment: https://github.com/DavidS/puppet-common
Files: puppet/stdlib/*
Copyright: 2011, Krzysztof Wilczynski
2011, Puppet Labs Inc
License: Apache-2.0
File: puppet/zulip_internal/files/mediawiki/Auth_remoteuser.php
Copyright: 2006 Otheus Shelling
2007 Rusty Burchfield
2009 James Kinsman
2010 Daniel Thomas
2010 Ian Ward Comfort
License: GPL-2.0
Comment: Not linked.
@@ -205,7 +205,7 @@ License: SIL-OFL-1.1
Files: static/third/spectrum/*
Copyright: 2013 Brian Grinstead
License: Expat
Files: static/third/spin/spin.js
Copyright: 2011-2013 Felix Gnass
@@ -226,7 +226,7 @@ Copyright: 2010 C. F., Wong
License: Expat
Files: static/third/zocial/*
Copyright: Sam Collins
License: Expat
Files: tools/inject-messages/othello
@@ -262,3 +262,317 @@ License: Expat
Files: zerver/tests/frontend/casperjs/modules/vendors/*
Copyright: 2011, Jeremy Ashkenas
License: Expat
License: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
http://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
On Debian systems, the full text of the Apache License version 2 can
be found in /usr/share/common-licenses/Apache-2.0.
License: BSD-2-clause
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice(s), this list of conditions and the following disclaimer
unmodified other than the allowable addition of one or more
copyright notices.
2. Redistributions in binary form must reproduce the above copyright
notice(s), this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
License: BSD-3-Clause
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
.
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
License: CC-0-1.0
Creative Commons CC0 1.0 Universal
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION
ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE
USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER, AND
DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM THE USE OF THIS DOCUMENT
OR THE INFORMATION OR WORKS PROVIDED HEREUNDER.
.
Statement of Purpose
.
The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator
and subsequent owner(s) (each and all, an "owner") of an original work
of authorship and/or a database (each, a "Work").
.
Certain owners wish to permanently relinquish those rights to a Work for
the purpose of contributing to a commons of creative, cultural and
scientific works ("Commons") that the public can reliably and without
fear of later claims of infringement build upon, modify, incorporate in
other works, reuse and redistribute as freely as possible in any form
whatsoever and for any purposes, including without limitation commercial
purposes. These owners may contribute to the Commons to promote the
ideal of a free culture and the further production of creative, cultural
and scientific works, or to gain reputation or greater distribution for
their Work in part through the use and efforts of others.
.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he or
she is an owner of Copyright and Related Rights in the Work, voluntarily
elects to apply CC0 to the Work and publicly distribute the Work under
its terms, with knowledge of his or her Copyright and Related Rights in
the Work and the meaning and intended legal effect of CC0 on those
rights.
.
1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not
limited to, the following:
.
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
.
ii. moral rights retained by the original author(s) and/or performer(s);
.
iii. publicity and privacy rights pertaining to a person's image or
likeness depicted in a Work;
.
iv. rights protecting against unfair competition in regards to a Work,
subject to the limitations in paragraph 4(a), below;
.
v. rights protecting the extraction, dissemination, use and reuse of
data in a Work;
.
vi. database rights (such as those arising under Directive 96/9/EC of
the European Parliament and of the Council of 11 March 1996 on the legal
protection of databases, and under any national implementation thereof,
including any amended or successor version of such directive); and
.
vii. other similar, equivalent or corresponding rights throughout the
world based on applicable law or treaty, and any national
implementations thereof.
.
2. Waiver. To the greatest extent permitted by, but not in contravention
of, applicable law, Affirmer hereby overtly, fully, permanently,
irrevocably and unconditionally waives, abandons, and surrenders all of
Affirmer's Copyright and Related Rights and associated claims and causes
of action, whether now known or unknown (including existing as well as
future claims and causes of action), in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or
future medium and for any number of copies, and (iv) for any purpose
whatsoever, including without limitation commercial, advertising or
promotional purposes (the "Waiver"). Affirmer makes the Waiver for the
benefit of each member of the public at large and to the detriment of
Affirmer's heirs and successors, fully intending that such Waiver shall
not be subject to revocation, rescission, cancellation, termination, or
any other legal or equitable action to disrupt the quiet enjoyment of
the Work by the public as contemplated by Affirmer's express Statement
of Purpose.
.
3. Public License Fallback. Should any part of the Waiver for any reason
be judged legally invalid or ineffective under applicable law, then the
Waiver shall be preserved to the maximum extent permitted taking into
account Affirmer's express Statement of Purpose. In addition, to the
extent the Waiver is so judged Affirmer hereby grants to each affected
person a royalty-free, non transferable, non sublicensable, non
exclusive, irrevocable and unconditional license to exercise Affirmer's
Copyright and Related Rights in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or
future medium and for any number of copies, and (iv) for any purpose
whatsoever, including without limitation commercial, advertising or
promotional purposes (the "License"). The License shall be deemed
effective as of the date CC0 was applied by Affirmer to the Work. Should
any part of the License for any reason be judged legally invalid or
ineffective under applicable law, such partial invalidity or
ineffectiveness shall not invalidate the remainder of the License, and
in such case Affirmer hereby affirms that he or she will not (i)
exercise any of his or her remaining Copyright and Related Rights in the
Work or (ii) assert any associated claims and causes of action with
respect to the Work, in either case contrary to Affirmer's express
Statement of Purpose.
.
4. Limitations and Disclaimers.
.
a. No trademark or patent rights held by Affirmer are waived, abandoned,
surrendered, licensed or otherwise affected by this document.
.
b. Affirmer offers the Work as-is and makes no representations or
warranties of any kind concerning the Work, express, implied, statutory
or otherwise, including without limitation warranties of title,
merchantability, fitness for a particular purpose, non infringement, or
the absence of latent or other defects, accuracy, or the present or
absence of errors, whether or not discoverable, all to the greatest
extent permissible under applicable law.
.
c. Affirmer disclaims responsibility for clearing rights of other
persons that may apply to the Work or any use thereof, including without
limitation any person's Copyright and Related Rights in the Work.
Further, Affirmer disclaims responsibility for obtaining any necessary
consents, permissions or other rights required for any use of the Work.
.
d. Affirmer understands and acknowledges that Creative Commons is not a
party to this document and has no duty or obligation with respect to
this CC0 or use of the Work.
License: Expat
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
.
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
License: GPL-2.0
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; version 2, dated June, 1991.
.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
.
On Debian systems, the complete text of the GNU General Public License
can be found in /usr/share/common-licenses/GPL-2 file.
License: SIL-OFL-1.1
---------------------------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
---------------------------------------------------------------------------
.
PREAMBLE
.
The goals of the Open Font License (OFL) are to stimulate worldwide development
of collaborative font projects, to support the font creation efforts of academic
and linguistic communities, and to provide a free and open framework in which
fonts may be shared and improved in partnership with others.
.
The OFL allows the licensed fonts to be used, studied, modified and redistributed
freely as long as they are not sold by themselves. The fonts, including any
derivative works, can be bundled, embedded, redistributed and/or sold with any
software provided that any reserved names are not used by derivative works. The
fonts and derivatives, however, cannot be released under any other type of license.
The requirement for fonts to remain under this license does not apply to any
document created using the fonts or their derivatives.
.
DEFINITIONS
.
"Font Software" refers to the set of files released by the Copyright Holder(s) under
this license and clearly marked as such. This may include source files, build
scripts and documentation.
.
"Reserved Font Name" refers to any names specified as such after the copyright
statement(s).
.
"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).
.
"Modified Version" refers to any derivative made by adding to, deleting, or
substituting -- in part or in whole -- any of the components of the Original Version,
by changing formats or by porting the Font Software to a new environment.
.
"Author" refers to any designer, engineer, programmer, technical writer or other
person who contributed to the Font Software.
.
PERMISSION & CONDITIONS
.
Permission is hereby granted, free of charge, to any person obtaining a copy of the
Font Software, to use, study, copy, merge, embed, modify, redistribute, and sell
modified and unmodified copies of the Font Software, subject to the following
conditions:
.
1) Neither the Font Software nor any of its individual components, in Original or
Modified Versions, may be sold by itself.
.
2) Original or Modified Versions of the Font Software may be bundled, redistributed
and/or sold with any software, provided that each copy contains the above copyright
notice and this license. These can be included either as stand-alone text files,
human-readable headers or in the appropriate machine-readable metadata fields within
text or binary files as long as those fields can be easily viewed by the user.
.
3) No Modified Version of the Font Software may use the Reserved Font Name(s) unless
explicit written permission is granted by the corresponding Copyright Holder. This
restriction only applies to the primary font name as presented to the users.
.
4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font Software shall
not be used to promote, endorse or advertise any Modified Version, except to
acknowledge the contribution(s) of the Copyright Holder(s) and the Author(s) or with
their explicit written permission.
.
5) The Font Software, modified or unmodified, in part or in whole, must be distributed
entirely under this license, and must not be distributed under any other license. The
requirement for fonts to remain under this license does not apply to any document
created using the Font Software.
.
TERMINATION
.
This license becomes null and void if any of the above conditions are not met.
.
DISCLAIMER
.
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER
RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR
INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE.

View File

@@ -6,7 +6,7 @@ import pytz
from django.core.management.base import BaseCommand
from django.db.models import Count
from zerver.models import UserProfile, Realm, Stream, Message, Recipient, UserActivity, \
Subscription, UserMessage
Subscription, UserMessage, get_realm
MOBILE_CLIENT_LIST = ["Android", "ios"]
HUMAN_CLIENT_LIST = MOBILE_CLIENT_LIST + ["website"]
@@ -70,7 +70,7 @@ class Command(BaseCommand):
def handle(self, *args, **options):
if options['realms']:
try:
realms = [Realm.objects.get(domain=domain) for domain in options['realms']]
realms = [get_realm(domain) for domain in options['realms']]
except Realm.DoesNotExist, e:
print e
exit(1)

View File

@@ -2,7 +2,7 @@ from __future__ import absolute_import
from django.core.management.base import BaseCommand
from django.db.models import Q
from zerver.models import Realm, Stream, Message, Subscription, Recipient
from zerver.models import Realm, Stream, Message, Subscription, Recipient, get_realm
class Command(BaseCommand):
help = "Generate statistics on the streams for a realm."
@@ -14,7 +14,7 @@ class Command(BaseCommand):
def handle(self, *args, **options):
if options['realms']:
try:
realms = [Realm.objects.get(domain=domain) for domain in options['realms']]
realms = [get_realm(domain) for domain in options['realms']]
except Realm.DoesNotExist, e:
print e
exit(1)

View File

@@ -4,7 +4,7 @@ import datetime
import pytz
from django.core.management.base import BaseCommand
from zerver.models import UserProfile, Realm, Stream, Message
from zerver.models import UserProfile, Realm, Stream, Message, get_realm
class Command(BaseCommand):
help = "Generate statistics on user activity."
@@ -21,7 +21,7 @@ class Command(BaseCommand):
def handle(self, *args, **options):
if options['realms']:
try:
realms = [Realm.objects.get(domain=domain) for domain in options['realms']]
realms = [get_realm(domain) for domain in options['realms']]
except Realm.DoesNotExist, e:
print e
exit(1)

View File

@@ -816,7 +816,7 @@ def get_realm_activity(request, realm):
all_user_records = {}
try:
admins = Realm.objects.get(domain=realm).get_admin_users()
admins = get_realm(realm).get_admin_users()
except Realm.DoesNotExist:
return HttpResponseNotFound("Realm %s does not exist" % (realm,))

View File

@@ -49,7 +49,7 @@ client = zulip.Client(
site=config.ZULIP_SITE,
api_key=config.ZULIP_API_KEY,
client="ZulipBasecamp/" + VERSION)
user_agent = "Basecamp To Zulip Mirroring script (support@zulip.com)"
user_agent = "Basecamp To Zulip Mirroring script (zulip-devel@googlegroups.com)"
htmlParser = HTMLParser()
# find some form of JSON loader/dumper, with a preference order for speed.

View File

@@ -58,7 +58,7 @@ client = zulip.Client(
site=config.ZULIP_SITE,
api_key=config.ZULIP_API_KEY,
client="ZulipCodebase/" + VERSION)
user_agent = "Codebase To Zulip Mirroring script (support@zulip.com)"
user_agent = "Codebase To Zulip Mirroring script (zulip-devel@googlegroups.com)"
# find some form of JSON loader/dumper, with a preference order for speed.
json_implementations = ['ujson', 'cjson', 'simplejson', 'json']

View File

@@ -26,7 +26,7 @@ package_info = dict(
version=version(),
description='Bindings for the Zulip message API',
author='Zulip, Inc.',
author_email='support@zulip.com',
author_email='zulip-devel@googlegroups.com',
classifiers=[
'Development Status :: 3 - Alpha',
'Environment :: Web Environment',

View File

@@ -54,6 +54,6 @@ while backoff.keep_going():
print ""
print ""
print "ERROR: The Jabber mirroring bot is unable to continue mirroring Jabber."
print "Please contact support@zulip.com if you need assistance."
print "Please contact zulip-devel@googlegroups.com if you need assistance."
print ""
sys.exit(1)

changelog.md Normal file
View File

@@ -0,0 +1,13 @@
# Change Log
All notable changes to this project will be documented in this file.
## [Unreleased]
###
- Turn off desktop and audible notifications for streams by default.
- Added support for the LDAP authentication integration creating new users.
- Added new endpoint to support Google auth on mobile.
- Fixed desktop notifications in modern Firefox.
- Fixed several installation issues for both production and development environments.
- Improved documentation for outgoing SMTP and the email mirror integration.

View File

@@ -2,84 +2,215 @@
New Feature Tutorial
====================
.. attention::
This tutorial is an unfinished work -- contributions welcome!
The changes needed to add a new feature will vary, of course, but this document
provides a general outline of what you may need to do, as well as an example of
the specific steps needed to add a new feature: adding a new option to the
application that is dynamically synced through the data system in real-time to
all browsers the user may have open.
The changes needed to add a new feature will vary, of course. We give an
example here that illustrates some of the common steps needed. We describe
the process of adding a new setting for admins that restricts inviting new
users to admins only.
Backend Changes
General Process
===============
Adding a field to the database
------------------------------
The server accesses the underlying database in `zerver/models.py`. Add
a new field in the appropriate class, `realm_invite_by_admins_only`
in the `Realm` class in this case.
**Update the model:** The server accesses the underlying database in `zerver/models.py`. Add a new field in the appropriate class.
Once you do so, you need to create the migration and run it; the
process is documented at:
https://docs.djangoproject.com/en/1.8/topics/migrations/
**Create and run the migration:** To create and apply a migration, run: ::
Once you've run the migration, to test your changes, you'll want to
restart memcached on your development server (``/etc/init.d/memcached restart``) and
then restart ``run-dev.py`` to avoid interacting with cached objects.
./manage.py makemigrations
./manage.py migrate
**Test your changes:** Once you've run the migration, restart memcached on your
development server (``/etc/init.d/memcached restart``) and then restart
``run-dev.py`` to avoid interacting with cached objects.
Backend changes
---------------
You should add code in `zerver/lib/actions.py` to interact with the database,
that actually updates the relevant field. In this case, `do_set_realm_invite_by_admins_only`
is a function that actually updates the field in the database, and sends
an event announcing that this change has been made.
**Database interaction:** Add any necessary code for updating and interacting
with the database in ``zerver/lib/actions.py``. It should update the database and
send an event announcing the change.
You then need to update the `fetch_initial_state_data` and `apply_events` functions
in `zerver/lib/actions.py` to update the state based on the event you just created.
In this case, we add a line
**Application state:** Modify the ``fetch_initial_state_data`` and ``apply_events``
functions in ``zerver/lib/actions.py`` to update the state based on the event you
just created.
::
**Backend implementation:** Make any other modifications to the backend required for
your change.
state['realm_invite_by_admins_only'] = user_profile.realm.invite_by_admins_only
to the `fetch_initial_state_data` function. The `apply_events` function
doesn't need to be updated since
::
elif event['type'] == 'realm':
    field = 'realm_' + event['property']
    state[field] = event['value']
already took care of our event.
Then update `zerver/views/__init__.py` to actually call your function.
In the dictionary that sets the JavaScript `page_params` dictionary,
add a value for your feature.
::
realm_invite_by_admins_only = register_ret['realm_invite_by_admins_only']
Perhaps your new option controls some other backend rendering: in our case,
we test for this option in the `home` view when adding a variable to the response.
The functions in this file control the generation of various pages served
(along with the Django templates).
Our new feature also shows up in the administration tab (as a checkbox),
so we need to update the `update_realm` function.
Finally, add tests for your backend changes; at the very least you
should add a test of your event data flowing through the system in
``test_events.py``.
**Testing:** At the very least, add a test of your event data flowing through
the system in ``test_events.py``.
Frontend changes
----------------
You need to change various things on the front end. In this case, the relevant files
are `static/js/server_events.js`, `static/js/admin.js`, `static/styles/zulip.css`
and `static/templates/admin_tab.handlebars`.
**JavaScript:** Zulip's JavaScript is located in the directory ``static/js/``.
The exact files you may need to change depend on your feature. If you've added a
new event that is sent to clients, be sure to add a handler for it to
``static/js/server_events.js``.
**CSS:** The primary CSS file is ``static/styles/zulip.css``. If your new
feature requires UI changes, you may need to add additional CSS to this file.
**Templates:** The initial page structure is rendered via Django templates
located in ``templates/zerver``. For JavaScript, Zulip uses Handlebars templates located in
``static/templates``. Templates are precompiled as part of the build/deploy
process.
**Testing:** There are two types of frontend tests: node-based unit tests and
blackbox end-to-end tests. The blackbox tests are run in a headless browser
using Casper.js and are located in ``zerver/tests/frontend/tests/``. The unit
tests use Node's ``assert`` module and are located in ``zerver/tests/frontend/node/``.
For more information on writing and running tests see the :doc:`testing
documentation <testing>`.
Example Feature
===============
This example describes the process of adding a new setting to Zulip:
a flag that restricts inviting new users to admins only (the default behavior
is that any user can invite other users). It is based on an actual Zulip feature,
and you can review `the original commit in the Zulip git repo <https://github.com/zulip/zulip/commit/5b7f3466baee565b8e5099bcbd3e1ccdbdb0a408>`_.
(Note that Zulip has since been upgraded from Django 1.6 to 1.8, so the migration
format has changed.)
First, update the database and model to store the new setting. Add a
new boolean field, ``invite_by_admins_only``, to the Realm model in
``zerver/models.py``. (The ``realm_`` prefix is only added to the
client-facing name used in ``page_params`` and the event state.)
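A minimal sketch of that model change (the field's default value here is an
assumption; check the original commit for the exact definition): ::
# zerver/models.py (sketch)
class Realm(models.Model):
    # ... existing fields ...
    invite_by_admins_only = models.BooleanField(default=False)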
Then create a Django migration that adds a new field, ``invite_by_admins_only``,
to the ``zerver_realm`` table.
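With Django 1.8, the generated migration will look roughly like the sketch
below (the migration number and dependency are placeholders; ``./manage.py
makemigrations`` produces the real file): ::
# zerver/migrations/00XX_realm_invite_by_admins_only.py (sketch)
from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ('zerver', '00XX_previous_migration'),  # placeholder dependency
    ]
    operations = [
        migrations.AddField(
            model_name='realm',
            name='invite_by_admins_only',
            field=models.BooleanField(default=False),
        ),
    ]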
In ``zerver/lib/actions.py``, create a new function named
``do_set_realm_invite_by_admins_only``. This function will update the database
and trigger an event to notify clients when this setting changes. In this case
there was an existing ``realm|update`` event type which was used for setting
similar flags on the Realm model, so it was possible to add a new property to
that event rather than creating a new one. The property name matches the
database field to make it easy to understand what it indicates.
The second argument to ``send_event`` is the list of users whose browser
sessions should be notified. Depending on the setting, this can be a single user
(if the setting is a personal one, like time display format), only the members
of a particular stream, or all active users in a realm. ::
# zerver/lib/actions.py
def do_set_realm_invite_by_admins_only(realm, invite_by_admins_only):
    realm.invite_by_admins_only = invite_by_admins_only
    realm.save(update_fields=['invite_by_admins_only'])
    event = dict(
        type="realm",
        op="update",
        property='invite_by_admins_only',
        value=invite_by_admins_only,
    )
    send_event(event, active_user_ids(realm))
    return {}
You then need to add code that will handle the event and update the application
state. In ``zerver/lib/actions.py`` update the ``fetch_initial_state_data`` and
``apply_events`` functions. ::
def fetch_initial_state_data(user_profile, event_types, queue_id):
    # ...
    state['realm_invite_by_admins_only'] = user_profile.realm.invite_by_admins_only
In this case you don't need to change ``apply_events`` because there is already
code that will correctly handle the realm update event type: ::
def apply_events(state, events, user_profile):
    for event in events:
        # ...
        elif event['type'] == 'realm':
            field = 'realm_' + event['property']
            state[field] = event['value']
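It is also worth adding a test of this flow to ``test_events.py``; here is a
minimal sketch (the ``do_test`` helper and its signature are assumptions about
the test harness, which applies the action and checks that the resulting events
update the state correctly): ::
# zerver/tests/test_events.py (sketch; helper name is an assumption)
def test_change_realm_invite_by_admins_only(self):
    # Run the action; the harness verifies that applying the generated
    # events to the initial state matches a freshly fetched state.
    self.do_test(lambda: do_set_realm_invite_by_admins_only(
        self.user_profile.realm, True))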
You then need to add a view for clients to access that will call the newly-added
``actions.py`` code to update the database. This example feature adds a new
parameter that should be sent to clients when the application loads and be
accessible via JavaScript, and there is already a view that does this for
related flags: ``update_realm``. So in this case, we can add our code to the
existing view instead of creating a new one. ::
# zerver/views/__init__.py
def home(request):
    # ...
    page_params = dict(
        # ...
        realm_invite_by_admins_only = register_ret['realm_invite_by_admins_only'],
        # ...
    )
Since this feature also adds a checkbox to the admin page, and adds a new
property to the Realm model that can be modified from there, you also need to make
changes to the ``update_realm`` function in the same file: ::
# zerver/views/__init__.py
def update_realm(request, user_profile,
                 name=REQ(validator=check_string, default=None),
                 restricted_to_domain=REQ(validator=check_bool, default=None),
                 invite_by_admins_only=REQ(validator=check_bool, default=None)):
    # ...
    if invite_by_admins_only is not None and \
            realm.invite_by_admins_only != invite_by_admins_only:
        do_set_realm_invite_by_admins_only(realm, invite_by_admins_only)
        data['invite_by_admins_only'] = invite_by_admins_only
Then make the required front end changes: in this case a checkbox needs to be
added to the admin page (and its value added to the data sent back to server
when a realm is updated) and the change event needs to be handled on the client.
To add the checkbox to the admin page, modify the relevant template,
``static/templates/admin_tab.handlebars`` (omitted here since it is relatively
straightforward). Then add code to handle changes to the new form control in
``static/js/admin.js``. ::
var url = "/json/realm";
var new_invite_by_admins_only =
    $("#id_realm_invite_by_admins_only").prop("checked");
data.invite_by_admins_only = JSON.stringify(new_invite_by_admins_only);
channel.patch({
    url: url,
    data: data,
    success: function (data) {
        // ...
        if (data.invite_by_admins_only) {
            ui.report_success("New users must be invited by an admin!",
                              invite_by_admins_only_status);
        } else {
            ui.report_success("Any user may now invite new users!",
                              invite_by_admins_only_status);
        }
        // ...
    }
});
Finally, update ``server_events.js`` to handle related events coming from the
server. ::
// static/js/server_events.js
function get_events_success(events) {
    // ...
    var dispatch_event = function dispatch_event(event) {
        switch (event.type) {
        // ...
        case 'realm':
            if (event.op === 'update' && event.property === 'invite_by_admins_only') {
                page_params.realm_invite_by_admins_only = event.value;
            }
            break;
        }
    };
    // ...
}
Any code needed to update the UI should be placed in the ``dispatch_event``
callback (rather than in the ``channel.patch`` success handler). This ensures the appropriate code
will run even if the changes are made in another browser window. In this example
most of the changes are on the backend, so no UI updates are required.

View File

@@ -5,6 +5,9 @@ import logging
import subprocess
if __name__ == "__main__":
if 'posix' in os.name and os.geteuid() == 0:
from django.core.management.base import CommandError
raise CommandError("manage.py should not be run as root.")
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "zproject.settings")
from django.conf import settings

View File

@@ -17,6 +17,7 @@ SUPPORTED_PLATFORMS = {
APT_DEPENDENCIES = {
"trusty": [
"closure-compiler",
"libfreetype6-dev",
"libffi-dev",
"memcached",
"rabbitmq-server",
@@ -51,6 +52,16 @@ NPM_DEPENDENCIES = {
VENV_PATH="/srv/zulip-venv"
ZULIP_PATH="/srv/zulip"
if not os.path.exists(os.path.join(os.path.dirname(__file__), ".git")):
print "Error: No Zulip git repository present at /srv/zulip!"
print "To setup the Zulip development environment, you should clone the code"
print "from GitHub, rather than using a Zulip production release tarball."
sys.exit(1)
# TODO: Parse arguments properly
if "--travis" in sys.argv:
ZULIP_PATH="."
# tsearch-extras is an extension to postgres's built-in full-text search.
# TODO: use a real APT repository
TSEARCH_URL_BASE = "https://dl.dropboxusercontent.com/u/283158365/zuliposs/"
@@ -159,8 +170,12 @@ def main():
os.system("tools/download-zxcvbn")
os.system("tools/emoji_dump/build_emoji")
os.system("generate_secrets.py -d")
if "--travis" in sys.argv:
os.system("sudo service rabbitmq-server restart")
os.system("sudo service redis-server restart")
os.system("sudo service memcached restart")
sh.configure_rabbitmq(**LOUD)
sh.postgres_init_db(**LOUD)
sh.postgres_init_dev_db(**LOUD)
sh.do_destroy_rebuild_database(**LOUD)
sh.postgres_init_test_db(**LOUD)
sh.do_destroy_rebuild_test_database(**LOUD)

View File

@@ -1,22 +0,0 @@
[main]
server = puppet.zulip.com
environment = production
confdir = /etc/puppet
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
templatedir=$confdir/templates
prerun_command=/etc/puppet/etckeeper-commit-pre
postrun_command=/etc/puppet/etckeeper-commit-post
modulepath = /root/zulip/puppet:/etc/puppet/modules:/usr/share/puppet/modules
[master]
environment = production
manifest = $confdir/environments/$environment/manifests/site.pp
modulepath = $confdir/environments/$environment/modules
[agent]
report = true
show_diff = true
environment = production

View File

@@ -1,6 +1,6 @@
# Redis configuration file example
# Note on units: when memory size is needed, it is possible to specifiy
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
@@ -12,6 +12,26 @@
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis server but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
################################ GENERAL #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
@@ -24,9 +44,14 @@ pidfile /var/run/redis/redis-server.pid
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379
# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for incoming connections.
# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
bind 127.0.0.1
# Specify the path for the unix socket that will be used to listen for
@@ -39,15 +64,31 @@ bind 127.0.0.1
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
# Set server verbosity to 'debug'
# it can be one of:
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. Also 'stdout' can be used to force
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile /var/log/redis/redis-server.log
@@ -59,7 +100,7 @@ logfile /var/log/redis/redis-server.log
# Specify the syslog identity.
# syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
@@ -67,7 +108,7 @@ logfile /var/log/redis/redis-server.log
# dbid is a number between 0 and 'databases'-1
databases 16
################################ SNAPSHOTTING #################################
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
@@ -82,10 +123,31 @@ databases 16
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving at all commenting all the "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
# save 900 1
# save 300 10
# save 60 10000
save 900 1
save 300 10
save 60 10000
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
@@ -93,6 +155,15 @@ databases 16
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB
dbfilename dump.rdb
@@ -100,9 +171,9 @@ dbfilename dump.rdb
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# Also the Append Only File will be created inside this directory.
#
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /var/lib/redis
@@ -122,27 +193,46 @@ dir /var/lib/redis
#
# masterauth <master-password>
# When a slave lost the connection with the master, or when the replication
# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of data data, or the
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale data is set to 'no' the slave will reply with
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
#
slave-serve-stale-data yes
# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only yes
# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10
# The following option sets a timeout for both Bulk transfer I/O timeout and
# master data or ping response timeout. The default value is 60 seconds.
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
@@ -150,6 +240,80 @@ slave-serve-stale-data yes
#
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The biggest the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb
# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600
# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100
# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEES that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
@@ -158,7 +322,7 @@ slave-serve-stale-data yes
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
@@ -167,33 +331,39 @@ slave-serve-stale-data yes
# Command renaming.
#
# It is possilbe to change the name of dangerous commands in a shared
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# of hard to guess so that it will be still available for internal-use
# tools but not available for general clients.
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possilbe to completely kill a command renaming it into
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
################################### LIMITS ####################################
# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limits.
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 128
# maxclients 10000
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# accordingly to the eviction policy selected (see maxmemmory-policy).
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
@@ -201,7 +371,7 @@ slave-serve-stale-data yes
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# an hard memory limit for an instance (using the 'noeviction' policy).
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
@@ -217,16 +387,16 @@ slave-serve-stale-data yes
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached? You can select among five behavior:
#
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key accordingly to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys->random -> remove a random key, any key
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with all the kind of policies, Redis will return an error on write
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are not suitable keys for eviction.
#
# At the date of writing this commands are: set setnx setex append
@@ -249,45 +419,51 @@ slave-serve-stale-data yes
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. If you can live
# with the idea that the latest records will be lost if something like a crash
# happens this is the preferred way to run Redis. If instead you care a lot
# about your data and don't want to that a single record can get lost you should
# enable the append only mode: when this mode is enabled Redis will append
# every write operation received in the file appendonly.aof. This file will
# be read on startup in order to rebuild the full dataset in memory.
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# Still if append only mode is enabled Redis will load the data from the
# log file at startup ignoring the dump.rdb file.
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# IMPORTANT: Check the BGREWRITEAOF to check how to rewrite the append
# log file in background when it gets too big.
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
appendonly no
# The name of the append only file (default: "appendonly.aof")
# appendfilename appendonly.aof
appendfilename "appendonly.aof"
# The fsync() call tells the Operating System to actually write data on disk
# instead to wait for more data in the output buffer. Some OS will really flush
# instead to wait for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log . Slow, Safest.
# everysec: fsync only if one second passed since the last fsync. Compromise.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec" that's usually the right compromise between
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will will let the operating system flush the output buffer when
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
# appendfsync always
@@ -305,21 +481,22 @@ appendfsync everysec
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving the durability of Redis is
# the same as "appendfsync none", that in pratical terms means that it is
# possible to lost up to 30 seconds of log in the worst scenario (with the
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size will growth by the specified percentage.
#
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (or if no rewrite happened since the restart, the size of
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
@@ -328,12 +505,30 @@ no-appendfsync-on-rewrite no
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a precentage of zero in order to disable the automatic AOF
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
################################ LUA SCRIPTING ###############################
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceed the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write commands was
# already issue by the script but the user don't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000
################################## SLOW LOG ###################################
# The Redis Slow Log is a system to log queries that exceeded a specified
@@ -342,7 +537,7 @@ auto-aof-rewrite-min-size 64mb
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
@@ -358,88 +553,59 @@ slowlog-log-slower-than 10000
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
################################ VIRTUAL MEMORY ###############################
############################# Event notification ##############################
### WARNING! Virtual Memory is deprecated in Redis 2.4
### The use of Virtual Memory is strongly discouraged.
# Virtual Memory allows Redis to work with datasets bigger than the actual
# amount of RAM needed to hold the whole dataset in memory.
# In order to do so very used keys are taken in memory while the other keys
# are swapped into a swap file, similarly to what operating systems do
# with memory pages.
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/keyspace-events
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# To enable VM just set 'vm-enabled' to yes, and set the following three
# VM parameters accordingly to your needs.
vm-enabled no
# vm-enabled yes
# This is the path of the Redis swap file. As you can guess, swap files
# can't be shared by different Redis instances, so make sure to use a swap
# file for every redis process you are running. Redis will complain if the
# swap file is already in use.
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# The best kind of storage for the Redis swap file (that's accessed at random)
# is a Solid State Disk (SSD).
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# *** WARNING *** if you are using a shared hosting the default of putting
# the swap file under /tmp is not secure. Create a dir with access granted
# only to Redis user and configure Redis to create the swap file there.
vm-swap-file /var/lib/redis/redis.swap
# vm-max-memory configures the VM to use at max the specified amount of
# RAM. Everything that deos not fit will be swapped on disk *if* possible, that
# is, if there is still enough contiguous space in the swap file.
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# With vm-max-memory 0 the system will swap everything it can. Not a good
# default, just specify the max amount of RAM you can in bytes, but it's
# better to leave some margin. For instance specify an amount of RAM
# that's more or less between 60 and 80% of your free RAM.
vm-max-memory 0
# Redis swap files is split into pages. An object can be saved using multiple
# contiguous pages, but pages can't be shared between different objects.
# So if your page is too big, small objects swapped out on disk will waste
# a lot of space. If you page is too small, there is less space in the swap
# file (assuming you configured the same number of total swap file pages).
# The "notify-keyspace-events" takes as argument a string that is composed
# by zero or multiple characters. The empty string means that notifications
# are disabled at all.
#
# If you use a lot of small objects, use a page size of 64 or 32 bytes.
# If you use a lot of big objects, use a bigger page size.
# If unsure, use the default :)
vm-page-size 32
# Number of total memory pages in the swap file.
# Given that the page table (a bitmap of free/used pages) is taken in memory,
# every 8 pages on disk will consume 1 byte of RAM.
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# The total swap size is vm-page-size * vm-pages
# notify-keyspace-events Elg
#
# With the default of 32-bytes memory pages and 134217728 pages Redis will
# use a 4 GB swap file, that will use 16 MB of RAM for the page table.
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# It's better to use the smallest acceptable value for your application,
# but the default is large in order to work in most conditions.
vm-pages 134217728
# Max number of VM I/O threads running at the same time.
# This threads are used to read/write data from/to swap file, since they
# also encode and decode objects from disk to memory or the reverse, a bigger
# number of threads can help with big objects even if they can't help with
# I/O itself as the physical device may not be able to couple with many
# reads/writes operations at the same time.
# notify-keyspace-events Ex
#
# The special value of 0 turn off threaded I/O and enables the blocking
# Virtual Memory implementation.
vm-max-threads 4
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
############################### ADVANCED CONFIG ###############################
# Hashes are encoded in a special way (much more memory efficient) when they
# have at max a given numer of elements, and the biggest element does not
# exceed a given threshold. You can configure this limits with the following
# configuration directives.
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
@@ -462,12 +628,12 @@ zset-max-ziplist-value 64
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into an hash table
# that is rhashing, the more rehashing "steps" are performed, so if the
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
#
# The default is to use this millisecond 10 times every second in order to
# active rehashing the main dictionaries, freeing memory when possible.
#
@@ -480,12 +646,65 @@ zset-max-ziplist-value 64
# want to free memory asap when possible.
activerehashing yes
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all redis server but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# include /path/to/local.conf
# include /path/to/other.conf
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients
# slave -> slave clients and MONITOR clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform accordingly to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
# Zulip-specific configuration: disable saving to disk.
save ""

View File

@@ -40,14 +40,6 @@ class zulip::base {
group => 'zulip',
}
file { '/etc/puppet/puppet.conf':
ensure => file,
mode => 640,
owner => "root",
group => "root",
source => 'puppet:///modules/zulip/puppet.conf',
}
file { '/etc/security/limits.conf':
ensure => file,
mode => 640,
@@ -56,6 +48,13 @@ class zulip::base {
source => 'puppet:///modules/zulip/limits.conf',
}
# This directory is written to by cron jobs for reading by Nagios
file { '/var/lib/nagios_state/':
ensure => directory,
group => 'zulip',
mode => 774,
}
file { '/var/log/zulip':
ensure => 'directory',
owner => 'zulip',

View File

@@ -1,5 +1,6 @@
class zulip::rabbit {
$rabbit_packages = [# Needed to run rabbitmq
"erlang-base",
"rabbitmq-server",
]
package { $rabbit_packages: ensure => "installed" }
@@ -39,5 +40,21 @@ class zulip::rabbit {
source => "puppet:///modules/zulip/rabbitmq/rabbitmq.config",
}
# epmd doesn't have an init script. This won't leak epmd processes
# because epmd checks if one is already running and exits if so.
#
# TODO: Ideally we'd still check whether it's already running, to keep the
# puppet log of what is being changed clean.
exec { "epmd":
command => "epmd -daemon",
require => Package[erlang-base],
path => "/usr/bin/:/bin/",
}
service { "rabbitmq-server":
ensure => running,
require => Exec["epmd"],
}
# TODO: Should also call "configure-rabbitmq" exactly once
}

View File

@@ -49,8 +49,8 @@ $wgLogo = "$wgStylePath/common/images/wiki.png";
$wgEnableEmail = true;
$wgEnableUserEmail = true; # UPO
$wgEmergencyContact = "support@zulip.com";
$wgPasswordSender = "support@zulip.com";
$wgEmergencyContact = "zulip-devel@googlegroups.com";
$wgPasswordSender = "zulip-devel@googlegroups.com";
$wgEnotifUserTalk = true; # UPO
$wgEnotifWatchlist = true; # UPO

View File

@@ -102,13 +102,6 @@ class zulip_internal::base {
group => "nagios",
mode => 600,
}
file { '/var/lib/nagios_state/':
ensure => directory,
require => User['nagios'],
owner => "nagios",
group => "nagios",
mode => 777,
}
file { '/var/lib/nagios/.ssh':
ensure => directory,
require => File['/var/lib/nagios/'],

View File

@@ -33,7 +33,7 @@ jwt==0.3.2
mandrill==1.0.57
mock==1.0.1
oauth2client==1.4.11
oauthlib==0.1.3
oauthlib==1.0.3
pika==0.9.14
postmonkey==1.0b0
psycopg2==2.6
@@ -56,7 +56,7 @@ smmap==0.9.0
sockjs-tornado==1.0.1
sourcemap==0.1.8
tornado==2.4.1
twitter==1.17.0
python-twitter==1.1
ujson==1.33
uritemplate==0.6
wsgiref==0.1.2

View File

@@ -12,8 +12,7 @@ EOF
apt-get update
apt-get -y dist-upgrade
apt-get install -y puppet git
cp -a /root/zulip/puppet/zulip/files/puppet.conf /etc/puppet/
apt-get install -y puppet git python
mkdir -p /etc/zulip
echo -e "[machine]\npuppet_classes = zulip::voyager\ndeploy_type = voyager" > /etc/zulip/zulip.conf
@@ -29,6 +28,17 @@ fi
cp -a /root/zulip/zproject/local_settings_template.py /etc/zulip/settings.py
ln -nsf /etc/zulip/settings.py /root/zulip/zproject/local_settings.py
if ! rabbitmqctl status >/dev/null; then
set +x
echo; echo "RabbitMQ seems to not have started properly after the installation process."
echo "Often, this can be caused by misconfigured /etc/hosts in virtualized environments"
echo "See https://github.com/zulip/zulip/issues/53#issuecomment-143805121"
echo "for more information"
echo
set -x
exit 1
fi
/root/zulip/scripts/setup/configure-rabbitmq
/root/zulip/scripts/setup/postgres-init-db
@@ -56,4 +66,5 @@ cat <<EOF
su zulip -c /home/zulip/deployments/current/scripts/setup/initialize-database
To configure the initial database.
EOF

scripts/setup/flush-memcached (new executable file, +4 lines)
View File

@@ -0,0 +1,4 @@
#!/bin/sh -xe
# Flush memcached
echo 'flush_all' | nc localhost 11211
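For environments without nc, a rough Python equivalent of the same memcached text-protocol exchange (a sketch assuming memcached on the default localhost:11211):

    import socket

    def flush_memcached(host="localhost", port=11211):
        sock = socket.create_connection((host, port), timeout=5)
        try:
            sock.sendall(b"flush_all\r\n")
            return sock.recv(1024).strip() == b"OK"  # server replies "OK\r\n"
        finally:
            sock.close()

    print(flush_memcached())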

View File

@@ -7,6 +7,25 @@ python manage.py checkconfig
python manage.py migrate --noinput
python manage.py createcachetable third_party_api_results
python manage.py initialize_voyager_db
if ! python manage.py initialize_voyager_db; then
set +x
echo
echo -e "\033[32mPopulating default database failed."
echo "After you fix the problem, you will need to do the following before rerunning this:"
echo " * supervisorctl stop all # to stop all services that might be accessing the database"
echo " * scripts/setup/postgres-init-db # run as root to drop and re-create the database"
echo -e "\033[0m"
set -x
exit 1
fi
supervisorctl restart all
echo "Congratulations! You have successfully configured your Zulip database."
echo "If you haven't already, you should configure email in /etc/zulip/settings.py"
echo "And then you should now be able to visit your server over https and sign up using"
echo "an email address that ends with @ADMIN_DOMAIN (from your settings file)."
echo ""
echo "See README.prod.md for instructions on how to confirm your Zulip install is healthy, "
echo " change ADMIN_DOMAIN, debug common issues, and otherwise finish setting things up."

View File

@@ -1,3 +1,9 @@
#!/bin/bash
set -e
set -o pipefail
if [ "$EUID" -ne 0 ]; then
echo "Error: The installation script must be run as root" >&2
exit 1
fi
mkdir -p /var/log/zulip
"$(dirname "$(dirname "$0")")/lib/install" "$@" 2>&1 | tee -a /var/log/zulip/install.log

View File

@@ -15,5 +15,7 @@ CREATE SCHEMA zulip AUTHORIZATION zulip;
CREATE EXTENSION tsearch_extras SCHEMA zulip;
EOF
sh "$(dirname "$0")/flush-memcached"
echo "Database created"

View File

@@ -23,7 +23,7 @@ include apt
for pclass in re.split(r'\s*,\s*', config.get('machine', 'puppet_classes')):
puppet_config += "include %s\n" % (pclass,)
puppet_cmd = ["puppet", "apply", "-e", puppet_config]
puppet_cmd = ["puppet", "apply", "--modulepath=/root/zulip/puppet", "-e", puppet_config]
puppet_cmd += extra_args
if force:

View File

@@ -33,7 +33,7 @@
<p>We know this is stressful, but we still love you.</p>
<p>If you'd like, you can <a href="mailto:support@zulip.com?Subject=404%20error%20on%20%7Bwhich%20URL%3F%7D&Body=Hi%20there%21%0A%0AI%20was%20trying%20to%20do%20%7Bwhat%20were%20you%20trying%20to%20do%3F%7D%20at%20around%20%7Bwhen%20was%20this%3F%7D%20when%20I%20got%20a%20404%20error%20while%20accessing%20%7Bwhich%20URL%3F%7D.%0A%0AThanks!%0A%0ASincerely%2C%20%0A%0A%7BYour%20name%7D">drop us a line</a> to let us know what happened.</p>
<p>If you'd like, you can <a href="mailto:zulip-devel@googlegroups.com?Subject=404%20error%20on%20%7Bwhich%20URL%3F%7D&Body=Hi%20there%21%0A%0AI%20was%20trying%20to%20do%20%7Bwhat%20were%20you%20trying%20to%20do%3F%7D%20at%20around%20%7Bwhen%20was%20this%3F%7D%20when%20I%20got%20a%20404%20error%20while%20accessing%20%7Bwhich%20URL%3F%7D.%0A%0AThanks!%0A%0ASincerely%2C%20%0A%0A%7BYour%20name%7D">drop us a line</a> to let us know what happened.</p>
</div>
</div>

View File

@@ -39,7 +39,7 @@
data-screen-name="ZulipStatus"
>@ZulipStatus on Twitter</a>.</p>
<p>If you'd like, you can <a href="mailto:support@zulip.com?Subject=500%20error%20on%20%7Bwhich%20URL%3F%7D&Body=Hi%20there%21%0A%0AI%20was%20trying%20to%20do%20%7Bwhat%20were%20you%20trying%20to%20do%3F%7D%20at%20around%20%7Bwhen%20was%20this%3F%7D%20when%20I%20got%20a%20500%20error%20while%20accessing%20%7Bwhich%20URL%3F%7D.%0A%0AThanks!%0A%0ASincerely%2C%20%0A%0A%7BYour%20name%7D">drop us a line</a> to let us know what happened.</p>
<p>If you'd like, you can <a href="mailto:zulip-devel@googlegroups.com?Subject=500%20error%20on%20%7Bwhich%20URL%3F%7D&Body=Hi%20there%21%0A%0AI%20was%20trying%20to%20do%20%7Bwhat%20were%20you%20trying%20to%20do%3F%7D%20at%20around%20%7Bwhen%20was%20this%3F%7D%20when%20I%20got%20a%20500%20error%20while%20accessing%20%7Bwhich%20URL%3F%7D.%0A%0AThanks!%0A%0ASincerely%2C%20%0A%0A%7BYour%20name%7D">drop us a line</a> to let us know what happened.</p>
</div>
</div>

Binary image added (2.6 KiB; preview not shown).

Binary image added (18 KiB; preview not shown).

Binary image added (25 KiB; preview not shown).

View File

@@ -324,7 +324,7 @@ function process_notification(notification) {
} else if (notification.webkit_notify === false && typeof Notification !== "undefined" && $.browser.mozilla === true) {
Notification.requestPermission(function (perm) {
if (perm === 'granted') {
Notification(title, {
notification_object = new Notification(title, {
body: content,
iconUrl: ui.small_avatar_url(message)
});

View File

@@ -8,7 +8,7 @@
<p>The organization you are trying to join, {{ deactivated_domain_name }}, has
been deactivated. Please
contact <a href="mailto:support@zulip.com">support@zulip.com</a> to reactivate
contact <a href="mailto:{{ zulip_administrator }}">{{ zulip_administrator }}</a> to reactivate
this group.</p>
{% endblock %}

View File

@@ -24,7 +24,7 @@
<li>Want to <b>share files</b> in Zulip? No problem! Start a new message and click the paperclip, or just drag-and-drop the file into the textbox.</li>
<li>Trying to <b>get someone's attention</b>? Type in an @-sign followed by their name to "mention" them.</li>
<li>Trying to <b>get someone's attention</b>? Type in an @-sign followed by their name to "mention" them. (Want more or fewer notifications? Select Settings under the gear menu to customize how and when Zulip notifies you of new messages.)</li>
<li>Bulleted lists, bold, code blocks, and more are all at your fingertips - just check out <b>"Message formatting"</b> in the gear menu.</li>

View File

@@ -9,7 +9,7 @@ https://{{ external_host }}/hello has a nice overview of what we're up to, but h
2. Want to share files in Zulip? No problem! Start a new message and click the paperclip, or just drag-and-drop the file into the textbox.
3. Trying to get someone's attention? Type in an @-sign followed by their name to "mention" them.
3. Trying to get someone's attention? Type in an @-sign followed by their name to "mention" them. (Want more or fewer notifications? Select Settings under the gear menu to customize how and when Zulip notifies you of new messages.)
4. Bulleted lists, bold, code blocks, and more are all at your fingertips - just check out "Message formatting" in the gear menu.

View File

@@ -24,7 +24,7 @@
<p class="portico-large-text">With Zulip integrations, your team can stay up-to-date on
code changes, issue tickets, build system results, and much more. If you don't see the system you would like to integrate with, or run into any
trouble, don't hesitate to <a href="mailto:support@zulip.com?subject=Integration%20question">email us</a>.</p>
trouble, don't hesitate to <a href="mailto:zulip-devel@googlegroups.com?subject=Integration%20question">email us</a>.</p>
<p>Many of these integrations require creating a Zulip bot. You can do so on your <a href="/#settings">Zulip settings page</a>. Be sure to note its username and API key.</p>
@@ -146,6 +146,12 @@
<span class="integration-label">New Relic</span>
</a>
</div>
<div class="integration-lozenge integration-pagerduty">
<a class="integration-link integration-pagerduty" href="#pagerduty">
<img class="integration-logo" src="/static/images/integrations/logos/pagerduty.png" alt="Pagerduty logo" />
<span class="integration-label">Pagerduty</span>
</a>
</div>
<div class="integration-lozenge integration-perforce">
<a class="integration-link integration-perforce" href="#perforce">
<img class="integration-logo" src="/static/images/integrations/logos/perforce.png" alt="Perforce logo" />
@@ -270,7 +276,7 @@
example, auto-restarting through <code>supervisord</code>).</p>
<p>Please
contact <a href="mailto:support@zulip.com?subject=Asana%20integration%20question">support@zulip.com</a>
contact <a href="mailto:zulip-devel@googlegroups.com?subject=Asana%20integration%20question">zulip-devel@googlegroups.com</a>
if you'd like assistance with maintaining this integration.
</p>
</li>
@@ -924,7 +930,7 @@
<li>Did you set up a post-build action for your project?</li>
<li>Does the stream you picked (e.g. <code>jenkins</code>) already exist? If not, add yourself to it and try again.</li>
<li>Are your access key and email address correct? Test them using <a href="/api">our curl API</a>.</li>
<li>Still stuck? Email <a href="mailto:support@zulip.com?subject=Jenkins">support@zulip.com</a>.</li>
<li>Still stuck? Email <a href="mailto:zulip-devel@googlegroups.com?subject=Jenkins">zulip-devel@googlegroups.com</a>.</li>
</ul>
</p>
</div>
@@ -1081,6 +1087,30 @@ key = NAGIOS_BOT_API_KEY
directly.</p>
</div>
<div id="pagerduty" class="integration-instructions">
<p>First, create the stream you'd like to use for Pagerduty notifications,
and subscribe all interested parties to this stream. We recommend the
stream name <code>pagerduty</code>. Note that you still need to create
the stream even if you use the recommended name.</p>
<p>Next, in Pagerduty, select Services under Configuration on the top
of the page.</p>
<img class="screenshot" src="/static/images/integrations/pagerduty/001.png" />
<p>Now navigate to the service you want to integrate with Zulip. From
there, click "Add a webhook". Fill in the form like this:</p>
<ul>
<li><b>Name</b>: Zulip</li>
<li><b>Endpoint URL</b>: <code>{{ external_api_uri }}{% verbatim %}/v1/external/pagerduty?api_key=abcdefgh&amp;stream=pagerduty{% endverbatim %}</code></li>
</ul>
<img class="screenshot" src="/static/images/integrations/pagerduty/002.png" />
</div>
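To verify the endpoint is wired up before pointing PagerDuty at it, one can replay a captured webhook body by hand. A hedged sketch (the server URL, API key, and captured-payload file are all placeholders; the payload schema itself is PagerDuty's):

    import json
    import requests

    url = "https://zulip.example.com/api/v1/external/pagerduty"
    params = {"api_key": "abcdefgh", "stream": "pagerduty"}

    with open("pagerduty_trigger.json") as f:  # a previously captured webhook body
        payload = json.load(f)

    resp = requests.post(url, params=params, json=payload)
    print(resp.status_code, resp.text)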
<div id="perforce" class="integration-instructions">

View File

@@ -86,8 +86,6 @@ git checkout $branch
EOF
ssh "${SSH_OPTS[@]}" "$server" -t -i "$amazon_key_file" -lroot <<EOF
cp -a /root/zulip/puppet/zulip/files/puppet.conf /etc/puppet/
userdel admin
passwd -d root
mkdir /etc/zulip

View File

@@ -5,6 +5,8 @@ DROP DATABASE IF EXISTS zulip;
CREATE DATABASE zulip TEMPLATE zulip_base;
EOF
sh "$(dirname "$0")/../scripts/setup/flush-memcached"
python manage.py migrate --noinput
python manage.py createcachetable third_party_api_results
python manage.py populate_db -n100 --threads=1

View File

@@ -3,6 +3,7 @@ import os
import shutil
import subprocess
import json
import sys
from PIL import Image, ImageDraw, ImageFont
@@ -57,6 +58,7 @@ emoji_map = json.load(open('emoji_map.json'))
emoji_map['blue_car'] = emoji_map['red_car']
emoji_map['red_car'] = emoji_map['oncoming_automobile']
failed = False
for name, code_point in emoji_map.items():
try:
color_font(name, code_point)
@@ -66,6 +68,11 @@ for name, code_point in emoji_map.items():
except Exception as e:
print e
print 'Missing {}, {}'.format(name, code_point)
failed = True
continue
os.symlink('unicode/{}.png'.format(code_point), 'out/{}.png'.format(name))
if failed:
print "Errors dumping emoji!"
sys.exit(1)

View File

@@ -15,6 +15,7 @@ if [ "$template_grep_error_code" == "0" ]; then
DROP DATABASE IF EXISTS zulip_test;
CREATE DATABASE zulip_test TEMPLATE zulip_test_template;
EOF
sh "$(dirname "$0")/../scripts/setup/flush-memcached"
exit 0
fi
fi
@@ -25,6 +26,7 @@ psql -h localhost postgres zulip_test <<EOF
DROP DATABASE IF EXISTS zulip_test;
CREATE DATABASE zulip_test TEMPLATE zulip_test_base;
EOF
sh "$(dirname "$0")/../scripts/setup/flush-memcached"
python manage.py migrate --noinput --settings=zproject.test_settings
migration_status "zerver/fixtures/migration-status"

View File

@@ -67,5 +67,7 @@ psql -h localhost postgres "$USERNAME" <<EOF
CREATE DATABASE $DBNAME TEMPLATE $DBNAME_BASE;
EOF
sh "$(dirname "$0")/../scripts/setup/flush-memcached"
echo "Database created"

View File

@@ -1,3 +1,3 @@
#!/bin/bash -xe
"$(dirname "$0")/postgres-init-db" zulip_test "$("$(dirname "$0")/../bin/get-django-setting" LOCAL_DATABASE_PASSWORD)" zulip_test zulip,public
"$(dirname "$0")/postgres-init-dev-db" zulip_test "$("$(dirname "$0")/../bin/get-django-setting" LOCAL_DATABASE_PASSWORD)" zulip_test zulip,public

View File

@@ -17,4 +17,5 @@ queues = get_active_worker_queues()
args = sys.argv[1:]
for queue in queues:
subprocess.Popen(['python', 'manage.py', 'process_queue'] + args + [queue])
subprocess.Popen(['python', 'manage.py', 'process_queue'] + args + [queue],
stderr=subprocess.STDOUT)
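For reference, stderr=subprocess.STDOUT merges a child's stderr into whatever its stdout is connected to. A tiny standalone sketch of the effect (using a pipe so the merged stream is visible):

    import subprocess

    p = subprocess.Popen(
        ["python", "-c", "import sys; print('out'); sys.stderr.write('err\\n')"],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    print(p.communicate()[0])  # both 'out' and 'err' arrive on one stream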

View File

@@ -19,6 +19,9 @@ def patched_finish(self):
return orig_finish(self)
Request.finish = patched_finish
if 'posix' in os.name and os.geteuid() == 0:
raise RuntimeError("run-dev.py should not be run as root.")
parser = optparse.OptionParser(r"""
Starts the app listening on localhost, for local development.

View File

@@ -12,16 +12,19 @@ from zerver.lib.actions import do_change_password, is_inactive
from zproject.backends import password_auth_enabled
import DNS
SIGNUP_STRING = u'Use a different e-mail address, or contact %s with questions.'%(settings.ZULIP_ADMINISTRATOR,)
SIGNUP_STRING = u'Your e-mail does not match any existing open organization. ' + \
u'Use a different e-mail address, or contact %s with questions.' % (settings.ZULIP_ADMINISTRATOR,)
if settings.ZULIP_COM:
SIGNUP_STRING = u'Your e-mail does not match any existing organization. <br />' + \
u"The zulip.com service is not taking new customer teams. <br /> " + \
u"<a href=\"https://blogs.dropbox.com/tech/2015/09/open-sourcing-zulip-a-dropbox-hack-week-project/\">Zulip is open source</a>, so you can install your own Zulip server " + \
u"by following the instructions on <a href=\"https://www.zulip.org\">www.zulip.org</a>!"
def has_valid_realm(value):
# Checks if there is a realm without invite_required
# matching the domain of the input e-mail.
try:
realm = Realm.objects.get(domain=resolve_email_to_domain(value))
except Realm.DoesNotExist:
return False
return not realm.invite_required
realm = get_realm(resolve_email_to_domain(value))
return realm is not None and not realm.invite_required
def not_mit_mailing_list(value):
# I don't want ec-discuss signed up for Zulip
@@ -68,9 +71,7 @@ class HomepageForm(forms.Form):
data = self.cleaned_data['email']
if completely_open(self.domain) or has_valid_realm(data) and not_mit_mailing_list(data):
return data
raise ValidationError(mark_safe(
u'Your e-mail does not match any existing open organization. ' \
+ SIGNUP_STRING))
raise ValidationError(mark_safe(SIGNUP_STRING))
class LoggingSetPasswordForm(SetPasswordForm):
def save(self, commit=True):
@@ -93,8 +94,9 @@ class OurAuthenticationForm(AuthenticationForm):
if user_profile.realm.deactivated:
error_msg = u"""Sorry for the trouble, but %s has been deactivated.
Please contact support@zulip.com to reactivate this group.""" % (
user_profile.realm.name,)
Please contact %s to reactivate this group.""" % (
user_profile.realm.name,
settings.ZULIP_ADMINISTRATOR)
raise ValidationError(mark_safe(error_msg))
return email

View File

@@ -106,6 +106,116 @@ def bot_owner_userids(user_profile):
else:
return active_user_ids(user_profile.realm)
def realm_user_count(realm):
user_dicts = get_active_user_dicts_in_realm(realm)
return len([user_dict for user_dict in user_dicts if not user_dict["is_bot"]])
def send_signup_message(sender, signups_stream, user_profile,
internal=False, realm=None):
if internal:
# When this is done using manage.py vs. the web interface
internal_blurb = " **INTERNAL SIGNUP** "
else:
internal_blurb = " "
user_count = realm_user_count(user_profile.realm)
# Send notification to realm notifications stream if it exists
# Don't send notification for the first user in a realm
if user_profile.realm.notifications_stream is not None and user_count > 1:
internal_send_message(sender, "stream",
user_profile.realm.notifications_stream.name,
"New users", "%s just signed up for Zulip. Say hello!" % \
(user_profile.full_name,),
realm=user_profile.realm)
internal_send_message(sender,
"stream", signups_stream, user_profile.realm.domain,
"%s <`%s`> just signed up for Zulip!%s(total: **%i**)" % (
user_profile.full_name,
user_profile.email,
internal_blurb,
user_count,
)
)
def notify_new_user(user_profile, internal=False):
if settings.NEW_USER_BOT is not None:
send_signup_message(settings.NEW_USER_BOT, "signups", user_profile, internal)
statsd.gauge("users.signups.%s" % (user_profile.realm.domain.replace('.', '_')), 1, delta=True)
# Does the processing for a new user account:
# * Subscribes to default/invitation streams
# * Fills in some recent historical messages
# * Notifies other users in realm and Zulip about the signup
# * Deactivates PreregistrationUser objects
# * subscribe the user to newsletter if newsletter_data is specified
def process_new_human_user(user_profile, prereg_user=None, newsletter_data=None):
mit_beta_user = user_profile.realm.domain == "mit.edu"
try:
streams = prereg_user.streams.all()
except AttributeError:
# This will catch both the case where prereg_user is None and where it
# is a MitUser.
streams = []
# If the user's invitation didn't explicitly list some streams, we
# add the default streams
if len(streams) == 0:
streams = get_default_subs(user_profile)
bulk_add_subscriptions(streams, [user_profile])
# Give you the last 100 messages on your streams, so you have
# something to look at in your home view once you finish the
# tutorial.
one_week_ago = now() - datetime.timedelta(weeks=1)
recipients = Recipient.objects.filter(type=Recipient.STREAM,
type_id__in=[stream.id for stream in streams])
messages = Message.objects.filter(recipient_id__in=recipients, pub_date__gt=one_week_ago).order_by("-id")[0:100]
if len(messages) > 0:
ums_to_create = [UserMessage(user_profile=user_profile, message=message,
flags=UserMessage.flags.read)
for message in messages]
UserMessage.objects.bulk_create(ums_to_create)
# mit_beta_users don't have a referred_by field
if not mit_beta_user and prereg_user is not None and prereg_user.referred_by is not None \
and settings.NOTIFICATION_BOT is not None:
# This is a cross-realm private message.
internal_send_message(settings.NOTIFICATION_BOT,
"private", prereg_user.referred_by.email, user_profile.realm.domain,
"%s <`%s`> accepted your invitation to join Zulip!" % (
user_profile.full_name,
user_profile.email,
)
)
# Mark any other PreregistrationUsers that are STATUS_ACTIVE as
# inactive so we can keep track of the PreregistrationUser we
# actually used for analytics
if prereg_user is not None:
PreregistrationUser.objects.filter(email__iexact=user_profile.email).exclude(
id=prereg_user.id).update(status=0)
else:
PreregistrationUser.objects.filter(email__iexact=user_profile.email).update(status=0)
notify_new_user(user_profile)
if newsletter_data is not None:
# If the user was created automatically via the API, we may
# not want to register them for the newsletter
queue_json_publish(
"signups",
{
'EMAIL': user_profile.email,
'merge_vars': {
'NAME': user_profile.full_name,
'REALM': user_profile.realm.domain,
'OPTIN_IP': newsletter_data["IP"],
'OPTIN_TIME': datetime.datetime.isoformat(datetime.datetime.now()),
},
},
lambda event: None)
def notify_created_user(user_profile):
event = dict(type="realm_user", op="add",
person=dict(email=user_profile.email,
@@ -140,7 +250,8 @@ def do_create_user(email, password, realm, full_name, short_name,
active=True, bot=False, bot_owner=None,
avatar_source=UserProfile.AVATAR_FROM_GRAVATAR,
default_sending_stream=None, default_events_register_stream=None,
default_all_public_streams=None):
default_all_public_streams=None, prereg_user=None,
newsletter_data=None):
event = {'type': 'user_created',
'timestamp': time.time(),
'full_name': full_name,
@@ -163,6 +274,9 @@ def do_create_user(email, password, realm, full_name, short_name,
notify_created_user(user_profile)
if bot:
notify_created_bot(user_profile)
else:
process_new_human_user(user_profile, prereg_user=prereg_user,
newsletter_data=newsletter_data)
return user_profile
def user_sessions(user_profile):
@@ -203,10 +317,6 @@ def do_set_realm_name(realm, name):
send_event(event, active_user_ids(realm))
return {}
def get_realm_name(domain):
realm = Realm.objects.get(domain=domain)
return realm.name
def do_set_realm_restricted_to_domain(realm, restricted):
realm.restricted_to_domain = restricted
realm.save(update_fields=['restricted_to_domain'])
@@ -600,7 +710,7 @@ def recipient_for_emails(emails, not_forged_mirror_message,
# and one of the users is a zuliper
if len(realm_domains) == 2:
# I'm assuming that cross-realm PMs with the "admin realm" are rare, and therefore can be slower
admin_realm = Realm.objects.get(domain=settings.ADMIN_DOMAIN)
admin_realm = get_realm(settings.ADMIN_DOMAIN)
admin_realm_admin_emails = {u.email for u in admin_realm.get_admin_users()}
# We allow settings.CROSS_REALM_BOT_EMAILS for the hardcoded emails for the feedback and notification bots
if not (normalized_emails & admin_realm_admin_emails or normalized_emails & settings.CROSS_REALM_BOT_EMAILS):
@@ -2182,25 +2292,31 @@ def encode_email_address_helper(name, email_token):
encoded_token = "%s+%s" % (encoded_name, email_token)
return settings.EMAIL_GATEWAY_PATTERN % (encoded_token,)
def decode_email_address(email):
# Perform the reverse of encode_email_address. Returns a tuple of (streamname, email_token)
def get_email_gateway_message_string_from_address(address):
pattern_parts = [re.escape(part) for part in settings.EMAIL_GATEWAY_PATTERN.split('%s')]
if settings.ZULIP_COM:
# Accept mails delivered to any Zulip server
pattern_parts[-1] = r'@[\w-]*\.zulip\.net'
match_email_re = re.compile("(.*?)".join(pattern_parts))
match = match_email_re.match(email)
match = match_email_re.match(address)
if not match:
return None
full_address = match.group(1)
if '.' in full_address:
msg_string = match.group(1)
return msg_string
def decode_email_address(email):
# Perform the reverse of encode_email_address. Returns a tuple of (streamname, email_token)
msg_string = get_email_gateway_message_string_from_address(email)
if '.' in msg_string:
# Workaround for Google Groups and other programs that don't accept emails
# that have + signs in them (see Trac #2102)
encoded_stream_name, token = full_address.split('.')
encoded_stream_name, token = msg_string.split('.')
else:
encoded_stream_name, token = full_address.split('+')
encoded_stream_name, token = msg_string.split('+')
stream_name = re.sub("%\d{4}", lambda x: unichr(int(x.group(0)[1:])), encoded_stream_name)
return stream_name, token
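A self-contained sketch of the parsing step above, with an assumed EMAIL_GATEWAY_PATTERN (real deployments configure their own):

    import re

    EMAIL_GATEWAY_PATTERN = "%s@streams.example.com"  # assumed for illustration

    def get_message_string(address):
        # Escape the literal parts of the pattern and capture what stood in for %s.
        pattern_parts = [re.escape(part) for part in EMAIL_GATEWAY_PATTERN.split("%s")]
        match = re.compile("(.*?)".join(pattern_parts)).match(address)
        return match.group(1) if match else None

    print(get_message_string("social+abc123@streams.example.com"))  # social+abc123
    stream_name, token = get_message_string("social+abc123@streams.example.com").split("+")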

View File

@@ -97,18 +97,17 @@ def fetch_tweet_data(tweet_id):
from . import testing_mocks
res = testing_mocks.twitter(tweet_id)
else:
if settings.TWITTER_CONSUMER_KEY == '' or \
settings.TWITTER_CONSUMER_SECRET == '' or \
settings.TWITTER_ACCESS_TOKEN_KEY == '' or \
settings.TWITTER_ACCESS_TOKEN_SECRET == '':
creds = {
'consumer_key': settings.TWITTER_CONSUMER_KEY,
'consumer_secret': settings.TWITTER_CONSUMER_SECRET,
'access_token_key': settings.TWITTER_ACCESS_TOKEN_KEY,
'access_token_secret': settings.TWITTER_ACCESS_TOKEN_SECRET,
}
if not all(creds.values()):
return None
api = twitter.Api(consumer_key = settings.TWITTER_CONSUMER_KEY,
consumer_secret = settings.TWITTER_CONSUMER_SECRET,
access_token_key = settings.TWITTER_ACCESS_TOKEN_KEY,
access_token_secret = settings.TWITTER_ACCESS_TOKEN_SECRET)
try:
api = twitter.Api(**creds)
# Sometimes Twitter hangs on responses. Timing out here
# will cause the Tweet to go through as-is with no inline
# preview, rather than having the message be rejected
@@ -117,6 +116,10 @@ def fetch_tweet_data(tweet_id):
tweet = timeout(3, api.GetStatus, tweet_id)
res = tweet.AsDict()
res['media'] = tweet.media # AsDict does not include media
except AttributeError:
logging.error('Unable to load twitter api, you may have the wrong '
'library installed, see https://github.com/zulip/zulip/issues/86')
return None
except TimeoutExpired as e:
# We'd like to try again later and not cache the bad result,
# so we need to re-raise the exception (just as though
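The timeout() helper itself is not part of this hunk; a minimal sketch of such a helper, under the assumption that it runs the call in a daemon thread and raises on expiry (the real implementation may differ):

    import threading

    class TimeoutExpired(Exception):
        pass

    def timeout(seconds, func, *args, **kwargs):
        result = {}

        def target():
            try:
                result["value"] = func(*args, **kwargs)
            except Exception as e:
                result["error"] = e

        thread = threading.Thread(target=target)
        thread.daemon = True  # don't block interpreter exit on a hung call
        thread.start()
        thread.join(seconds)
        if thread.is_alive():
            raise TimeoutExpired("call did not finish within %s seconds" % (seconds,))
        if "error" in result:
            raise result["error"]
        return result["value"]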

View File

@@ -7,7 +7,8 @@ from email.header import decode_header
from django.conf import settings
from zerver.lib.actions import decode_email_address, internal_send_message
from zerver.lib.actions import decode_email_address, get_email_gateway_message_string_from_address, \
internal_send_message
from zerver.lib.notifications import convert_html_to_markdown
from zerver.lib.redis_utils import get_redis_client
from zerver.lib.upload import upload_message_image
@@ -56,16 +57,18 @@ def missed_message_redis_key(token):
def is_missed_message_address(address):
local_part = address.split('@')[0]
return local_part.startswith('mm') and len(local_part) == 34
msg_string = get_email_gateway_message_string_from_address(address)
return msg_string.startswith('mm') and len(msg_string) == 34
def get_missed_message_token_from_address(address):
local_part = address.split('@')[0]
if not address.startswith('mm') and len(address) != 34:
raise ZulipEmailForwardError('Could not parse missed message address')
return local_part[2:]
msg_string = get_email_gateway_message_string_from_address(address)
if not msg_string.startswith('mm') and len(msg_string) != 34:
raise ZulipEmailForwardError('Could not parse missed message address')
# strip off the 'mm' before returning the redis key
return msg_string[2:]
def create_missed_message_address(user_profile, message):
if message.recipient.type == Recipient.PERSONAL:

View File

@@ -8,10 +8,12 @@ from zerver.worker import queue_processors
from zerver.lib.actions import (
check_send_message, create_stream_if_needed, do_add_subscription,
get_display_recipient, get_user_profile_by_email,
get_display_recipient,
)
from zerver.models import (
get_realm,
get_user_profile_by_email,
resolve_email_to_domain,
Client,
Message,
@@ -271,7 +273,7 @@ class AuthedTestCase(TestCase):
return data['messages']
def users_subscribed_to_stream(self, stream_name, realm_domain):
realm = Realm.objects.get(domain=realm_domain)
realm = get_realm(realm_domain)
stream = Stream.objects.get(name=stream_name, realm=realm)
recipient = Recipient.objects.get(type_id=stream.id, type=Recipient.STREAM)
subscriptions = Subscription.objects.filter(recipient=recipient, active=True)
@@ -321,7 +323,7 @@ class AuthedTestCase(TestCase):
# Subscribe to a stream directly
def subscribe_to_stream(self, email, stream_name, realm=None):
realm = Realm.objects.get(domain=resolve_email_to_domain(email))
realm = get_realm(resolve_email_to_domain(email))
stream, _ = create_stream_if_needed(realm, stream_name)
user_profile = get_user_profile_by_email(email)
do_add_subscription(user_profile, stream, no_log=True)

View File

@@ -5,7 +5,7 @@ from optparse import make_option
from django.core.management.base import BaseCommand
from zerver.lib.actions import create_stream_if_needed, do_add_subscription
from zerver.models import Realm, UserProfile, get_user_profile_by_email
from zerver.models import UserProfile, get_realm, get_user_profile_by_email
class Command(BaseCommand):
help = """Add some or all users in a realm to a set of streams."""
@@ -37,7 +37,7 @@ class Command(BaseCommand):
exit(1)
stream_names = set([stream.strip() for stream in options["streams"].split(",")])
realm = Realm.objects.get(domain=options["domain"])
realm = get_realm(options["domain"])
if options["all_users"]:
user_profiles = UserProfile.objects.filter(realm=realm)

View File

@@ -8,7 +8,7 @@ from django.core.exceptions import ValidationError
from django.db.utils import IntegrityError
from django.core import validators
from zerver.models import Realm, email_to_username
from zerver.models import Realm, get_realm, email_to_username
from zerver.lib.actions import do_create_user
from zerver.views import notify_new_user
from zerver.lib.initial_password import initial_password
@@ -46,7 +46,7 @@ Terms of Service by passing --this-user-has-accepted-the-tos.""")
raise CommandError("""Please specify a realm by passing --domain.""")
try:
realm = Realm.objects.get(domain=options["domain"])
realm = get_realm(options["domain"])
except Realm.DoesNotExist:
raise CommandError("Realm does not exist.")

View File

@@ -2,7 +2,7 @@ from __future__ import absolute_import
from optparse import make_option
from django.core.management.base import BaseCommand
from zerver.models import Message, Realm, Stream, Recipient
from zerver.models import get_realm, Message, Realm, Stream, Recipient
import datetime
import time
@@ -23,7 +23,7 @@ class Command(BaseCommand):
)
def handle(self, *args, **options):
realm = Realm.objects.get(domain=options["domain"])
realm = get_realm(options["domain"])
streams = Stream.objects.filter(realm=realm, invite_only=False)
recipients = Recipient.objects.filter(
type=Recipient.STREAM, type_id__in=[stream.id for stream in streams])

View File

@@ -6,7 +6,7 @@ from django.core.management.base import BaseCommand
from zerver.lib.actions import delete_all_user_sessions, \
delete_realm_user_sessions
from zerver.models import Realm
from zerver.models import get_realm
class Command(BaseCommand):
help = "Log out all users."
@@ -21,7 +21,7 @@ class Command(BaseCommand):
def handle(self, *args, **options):
if options["realm"]:
realm = Realm.objects.get(domain=options["realm"])
realm = get_realm(options["realm"])
delete_realm_user_sessions(realm)
else:
delete_all_user_sessions()

View File

@@ -9,7 +9,7 @@ from django.conf import settings
from django_auth_ldap.backend import LDAPBackend, _LDAPUser
# Run this on a cronjob to pick up on name changes.
# Quick tool to test whether you're correctly authenticating to LDAP
def query_ldap(**options):
email = options['email']
for backend in get_backends():

View File

@@ -23,7 +23,7 @@ class Command(BaseCommand):
help="alias to add or remove")
def handle(self, *args, **options):
realm = Realm.objects.get(domain=options["domain"])
realm = get_realm(options["domain"])
if options["op"] == "show":
print "Aliases for %s:" % (realm.domain,)
for alias in realm_aliases(realm):

View File

@@ -1,7 +1,7 @@
from __future__ import absolute_import
from django.core.management.base import BaseCommand
from zerver.models import Realm
from zerver.models import Realm, get_realm
from zerver.lib.actions import do_add_realm_emoji, do_remove_realm_emoji
import sys
@@ -30,7 +30,7 @@ Example: python manage.py realm_emoji --realm=zulip.com --op=show
help="URL of image to display for the emoji")
def handle(self, *args, **options):
realm = Realm.objects.get(domain=options["domain"])
realm = get_realm(options["domain"])
if options["op"] == "show":
for name, url in realm.get_emoji().iteritems():
print name, url

View File

@@ -2,7 +2,7 @@ from __future__ import absolute_import
from optparse import make_option
from django.core.management.base import BaseCommand
from zerver.models import RealmFilter, all_realm_filters, Realm
from zerver.models import RealmFilter, all_realm_filters, get_realm
from zerver.lib.actions import do_add_realm_filter, do_remove_realm_filter
import sys
@@ -37,7 +37,7 @@ Example: python manage.py realm_filters --realm=zulip.com --op=show
help="format string to substitute")
def handle(self, *args, **options):
realm = Realm.objects.get(domain=options["domain"])
realm = get_realm(options["domain"])
if options["op"] == "show":
print "%s: %s" % (realm.domain, all_realm_filters().get(realm.domain, ""))
sys.exit(0)

View File

@@ -5,7 +5,7 @@ from optparse import make_option
from django.core.management.base import BaseCommand
from zerver.lib.actions import do_remove_subscription
from zerver.models import Realm, UserProfile, get_stream, \
from zerver.models import Realm, UserProfile, get_realm, get_stream, \
get_user_profile_by_email
class Command(BaseCommand):
@@ -37,7 +37,7 @@ class Command(BaseCommand):
self.print_help("python manage.py", "remove_users_from_stream")
exit(1)
realm = Realm.objects.get(domain=options["domain"])
realm = get_realm(options["domain"])
stream_name = options["stream"].strip()
stream = get_stream(stream_name, realm)

View File

@@ -2,7 +2,7 @@ from __future__ import absolute_import
from django.core.management.base import BaseCommand
from zerver.models import Realm
from zerver.models import get_realm
from zerver.lib.actions import set_default_streams
from optparse import make_option
@@ -41,5 +41,5 @@ set of streams (which can be empty, with `--streams=`)."
exit(1)
stream_names = [stream.strip() for stream in options["streams"].split(",")]
realm = Realm.objects.get(domain=options["domain"])
realm = get_realm(options["domain"])
set_default_streams(realm, stream_names)

View File

@@ -1,7 +1,7 @@
from __future__ import absolute_import
from django.core.management.base import BaseCommand
from zerver.models import Realm
from zerver.models import get_realm, Realm
import sys
class Command(BaseCommand):
@@ -15,7 +15,7 @@ class Command(BaseCommand):
realm = options['realm']
try:
realm = Realm.objects.get(domain=realm)
realm = get_realm(realm)
except Realm.DoesNotExist:
print 'There is no realm called %s.' % (realm,)
sys.exit(1)

View File

@@ -5,7 +5,7 @@ from optparse import make_option
from django.core.management.base import BaseCommand
from zerver.lib.actions import do_change_enable_digest_emails
from zerver.models import Realm, UserProfile, get_user_profile_by_email
from zerver.models import Realm, UserProfile, get_realm, get_user_profile_by_email
class Command(BaseCommand):
help = """Turn off digests for a domain or specified set of email addresses."""
@@ -27,7 +27,7 @@ class Command(BaseCommand):
exit(1)
if options["domain"]:
realm = Realm.objects.get(domain=options["domain"])
realm = get_realm(options["domain"])
user_profiles = UserProfile.objects.filter(realm=realm)
else:
emails = set([email.strip() for email in options["users"].split(",")])

View File

@@ -91,6 +91,11 @@ def completely_open(domain):
def get_unique_open_realm():
# We only return a realm if there is a unique realm and it is completely open.
realms = Realm.objects.filter(deactivated=False)
if settings.VOYAGER:
# On production installations, the "zulip.com" realm is an
# empty realm just used for system bots, so don't include it
# in this accounting.
realms = realms.exclude(domain="zulip.com")
if len(realms) != 1:
return None
realm = realms[0]
@@ -294,8 +299,8 @@ class UserProfile(AbstractBaseUser, PermissionsMixin):
### Notifications settings. ###
# Stream notifications.
enable_stream_desktop_notifications = models.BooleanField(default=True)
enable_stream_sounds = models.BooleanField(default=True)
enable_stream_desktop_notifications = models.BooleanField(default=False)
enable_stream_sounds = models.BooleanField(default=False)
# PM + @-mention notifications.
enable_desktop_notifications = models.BooleanField(default=True)

View File

@@ -294,6 +294,10 @@ class BugdownTest(TestCase):
self.assertEqual(converted, '<p>%s</p>\n%s' % (make_link('http://twitter.com/wdaher/status/287977969287315459'),
make_inline_twitter_preview('http://twitter.com/wdaher/status/287977969287315459', media_tweet_html, """<div class="twitter-image"><a href="http://t.co/xo7pAhK6n3" target="_blank" title="http://t.co/xo7pAhK6n3"><img src="https://pbs.twimg.com/media/BdoEjD4IEAIq86Z.jpg:small"></a></div>""")))
def test_fetch_tweet_data_settings_validation(self):
with self.settings(TEST_SUITE=False, TWITTER_CONSUMER_KEY=None):
self.assertIs(None, bugdown.fetch_tweet_data('287977969287315459'))
def test_realm_emoji(self):
def emoji_img(name, url):
return '<img alt="%s" class="emoji" src="%s" title="%s">' % (name, url, name)

View File

@@ -760,7 +760,7 @@ class PagerDutyHookTests(AuthedTestCase):
self.assertEqual(msg.subject, 'incident 3')
self.assertEqual(
msg.content,
':unhealthy_heart: Incident [3](https://zulip-test.pagerduty.com/incidents/P140S4Y) triggered by [Test service](https://zulip-test.pagerduty.com/services/PIL5CUQ) and assigned to [armooo@](https://zulip-test.pagerduty.com/users/POBCFRJ)\n\n>foo'
':imp: Incident [3](https://zulip-test.pagerduty.com/incidents/P140S4Y) triggered by [Test service](https://zulip-test.pagerduty.com/services/PIL5CUQ) and assigned to [armooo@](https://zulip-test.pagerduty.com/users/POBCFRJ)\n\n>foo'
)
def test_unacknowledge(self):
@@ -769,7 +769,7 @@ class PagerDutyHookTests(AuthedTestCase):
self.assertEqual(msg.subject, 'incident 3')
self.assertEqual(
msg.content,
':unhealthy_heart: Incident [3](https://zulip-test.pagerduty.com/incidents/P140S4Y) unacknowledged by [Test service](https://zulip-test.pagerduty.com/services/PIL5CUQ) and assigned to [armooo@](https://zulip-test.pagerduty.com/users/POBCFRJ)\n\n>foo'
':imp: Incident [3](https://zulip-test.pagerduty.com/incidents/P140S4Y) unacknowledged by [Test service](https://zulip-test.pagerduty.com/services/PIL5CUQ) and assigned to [armooo@](https://zulip-test.pagerduty.com/users/POBCFRJ)\n\n>foo'
)
def test_resolved(self):
@@ -778,7 +778,7 @@ class PagerDutyHookTests(AuthedTestCase):
self.assertEqual(msg.subject, 'incident 1')
self.assertEqual(
msg.content,
':healthy_heart: Incident [1](https://zulip-test.pagerduty.com/incidents/PO1XIJ5) resolved by [armooo@](https://zulip-test.pagerduty.com/users/POBCFRJ)\n\n>It is on fire'
':grinning: Incident [1](https://zulip-test.pagerduty.com/incidents/PO1XIJ5) resolved by [armooo@](https://zulip-test.pagerduty.com/users/POBCFRJ)\n\n>It is on fire'
)
def test_auto_resolved(self):
@@ -787,7 +787,7 @@ class PagerDutyHookTests(AuthedTestCase):
self.assertEqual(msg.subject, 'incident 2')
self.assertEqual(
msg.content,
':healthy_heart: Incident [2](https://zulip-test.pagerduty.com/incidents/PX7K9J2) resolved\n\n>new'
':grinning: Incident [2](https://zulip-test.pagerduty.com/incidents/PX7K9J2) resolved\n\n>new'
)
def test_acknowledge(self):
@@ -796,7 +796,7 @@ class PagerDutyHookTests(AuthedTestCase):
self.assertEqual(msg.subject, 'incident 1')
self.assertEqual(
msg.content,
':average_heart: Incident [1](https://zulip-test.pagerduty.com/incidents/PO1XIJ5) acknowledged by [armooo@](https://zulip-test.pagerduty.com/users/POBCFRJ)\n\n>It is on fire'
':no_good: Incident [1](https://zulip-test.pagerduty.com/incidents/PO1XIJ5) acknowledged by [armooo@](https://zulip-test.pagerduty.com/users/POBCFRJ)\n\n>It is on fire'
)
def test_no_subject(self):
@@ -805,7 +805,7 @@ class PagerDutyHookTests(AuthedTestCase):
self.assertEqual(msg.subject, 'incident 48219')
self.assertEqual(
msg.content,
u':healthy_heart: Incident [48219](https://dropbox.pagerduty.com/incidents/PJKGZF9) resolved\n\n>mp_error_block_down_critical\u2119\u01b4'
u':grinning: Incident [48219](https://dropbox.pagerduty.com/incidents/PJKGZF9) resolved\n\n>mp_error_block_down_critical\u2119\u01b4'
)
def test_explicit_subject(self):
@@ -814,7 +814,7 @@ class PagerDutyHookTests(AuthedTestCase):
self.assertEqual(msg.subject, 'my cool topic')
self.assertEqual(
msg.content,
':average_heart: Incident [1](https://zulip-test.pagerduty.com/incidents/PO1XIJ5) acknowledged by [armooo@](https://zulip-test.pagerduty.com/users/POBCFRJ)\n\n>It is on fire'
':no_good: Incident [1](https://zulip-test.pagerduty.com/incidents/PO1XIJ5) acknowledged by [armooo@](https://zulip-test.pagerduty.com/users/POBCFRJ)\n\n>It is on fire'
)
def test_bad_message(self):

View File

@@ -488,7 +488,7 @@ class StreamMessagesTest(AuthedTestCase):
# Subscribe everyone to a stream with non-ASCII characters.
non_ascii_stream_name = u"hümbüǵ"
realm = Realm.objects.get(domain="zulip.com")
realm = get_realm("zulip.com")
stream, _ = create_stream_if_needed(realm, non_ascii_stream_name)
for user_profile in UserProfile.objects.filter(realm=realm):
do_add_subscription(user_profile, stream, no_log=True)
@@ -499,7 +499,7 @@ class StreamMessagesTest(AuthedTestCase):
class MessageDictTest(AuthedTestCase):
@slow(1.6, 'builds lots of messages')
def test_bulk_message_fetching(self):
realm = Realm.objects.get(domain="zulip.com")
realm = get_realm("zulip.com")
sender = get_user_profile_by_email('othello@zulip.com')
receiver = get_user_profile_by_email('hamlet@zulip.com')
pm_recipient = Recipient.objects.get(type_id=receiver.id, type=Recipient.PERSONAL)
@@ -838,7 +838,7 @@ class GetOldMessagesTest(AuthedTestCase):
# We need to subscribe to a stream and then send a message to
# it to ensure that we actually have a stream message in this
# narrow view.
realm = Realm.objects.get(domain="zulip.com")
realm = get_realm("zulip.com")
stream, _ = create_stream_if_needed(realm, "Scotland")
do_add_subscription(get_user_profile_by_email("hamlet@zulip.com"),
stream, no_log=True)
@@ -866,7 +866,7 @@ class GetOldMessagesTest(AuthedTestCase):
# We need to subscribe to a stream and then send a message to
# it to ensure that we actually have a stream message in this
# narrow view.
realm = Realm.objects.get(domain="mit.edu")
realm = get_realm("mit.edu")
lambda_stream, _ = create_stream_if_needed(realm, u"\u03bb-stream")
do_add_subscription(get_user_profile_by_email("starnine@mit.edu"),
lambda_stream, no_log=True)
@@ -901,7 +901,7 @@ class GetOldMessagesTest(AuthedTestCase):
# We need to subscribe to a stream and then send a message to
# it to ensure that we actually have a stream message in this
# narrow view.
realm = Realm.objects.get(domain="mit.edu")
realm = get_realm("mit.edu")
stream, _ = create_stream_if_needed(realm, "Scotland")
do_add_subscription(get_user_profile_by_email("starnine@mit.edu"),
stream, no_log=True)
@@ -1439,7 +1439,7 @@ class CheckMessageTest(AuthedTestCase):
sender = get_user_profile_by_email('othello@zulip.com')
client, _ = Client.objects.get_or_create(name="test suite")
stream_name = 'integration'
stream, _ = create_stream_if_needed(Realm.objects.get(domain="zulip.com"), stream_name)
stream, _ = create_stream_if_needed(get_realm("zulip.com"), stream_name)
message_type_name = 'stream'
message_to = None
message_to = [stream_name]
@@ -1468,7 +1468,7 @@ class CheckMessageTest(AuthedTestCase):
sender = bot
client, _ = Client.objects.get_or_create(name="test suite")
stream_name = 'integration'
stream, _ = create_stream_if_needed(Realm.objects.get(domain="zulip.com"), stream_name)
stream, _ = create_stream_if_needed(get_realm("zulip.com"), stream_name)
message_type_name = 'stream'
message_to = None
message_to = [stream_name]

View File

@@ -5,7 +5,7 @@ from django.test import TestCase
from zilencer.models import Deployment
from zerver.models import (
get_user_profile_by_email,
get_realm, get_user_profile_by_email,
PreregistrationUser, Realm, ScheduledJob, UserProfile,
)
@@ -83,6 +83,25 @@ class PublicURLTest(TestCase):
for status_code, url_set in post_urls.iteritems():
self.fetch("post", url_set, status_code)
def test_get_gcid_when_not_configured(self):
with self.settings(GOOGLE_CLIENT_ID=None):
resp = self.client.get("/api/v1/fetch_google_client_id")
self.assertEquals(400, resp.status_code,
msg="Expected 400, received %d for GET /api/v1/fetch_google_client_id" % resp.status_code,
)
data = ujson.loads(resp.content)
self.assertEqual('error', data['result'])
def test_get_gcid_when_configured(self):
with self.settings(GOOGLE_CLIENT_ID="ABCD"):
resp = self.client.get("/api/v1/fetch_google_client_id")
self.assertEquals(200, resp.status_code,
msg="Expected 200, received %d for GET /api/v1/fetch_google_client_id" % resp.status_code,
)
data = ujson.loads(resp.content)
self.assertEqual('success', data['result'])
self.assertEqual('ABCD', data['google_client_id'])
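The endpoint these tests exercise can also be checked by hand; a sketch with an assumed server URL:

    import requests

    resp = requests.get("https://zulip.example.com/api/v1/fetch_google_client_id")
    print(resp.status_code, resp.json())  # 400 + error unless GOOGLE_CLIENT_ID is set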
class LoginTest(AuthedTestCase):
"""
Logging in, registration, and logging out.
@@ -102,7 +121,7 @@ class LoginTest(AuthedTestCase):
self.assertIn("Please enter a correct email and password", result.content)
def test_register(self):
realm = Realm.objects.get(domain="zulip.com")
realm = get_realm("zulip.com")
streams = ["stream_%s" % i for i in xrange(40)]
for stream in streams:
create_stream_if_needed(realm, stream)
@@ -120,7 +139,7 @@ class LoginTest(AuthedTestCase):
If you try to register for a deactivated realm, you get a clear error
page.
"""
realm = Realm.objects.get(domain="zulip.com")
realm = get_realm("zulip.com")
realm.deactivated = True
realm.save(update_fields=["deactivated"])
@@ -134,7 +153,7 @@ class LoginTest(AuthedTestCase):
"""
If you try to log in to a deactivated realm, you get a clear error page.
"""
realm = Realm.objects.get(domain="zulip.com")
realm = get_realm("zulip.com")
realm.deactivated = True
realm.save(update_fields=["deactivated"])
@@ -367,7 +386,7 @@ so we didn't send them an invitation. We did send invitations to everyone else!"
In a realm with `restricted_to_domain = True`, you can't invite people
with a different domain from that of the realm or your e-mail address.
"""
zulip_realm = Realm.objects.get(domain="zulip.com")
zulip_realm = get_realm("zulip.com")
zulip_realm.restricted_to_domain = True
zulip_realm.save()
@@ -384,7 +403,7 @@ so we didn't send them an invitation. We did send invitations to everyone else!"
In a realm with `restricted_to_domain = False`, you can invite people
with a different domain from that of the realm or your e-mail address.
"""
zulip_realm = Realm.objects.get(domain="zulip.com")
zulip_realm = get_realm("zulip.com")
zulip_realm.restricted_to_domain = False
zulip_realm.save()
@@ -402,7 +421,7 @@ so we didn't send them an invitation. We did send invitations to everyone else!"
invitee = "alice-test@zulip.com"
stream_name = u"hümbüǵ"
realm = Realm.objects.get(domain="zulip.com")
realm = get_realm("zulip.com")
stream, _ = create_stream_if_needed(realm, stream_name)
# Make sure we're subscribed before inviting someone.

View File

@@ -417,7 +417,7 @@ class DefaultStreamTest(AuthedTestCase):
return set(stream_names)
def test_set_default_streams(self):
realm = Realm.objects.get(domain="zulip.com")
realm = get_realm("zulip.com")
stream_names = ['apple', 'banana', 'Carrot Cake']
expected_names = stream_names + ['zulip']
set_default_streams(realm, stream_names)
@@ -425,7 +425,7 @@ class DefaultStreamTest(AuthedTestCase):
self.assertEqual(stream_names, set(expected_names))
def test_add_and_remove_default_stream(self):
realm = Realm.objects.get(domain="zulip.com")
realm = get_realm("zulip.com")
orig_stream_names = self.get_default_stream_names(realm)
do_add_default_stream(realm, 'Added Stream')
new_stream_names = self.get_default_stream_names(realm)
@@ -882,7 +882,7 @@ class SubscriptionAPITest(AuthedTestCase):
def test_multi_user_subscription(self):
email1 = 'cordelia@zulip.com'
email2 = 'iago@zulip.com'
realm = Realm.objects.get(domain="zulip.com")
realm = get_realm("zulip.com")
streams_to_sub = ['multi_user_stream']
events = []
with tornado_redirected_to_list(events):
@@ -960,7 +960,7 @@ class SubscriptionAPITest(AuthedTestCase):
def test_bulk_subscribe_MIT(self):
realm = Realm.objects.get(domain="mit.edu")
realm = get_realm("mit.edu")
streams = ["stream_%s" % i for i in xrange(40)]
for stream in streams:
create_stream_if_needed(realm, stream)
@@ -979,7 +979,7 @@ class SubscriptionAPITest(AuthedTestCase):
def test_bulk_subscribe_many(self):
# Create a whole bunch of streams
realm = Realm.objects.get(domain="zulip.com")
realm = get_realm("zulip.com")
streams = ["stream_%s" % i for i in xrange(20)]
for stream in streams:
create_stream_if_needed(realm, stream)
@@ -1400,7 +1400,7 @@ class GetSubscribersTest(AuthedTestCase):
"""
gather_subscriptions returns correct results with only 3 queries
"""
realm = Realm.objects.get(domain="zulip.com")
realm = get_realm("zulip.com")
streams = ["stream_%s" % i for i in xrange(10)]
for stream in streams:
create_stream_if_needed(realm, stream)

View File

@@ -12,14 +12,15 @@ from zerver.lib.test_helpers import (
from zerver.models import UserProfile, Recipient, \
Realm, Client, UserActivity, \
get_user_profile_by_email, split_email_to_domain, get_realm, \
get_client, get_stream, Message
get_client, get_stream, Message, get_unique_open_realm, \
completely_open
from zerver.lib.avatar import get_avatar_url
from zerver.lib.initial_password import initial_password
from zerver.lib.actions import \
get_emails_from_user_ids, do_deactivate_user, do_reactivate_user, \
do_change_is_admin, extract_recipients, \
do_set_realm_name, get_realm_name, do_deactivate_realm, \
do_set_realm_name, do_deactivate_realm, \
do_add_subscription, do_remove_subscription, do_make_stream_private
from zerver.lib.alert_words import alert_words_in_realm, user_alert_words, \
add_user_alert_words, remove_user_alert_words
@@ -83,14 +84,14 @@ class RealmTest(AuthedTestCase):
# cache, and we start by populating the cache for Hamlet, and we end
# by checking the cache to ensure that the new value is there.
get_user_profile_by_email('hamlet@zulip.com')
realm = Realm.objects.get(domain='zulip.com')
realm = get_realm('zulip.com')
new_name = 'Zed You Elle Eye Pea'
do_set_realm_name(realm, new_name)
self.assertEqual(get_realm_name(realm.domain), new_name)
self.assertEqual(get_realm(realm.domain).name, new_name)
self.assert_user_profile_cache_gets_new_name('hamlet@zulip.com', new_name)
def test_do_set_realm_name_events(self):
realm = Realm.objects.get(domain='zulip.com')
realm = get_realm('zulip.com')
new_name = 'Puliz'
events = []
with tornado_redirected_to_list(events):
@@ -135,7 +136,7 @@ class RealmTest(AuthedTestCase):
# by checking the cache to ensure that his realm appears to be deactivated.
# You can make this test fail by disabling cache.flush_realm().
get_user_profile_by_email('hamlet@zulip.com')
realm = Realm.objects.get(domain='zulip.com')
realm = get_realm('zulip.com')
do_deactivate_realm(realm)
user = get_user_profile_by_email('hamlet@zulip.com')
self.assertTrue(user.realm.deactivated)
@@ -1426,3 +1427,18 @@ class TestMissedMessages(AuthedTestCase):
'Denmark > test Othello, the Moor of Venice 1 2 3 4 5 6 7 8 9 10 @**hamlet**',
normalize_string(mail.outbox[0].body),
)
class TestOpenRealms(AuthedTestCase):
def test_open_realm_logic(self):
mit_realm = get_realm("mit.edu")
self.assertEquals(get_unique_open_realm(), None)
mit_realm.restricted_to_domain = False
mit_realm.save()
self.assertTrue(completely_open(mit_realm.domain))
self.assertEquals(get_unique_open_realm(), None)
settings.VOYAGER = True
self.assertEquals(get_unique_open_realm(), mit_realm)
# Restore state
settings.VOYAGER = False
mit_realm.restricted_to_domain = True
mit_realm.save()

View File

@@ -175,7 +175,7 @@ function patchRequire(require, requireDirs) {
function bootstrap(global) {
"use strict";
var phantomArgs = require('system').args;
var system = require('system');
/**
* Hooks in default phantomjs error handler to print a hint when a possible
@@ -223,7 +223,7 @@ function bootstrap(global) {
// casper root path
if (!phantom.casperPath) {
try {
phantom.casperPath = phantom.args.map(function _map(i) {
phantom.casperPath = system.args.map(function _map(i) {
var match = i.match(/^--casper-path=(.*)/);
if (match) {
return fs.absolute(match[1]);
@@ -289,7 +289,7 @@ function bootstrap(global) {
global.require = patchRequire(global.require, [phantom.casperPath, fs.workingDirectory]);
// casper cli args
phantom.casperArgs = global.require('cli').parse(phantom.args);
phantom.casperArgs = global.require('cli').parse(system.args);
// loaded status
phantom.casperLoaded = true;

View File

@@ -33,7 +33,8 @@ from zerver.lib.actions import bulk_remove_subscriptions, do_change_password, \
create_stream_if_needed, gather_subscriptions, subscribed_to_stream, \
update_user_presence, bulk_add_subscriptions, do_events_register, \
get_status_dict, do_change_enable_offline_email_notifications, \
do_change_enable_digest_emails, do_set_realm_name, do_set_realm_restricted_to_domain, do_set_realm_invite_required, do_set_realm_invite_by_admins_only, internal_prep_message, \
do_change_enable_digest_emails, do_set_realm_name, do_set_realm_restricted_to_domain, \
do_set_realm_invite_required, do_set_realm_invite_by_admins_only, internal_prep_message, \
do_send_messages, get_default_subs, do_deactivate_user, do_reactivate_user, \
user_email_is_unique, do_invite_users, do_refer_friend, compute_mit_user_fullname, \
do_add_alert_words, do_remove_alert_words, do_set_alert_words, get_subscriber_emails, \
@@ -46,7 +47,8 @@ from zerver.lib.actions import bulk_remove_subscriptions, do_change_password, \
do_change_enable_stream_desktop_notifications, do_change_enable_stream_sounds, \
do_change_stream_description, do_get_streams, do_make_stream_private, \
do_regenerate_api_key, do_remove_default_stream, do_update_pointer, \
do_change_avatar_source, do_change_twenty_four_hour_time, do_change_left_side_userlist
do_change_avatar_source, do_change_twenty_four_hour_time, do_change_left_side_userlist, \
realm_user_count
from zerver.lib.create_user import random_api_key
from zerver.lib.push_notifications import num_push_devices_for_user
@@ -144,43 +146,6 @@ def list_to_streams(streams_raw, user_profile, autocreate=False, invite_only=Fal
return existing_streams, created_streams
def realm_user_count(realm):
user_dicts = get_active_user_dicts_in_realm(realm)
return len([user_dict for user_dict in user_dicts if not user_dict["is_bot"]])
def send_signup_message(sender, signups_stream, user_profile,
internal=False, realm=None):
if internal:
# When this is done using manage.py vs. the web interface
internal_blurb = " **INTERNAL SIGNUP** "
else:
internal_blurb = " "
user_count = realm_user_count(user_profile.realm)
# Send notification to realm notifications stream if it exists
# Don't send notification for the first user in a realm
if user_profile.realm.notifications_stream is not None and user_count > 1:
internal_send_message(sender, "stream",
user_profile.realm.notifications_stream.name,
"New users", "%s just signed up for Zulip. Say hello!" % \
(user_profile.full_name,),
realm=user_profile.realm)
internal_send_message(sender,
"stream", signups_stream, user_profile.realm.domain,
"%s <`%s`> just signed up for Zulip!%s(total: **%i**)" % (
user_profile.full_name,
user_profile.email,
internal_blurb,
user_count,
)
)
def notify_new_user(user_profile, internal=False):
if settings.NEW_USER_BOT is not None:
send_signup_message(settings.NEW_USER_BOT, "signups", user_profile, internal)
statsd.gauge("users.signups.%s" % (user_profile.realm.domain.replace('.', '_')), 1, delta=True)
class PrincipalError(JsonableError):
def __init__(self, principal):
self.principal = principal
@@ -244,7 +209,8 @@ def accounts_register(request):
# The user is trying to register for a deactivated realm. Advise them to
# contact support.
return render_to_response("zerver/deactivated.html",
{"deactivated_domain_name": realm.name})
{"deactivated_domain_name": realm.name,
"zulip_administrator": settings.ZULIP_ADMINISTRATOR})
try:
if existing_user_profile is not None and existing_user_profile.is_mirror_dummy:
@@ -330,12 +296,13 @@ def accounts_register(request):
do_change_password(user_profile, password)
do_change_full_name(user_profile, full_name)
except UserProfile.DoesNotExist:
user_profile = do_create_user(email, password, realm, full_name, short_name)
user_profile = do_create_user(email, password, realm, full_name, short_name,
prereg_user=prereg_user,
newsletter_data={"IP": request.META['REMOTE_ADDR']})
else:
user_profile = do_create_user(email, password, realm, full_name, short_name)
process_new_human_user(user_profile, prereg_user=prereg_user,
newsletter_data={"IP": request.META['REMOTE_ADDR']})
user_profile = do_create_user(email, password, realm, full_name, short_name,
prereg_user=prereg_user,
newsletter_data={"IP": request.META['REMOTE_ADDR']})
# This logs you in using the ZulipDummyBackend, since honestly nothing
# more fancy than this is required.
@@ -361,79 +328,6 @@ def accounts_register(request):
},
context_instance=RequestContext(request))
# Does the processing for a new user account:
# * Subscribes to default/invitation streams
# * Fills in some recent historical messages
# * Notifies other users in realm and Zulip about the signup
# * Deactivates PreregistrationUser objects
# * Subscribes the user to the newsletter if newsletter_data is specified
def process_new_human_user(user_profile, prereg_user=None, newsletter_data=None):
mit_beta_user = user_profile.realm.domain == "mit.edu"
try:
streams = prereg_user.streams.all()
except AttributeError:
# This will catch both the case where prereg_user is None and where it
# is a MitUser.
streams = []
# If the user's invitation didn't explicitly list some streams, we
# add the default streams
if len(streams) == 0:
streams = get_default_subs(user_profile)
bulk_add_subscriptions(streams, [user_profile])
# Give you the last 100 messages on your streams, so you have
# something to look at in your home view once you finish the
# tutorial.
one_week_ago = now() - datetime.timedelta(weeks=1)
recipients = Recipient.objects.filter(type=Recipient.STREAM,
type_id__in=[stream.id for stream in streams])
messages = Message.objects.filter(recipient_id__in=recipients, pub_date__gt=one_week_ago).order_by("-id")[0:100]
if len(messages) > 0:
ums_to_create = [UserMessage(user_profile=user_profile, message=message,
flags=UserMessage.flags.read)
for message in messages]
UserMessage.objects.bulk_create(ums_to_create)
# mit_beta_users don't have a referred_by field
if not mit_beta_user and prereg_user is not None and prereg_user.referred_by is not None \
and settings.NOTIFICATION_BOT is not None:
# This is a cross-realm private message.
internal_send_message(settings.NOTIFICATION_BOT,
"private", prereg_user.referred_by.email, user_profile.realm.domain,
"%s <`%s`> accepted your invitation to join Zulip!" % (
user_profile.full_name,
user_profile.email,
)
)
# Mark any other PreregistrationUsers that are STATUS_ACTIVE as
# inactive so we can keep track of the PreregistrationUser we
# actually used for analytics
if prereg_user is not None:
PreregistrationUser.objects.filter(email__iexact=user_profile.email).exclude(
id=prereg_user.id).update(status=0)
else:
PreregistrationUser.objects.filter(email__iexact=user_profile.email).update(status=0)
notify_new_user(user_profile)
if newsletter_data is not None:
# If the user was created automatically via the API, we may
# not want to register them for the newsletter
queue_json_publish(
"signups",
{
'EMAIL': user_profile.email,
'merge_vars': {
'NAME': user_profile.full_name,
'REALM': user_profile.realm.domain,
'OPTIN_IP': newsletter_data["IP"],
'OPTIN_TIME': datetime.datetime.isoformat(datetime.datetime.now()),
},
},
lambda event: None)
@login_required(login_url = settings.HOME_NOT_LOGGED_IN)
def accounts_accept_terms(request):
email = request.user.email
@@ -665,6 +559,15 @@ def start_google_oauth2(request):
}
return redirect(uri + urllib.urlencode(prams))
# Workaround to support the Python-requests 1.0 transition of .json
# from a property to a function
requests_json_is_function = callable(requests.Response.json)
def extract_json_response(resp):
if requests_json_is_function:
return resp.json()
else:
return resp.json
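# How the version check above works: on python-requests >= 1.0, Response.json
# is a method, so callable(requests.Response.json) is True and resp.json()
# must be called; on older versions it is a property whose value is already
# the parsed body. A hypothetical usage sketch (endpoint invented):
#   resp = requests.get('https://example.com/api/status')
#   data = extract_json_response(resp)  # a dict under either requests version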
def finish_google_oauth2(request):
error = request.GET.get('error')
if error == 'access_denied':
@@ -693,7 +596,7 @@ def finish_google_oauth2(request):
)
if resp.status_code != 200:
raise Exception('Could not convert google oauth2 code to access_token\r%r' % resp.text)
access_token = resp.json()['access_token']
access_token = extract_json_response(resp)['access_token']
resp = requests.get(
'https://www.googleapis.com/plus/v1/people/me',
@@ -701,7 +604,7 @@ def finish_google_oauth2(request):
)
if resp.status_code != 200:
raise Exception('Google login failed making API call\r%r' % resp.text)
body = resp.json()
body = extract_json_response(resp)
try:
full_name = body['name']['formatted']
@@ -1737,8 +1640,7 @@ def create_user_backend(request, user_profile, email=REQ, password=REQ,
except UserProfile.DoesNotExist:
pass
new_user_profile = do_create_user(email, password, realm, full_name, short_name)
process_new_human_user(new_user_profile)
do_create_user(email, password, realm, full_name, short_name)
return json_success()
@authenticated_json_post_view
@@ -1871,6 +1773,12 @@ def json_fetch_api_key(request, user_profile, password=REQ(default='')):
return json_error("Your username or password is incorrect.")
return json_success({"api_key": user_profile.api_key})
@csrf_exempt
def api_fetch_google_client_id(request):
if not settings.GOOGLE_CLIENT_ID:
return json_error("GOOGLE_CLIENT_ID is not configured", status=400)
return json_success({"google_client_id": settings.GOOGLE_CLIENT_ID})
def get_status_list(requesting_user_profile):
return {'presences': get_status_dict(requesting_user_profile),
'server_timestamp': time.time()}


@@ -961,20 +961,20 @@ def send_raw_pagerduty_json(user_profile, stream, message, topic):
def send_formated_pagerduty(user_profile, stream, message_type, format_dict, topic):
if message_type in ('incident.trigger', 'incident.unacknowledge'):
template = (u':unhealthy_heart: Incident '
template = (u':imp: Incident '
u'[{incident_num}]({incident_url}) {action} by '
u'[{service_name}]({service_url}) and assigned to '
u'[{assigned_to_username}@]({assigned_to_url})\n\n>{trigger_message}')
elif message_type == 'incident.resolve' and format_dict['resolved_by_url']:
template = (u':healthy_heart: Incident '
template = (u':grinning: Incident '
u'[{incident_num}]({incident_url}) resolved by '
u'[{resolved_by_username}@]({resolved_by_url})\n\n>{trigger_message}')
elif message_type == 'incident.resolve' and not format_dict['resolved_by_url']:
template = (u':healthy_heart: Incident '
template = (u':grinning: Incident '
u'[{incident_num}]({incident_url}) resolved\n\n>{trigger_message}')
else:
template = (u':average_heart: Incident [{incident_num}]({incident_url}) '
template = (u':no_good: Incident [{incident_num}]({incident_url}) '
u'{action} by [{assigned_to_username}@]({assigned_to_url})\n\n>{trigger_message}')
subject = topic or u'incident {incident_num}'.format(**format_dict)
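# For illustration, a hypothetical format_dict for an incident.trigger event
# (keys taken from the templates above, values invented):
#   format_dict = {'incident_num': 42,
#                  'incident_url': 'https://acme.pagerduty.com/incidents/42',
#                  'action': 'triggered',
#                  'service_name': 'Nagios',
#                  'service_url': 'https://acme.pagerduty.com/services/PNG123',
#                  'assigned_to_username': 'alice',
#                  'assigned_to_url': 'https://acme.pagerduty.com/users/ALICE1',
#                  'trigger_message': 'CPU load critical on web42'}
# template.format(**format_dict) then renders a message starting with
# ":imp: Incident [42](https://acme.pagerduty.com/incidents/42) triggered by ..."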


@@ -18,7 +18,7 @@ from zerver.lib.bulk_create import bulk_create_realms, \
bulk_create_clients
from zerver.lib.timestamp import timestamp_to_datetime
from zerver.models import MAX_MESSAGE_LENGTH
from zerver.models import DefaultStream, get_stream
from zerver.models import DefaultStream, get_stream, get_realm
from zilencer.models import Deployment
import ujson
@@ -147,7 +147,7 @@ class Command(BaseCommand):
subscriptions_to_add.append(s)
Subscription.objects.bulk_create(subscriptions_to_add)
else:
zulip_realm = Realm.objects.get(domain="zulip.com")
zulip_realm = get_realm("zulip.com")
recipient_streams = [klass.type_id for klass in
Recipient.objects.filter(type=Recipient.STREAM)]
@@ -577,7 +577,7 @@ def restore_saved_messages():
user_profile.save(update_fields=["enable_offline_push_notifications"])
continue
elif message_type == "default_streams":
set_default_streams(Realm.objects.get(domain=old_message["domain"]),
set_default_streams(get_realm(old_message["domain"]),
old_message["streams"])
continue
elif message_type == "subscription_property":


@@ -1,12 +1,12 @@
from __future__ import absolute_import
from django.core.management.base import BaseCommand
from zerver.models import get_user_profile_by_email
from zerver.models import get_user_profile_by_email, UserProfile
import os
from ConfigParser import SafeConfigParser
class Command(BaseCommand):
help = """Reset all colors for a person to the default grey"""
help = """Sync your API key from ~/.zuliprc into your development instance"""
def handle(self, *args, **options):
config_file = os.path.join(os.environ["HOME"], ".zuliprc")
@@ -18,6 +18,9 @@ class Command(BaseCommand):
api_key = config.get("api", "key")
email = config.get("api", "email")
user_profile = get_user_profile_by_email(email)
user_profile.api_key = api_key
user_profile.save(update_fields=["api_key"])
try:
user_profile = get_user_profile_by_email(email)
user_profile.api_key = api_key
user_profile.save(update_fields=["api_key"])
except UserProfile.DoesNotExist:
print "User %s does not exist; not syncing API key" % (email,)


@@ -5,9 +5,11 @@ from django.conf import settings
import django.contrib.auth
from django_auth_ldap.backend import LDAPBackend
from zerver.lib.actions import do_create_user
from zerver.models import UserProfile, get_user_profile_by_id, \
get_user_profile_by_email, remote_user_to_email, email_to_username
from zerver.models import UserProfile, Realm, get_user_profile_by_id, \
get_user_profile_by_email, remote_user_to_email, email_to_username, \
resolve_email_to_domain, get_realm
from apiclient.sample_tools import client as googleapiclient
from oauth2client.crypt import AppIdentityError
@@ -22,6 +24,8 @@ def password_auth_enabled(realm):
for backend in django.contrib.auth.get_backends():
if isinstance(backend, EmailAuthBackend):
return True
if isinstance(backend, ZulipLDAPAuthBackend):
return True
return False
def dev_auth_enabled():
@@ -126,26 +130,59 @@ class ZulipRemoteUserBackend(RemoteUserBackend):
return user_profile
class ZulipLDAPAuthBackend(ZulipAuthMixin, LDAPBackend):
class ZulipLDAPException(Exception):
pass
class ZulipLDAPAuthBackendBase(ZulipAuthMixin, LDAPBackend):
# Don't use Django LDAP's permissions functions
def has_perm(self, user, perm, obj=None):
return False
def has_module_perms(self, user, app_label):
return False
def get_all_permissions(self, user, obj=None):
return set()
def get_group_permissions(self, user, obj=None):
return set()
def django_to_ldap_username(self, username):
if settings.LDAP_APPEND_DOMAIN is not None:
if settings.LDAP_APPEND_DOMAIN:
if not username.endswith("@" + settings.LDAP_APPEND_DOMAIN):
raise ZulipLDAPException("Username does not match LDAP domain.")
return email_to_username(username)
return username
def ldap_to_django_username(self, username):
if settings.LDAP_APPEND_DOMAIN is not None:
if settings.LDAP_APPEND_DOMAIN:
return "@".join((username, settings.LDAP_APPEND_DOMAIN))
return username
class ZulipLDAPAuthBackend(ZulipLDAPAuthBackendBase):
def authenticate(self, username, password):
try:
username = self.django_to_ldap_username(username)
return ZulipLDAPAuthBackendBase.authenticate(self, username, password)
except Realm.DoesNotExist:
return None
except ZulipLDAPException:
return None
def get_or_create_user(self, username, ldap_user):
try:
return get_user_profile_by_email(username), False
except UserProfile.DoesNotExist:
return UserProfile(), False
domain = resolve_email_to_domain(username)
realm = get_realm(domain)
class ZulipLDAPUserPopulator(ZulipLDAPAuthBackend):
# Just like ZulipLDAPAuthBackend, but doesn't let you log in.
full_name_attr = settings.AUTH_LDAP_USER_ATTR_MAP["full_name"]
short_name = full_name = ldap_user.attrs[full_name_attr]
if "short_name" in settings.AUTH_LDAP_USER_ATTR_MAP:
short_name_attr = settings.AUTH_LDAP_USER_ATTR_MAP["short_name"]
short_name = ldap_user.attrs[short_name_attr]
user_profile = do_create_user(username, None, realm, full_name, short_name)
return user_profile, False
# Just like ZulipLDAPAuthBackend, but doesn't let you log in.
class ZulipLDAPUserPopulator(ZulipLDAPAuthBackendBase):
def authenticate(self, username, password):
return None


@@ -1,4 +1,11 @@
# Non-secret secret Django settings for the Zulip project
# This file is the Zulip local_settings.py configuration for the
# zulip.com installation of Zulip. It shouldn't be used in other
# environments, but you may find it to be a helpful reference when
# setting up your own Zulip installation to see how Zulip can be
# configured.
#
# On a normal Zulip production server, zproject/local_settings.py is a
# symlink to /etc/zulip/settings.py (based off local_settings_template.py).
import platform
import ConfigParser
from base64 import b64decode
@@ -52,7 +59,9 @@ else:
EXTERNAL_API_PATH = 'api.zulip.com'
STATSD_PREFIX = 'app'
# Legacy zulip.com bucket used for old-style S3 uploads.
S3_BUCKET="humbug-user-uploads"
# Buckets used for Amazon S3 integration for storing files and user avatars.
S3_AUTH_UPLOADS_BUCKET = "zulip-user-uploads"
S3_AVATAR_BUCKET="humbug-user-avatars"


@@ -18,17 +18,25 @@ ADMIN_DOMAIN = 'example.com'
# Enable at least one of the following authentication backends.
AUTHENTICATION_BACKENDS = (
# 'zproject.backends.EmailAuthBackend', # Email and password
# 'zproject.backends.EmailAuthBackend', # Email and password; see SMTP setup below
# 'zproject.backends.ZulipRemoteUserBackend', # Local SSO
# 'zproject.backends.GoogleMobileOauth2Backend', # Google Apps, setup below
# 'zproject.backends.ZulipLDAPAuthBackend', # LDAP, setup below
)
# Google Oauth requires a bit of configuration; you will need to go to
# https://console.developers.google.com, setup an Oauth2 client ID
# that allows redirects to
# e.g. https://zulip.example.com/accounts/login/google/done/ put your
# client secret as "google_oauth2_client_secret" in
# zulip-secrets.conf, and your cleitn ID here:
# do the following:
#
# (1) Visit https://console.developers.google.com, set up an
# Oauth2 client ID that allows redirects to
# e.g. https://zulip.example.com/accounts/login/google/done/.
#
# (2) Then click into the APIs and Auth section (in the sidebar on the
# left side of the page), APIs, then under "Social APIs" click on
# "Google+ API" and click the button to enable the API.
#
# (3) Put your client secret as "google_oauth2_client_secret" in
# zulip-secrets.conf, and your client ID right here:
# GOOGLE_OAUTH2_CLIENT_ID=<your client ID from Google>
# If you are using the ZulipRemoteUserBackend authentication backend,
@@ -37,18 +45,32 @@ AUTHENTICATION_BACKENDS = (
# SSO_APPEND_DOMAIN = "example.com")
SSO_APPEND_DOMAIN = None
# Configure the outgoing SMTP server below. For outgoing email
# via a GMail SMTP server, EMAIL_USE_TLS must be True and the
# outgoing port must be 587. The EMAIL_HOST is prepopulated
# for GMail servers, change it for other hosts, or leave it unset
# or empty to skip sending email.
# Configure the outgoing SMTP server below. For testing, you can skip
# sending emails entirely by commenting out EMAIL_HOST, but you will
# want to configure this to support email address confirmation emails,
# missed message emails, onboarding follow-up emails, etc. To
# configure SMTP, you will need to complete the following steps:
#
# (1) Fill out the outgoing email sending configuration below.
#
# (2) Put the SMTP password for EMAIL_HOST_USER in
# /etc/zulip/zulip-secrets.conf as email_password.
#
# (3) If you are using a gmail account to send outgoing email, you
# will likely need to read this Google support answer and configure
# that account as "less secure":
# https://support.google.com/mail/answer/14257.
#
# A common problem is hosting providers that block outgoing SMTP traffic.
#
# With the exception of reading EMAIL_HOST_PASSWORD from
# email_password in the Zulip secrets file, Zulip uses Django's
# standard EmailBackend, so if you're having issues, you may want to
# search for documentation on using your email provider with Django.
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = ''
# If you're using password auth, you will need to put the password in
# /etc/zulip/zulip-secrets.conf as email_host_password.
EMAIL_PORT = 587
EMAIL_USE_TLS = True
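# A hypothetical completed example for a Gmail-hosted sender (the address is
# a placeholder):
#   EMAIL_HOST = 'smtp.gmail.com'
#   EMAIL_HOST_USER = 'zulip-noreply@example.com'
#   EMAIL_PORT = 587
#   EMAIL_USE_TLS = True
# with the matching entry in /etc/zulip/zulip-secrets.conf:
#   email_password = <the account's (app-specific) password>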
# The email From address to be used for automatically generated emails
DEFAULT_FROM_EMAIL = "Zulip <zulip@example.com>"
# The noreply address to be used as Reply-To for certain generated emails.
@@ -91,10 +113,12 @@ ERROR_REPORTING = True
INLINE_IMAGE_PREVIEW = True
# By default, files uploaded by users and user avatars are stored
# directly on the Zulip server. If file storage in Amazon S3 (or
# elsewhere, e.g. your corporate fileshare) is desired, please contact
# Zulip Support (support@zulip.com) for further instructions on
# setting up the appropriate integration.
# directly on the Zulip server. If file storage in Amazon S3 is
# desired, you can configure that by setting s3_key and s3_secret_key
# in /etc/zulip/zulip-secrets.conf to be the S3 access and secret keys
# that you want to use, and setting the S3_AUTH_UPLOADS_BUCKET and
# S3_AVATAR_BUCKET to be the S3 buckets you've created to store file
# uploads and user avatars, respectively.
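# A hypothetical S3 setup sketch (bucket names are placeholders): in this file,
#   S3_AUTH_UPLOADS_BUCKET = "example-zulip-uploads"
#   S3_AVATAR_BUCKET = "example-zulip-avatars"
# and in /etc/zulip/zulip-secrets.conf:
#   s3_key = <AWS access key ID>
#   s3_secret_key = <AWS secret access key>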
LOCAL_UPLOADS_DIR = "/home/zulip/uploads"
# Controls whether name changes are completely disabled for this installation
@@ -120,50 +144,53 @@ ENABLE_GRAVATAR = True
#
# 1. Log in to http://dev.twitter.com.
# 2. In the menu under your username, click My Applications. From this page, create a new application.
# 3. Click on the application you created and click "create my access token". Fill in the requested values.
TWITTER_CONSUMER_KEY = ''
TWITTER_CONSUMER_SECRET = ''
TWITTER_ACCESS_TOKEN_KEY = ''
TWITTER_ACCESS_TOKEN_SECRET = ''
# 3. Click on the application you created and click "create my access token".
# 4. Fill in the values for twitter_consumer_key, twitter_consumer_secret, twitter_access_token_key,
# and twitter_access_token_secret in /etc/zulip/zulip-secrets.conf.
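# A hypothetical /etc/zulip/zulip-secrets.conf fragment for step 4
# (placeholder values):
#   twitter_consumer_key = <consumer key from dev.twitter.com>
#   twitter_consumer_secret = <consumer secret>
#   twitter_access_token_key = <access token>
#   twitter_access_token_secret = <access token secret>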
### EMAIL GATEWAY INTEGRATION
# The email gateway provides, for each stream, an email address that
# you can send email to in order to have the email's content be posted
# to that stream. Emails received at the per-stream email address
# will be converted into a Zulip message
# There are two ways to make use of local email mirroring:
# The Email gateway integration supports sending messages into Zulip
# by sending an email. This is useful for receiving notifications
# from third-party services that only send outgoing notifications via
# email. Once this integration is configured, each stream will have
# an email address documented on the stream settings page, and emails
# sent to that address will be delivered into the stream.
#
# There are two ways to configure email mirroring in Zulip:
# 1. Local delivery: An MTA runs locally and passes mail directly to Zulip
# 2. Polling: Checks an IMAP inbox every minute for new messages.
# A Puppet manifest for local delivery via Postfix is available in
# puppet/zulip/manifests/postfix_localmail.pp. To use the manifest, add it to
# puppet_classes in /etc/zulip/zulip.conf. This manifest assumes you'll receive
# mail addressed to the hostname of your Zulip server.
#
# Users of other mail servers will need to configure it to pass mail to the
# email mirror; see `python manage.py email-mirror --help` for details.
# The email address pattern to use for auto-generated stream emails
# The %s will be replaced with a unique token, and the resulting email
# must be delivered to the EMAIL_GATEWAY_IMAP_FOLDER of the
# EMAIL_GATEWAY_LOGIN account below, or piped in to the email-mirror management
# command as indicated above.
# The local delivery configuration is preferred for production because
# it supports nicer looking email addresses and has no cron delay,
# while the polling mechanism is better for testing/developing this
# feature because it doesn't require a public-facing IP/DNS setup.
#
# Example: zulip+%s@example.com
# The main email mirror setting is the email address pattern, where
# you specify the email address format you'd like the integration to
# use. It should be one of the following:
# %s@zulip.example.com (for local delivery)
# username+%s@example.com (for polling if EMAIL_GATEWAY_LOGIN=username@example.com)
EMAIL_GATEWAY_PATTERN = ""
# The following options are relevant if you're using mail polling.
#
# A sample cron job for mail polling is available at puppet/zulip/files/cron.d/email-mirror
# If you are using local delivery, EMAIL_GATEWAY_PATTERN is all you need
# to change in this file. You will also need to enable the Zulip postfix
# configuration to support local delivery by adding
# , zulip::postfix_localmail
# to puppet_classes in /etc/zulip/zulip.conf.
#
# If you are using polling, you will need to set up an IMAP email
# account dedicated to Zulip email gateway messages. The model is
# that users will send emails to that account via an address of the
# form username+%s@example.com (which is what you will set as
# EMAIL_GATEWAY_PATTERN); your email provider should deliver those
# emails to the username@example.com inbox. Then you run in a cron
# job `./manage.py email-mirror` (see puppet/zulip/files/cron.d/email-mirror),
# which will check that inbox and batch-process any new messages.
#
# You will need to configure authentication for the email mirror
# command to access the IMAP mailbox below.
#
# The Zulip username of the bot that the email pattern should post as.
# Example: emailgateway@example.com
EMAIL_GATEWAY_BOT = ""
# Configuration of the email mirror mailbox
# The IMAP login and password
EMAIL_GATEWAY_LOGIN = ""
EMAIL_GATEWAY_PASSWORD = ""
@@ -175,9 +202,45 @@ EMAIL_GATEWAY_IMAP_PORT = 993
EMAIL_GATEWAY_IMAP_FOLDER = "INBOX"
### LDAP integration configuration
# Zulip supports retrieving information about users via LDAP, and optionally
# using LDAP as an authentication mechanism.
# Zulip supports retrieving information about users via LDAP, and
# optionally using LDAP as an authentication mechanism.
#
# In either configuration, you will need to do the following:
#
# * Fill in the LDAP configuration options below so that Zulip can
# connect to your LDAP server
#
# * Set up the mapping between email addresses (used as login names in
# Zulip) and LDAP usernames. There are two supported ways to setup
# the username mapping:
#
# (A) If users' email addresses are in LDAP, set
# LDAP_APPEND_DOMAIN = None
# AUTH_LDAP_USER_SEARCH to look up users by email address
#
# (B) If LDAP only has usernames but email addresses are of the form
# username@example.com, you should set:
# LDAP_APPEND_DOMAIN = example.com and
# AUTH_LDAP_USER_SEARCH to look up users by username (see the sketch below)
#
# You can quickly test whether your configuration works by running:
# ./manage.py query_ldap username@example.com
# from the root of your Zulip installation; if your configuration is working
# that will output the full name for your user.
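# A minimal sketch of each option with hypothetical example values (using the
# ldap and LDAPSearch imports that appear further below in this file):
#   (A) LDAP_APPEND_DOMAIN = None
#       AUTH_LDAP_USER_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
#           ldap.SCOPE_SUBTREE, "(mail=%(user)s)")
#   (B) LDAP_APPEND_DOMAIN = "example.com"
#       AUTH_LDAP_USER_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
#           ldap.SCOPE_SUBTREE, "(uid=%(user)s)")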
#
# -------------------------------------------------------------
#
# If you are using LDAP for authentication, you will need to enable
# the zproject.backends.ZulipLDAPAuthBackend auth backend in
# AUTHENTICATION_BACKENDS above. After doing so, you should be able
# to login to Zulip by entering your email address and LDAP password
# on the Zulip login form.
#
# If you are using LDAP to populate names in Zulip, once you finish
# configuring this integration, you will need to run:
# ./manage.py sync_ldap_user_data
# to sync names for existing users. You may want to run this in a cron
# job to pick up name changes made on your LDAP server.
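# A hypothetical crontab entry for that (the deployment path will vary):
#   0 */6 * * * zulip /home/zulip/deployments/current/manage.py sync_ldap_user_data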
import ldap
from django_auth_ldap.config import LDAPSearch, GroupOfNamesType
@@ -197,7 +260,7 @@ AUTH_LDAP_USER_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
# If the value of a user's "uid" (or similar) property is not their email
# address, specify the domain to append here.
LDAP_APPEND_DOMAIN = ADMIN_DOMAIN
LDAP_APPEND_DOMAIN = None
# This map defines how to populate attributes of a Zulip user from LDAP.
AUTH_LDAP_USER_ATTR_MAP = {


@@ -153,6 +153,9 @@ urlpatterns += patterns('zerver.views',
# password/pair and returns an API key.
url(r'^api/v1/fetch_api_key$', 'api_fetch_api_key'),
# Used to present the GOOGLE_CLIENT_ID to mobile apps
url(r'^api/v1/fetch_google_client_id$', 'api_fetch_google_client_id'),
# These are integration-specific web hook callbacks
url(r'^api/v1/external/beanstalk$' , 'webhooks.api_beanstalk_webhook'),
url(r'^api/v1/external/github$', 'webhooks.api_github_landing'),