mirror of https://github.com/wazuh/wazuh-docker.git
first commit
LICENSE (Normal file, 21 lines)
@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2015 Anthony Lapenna

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md (Normal file, 210 lines)
@@ -0,0 +1,210 @@
# Docker Wazuh+ELK stack

> **Note**: These Docker containers are based on "deviantony" Dockerfiles, which can be found at [https://github.com/deviantony/docker-elk](https://github.com/deviantony/docker-elk). We created our own fork, which we test and maintain. Thank you Anthony Lapenna for your contribution to the community.

Run the latest version of the ELK (Elasticsearch, Logstash, Kibana) stack with Docker and Docker Compose.

It gives you the ability to analyze any data set by using the searching/aggregation capabilities of Elasticsearch and the visualization power of Kibana.

Based on the official images:

* [elasticsearch](https://registry.hub.docker.com/_/elasticsearch/)
* [logstash](https://registry.hub.docker.com/_/logstash/)
* [kibana](https://registry.hub.docker.com/_/kibana/)
* [Wazuh](https://github.com/wazuh/wazuh)

# Requirements

## Setup

1. Install [Docker](http://docker.io).
2. Install [Docker Compose](http://docs.docker.com/compose/install/) **version >= 1.6**.
3. Clone this repository.

## Increase max_map_count on your host (Linux)

You need to increase `max_map_count` on your Docker host:

```bash
$ sudo sysctl -w vm.max_map_count=262144
```

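To keep this setting across host reboots, you can also persist it in `/etc/sysctl.conf` (a common approach; the exact file location may vary by distribution):

```bash
# Persist vm.max_map_count so it survives a reboot
$ echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
# Reload kernel parameters from the file
$ sudo sysctl -p
```
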
## SELinux

On distributions with SELinux enabled out of the box, you will need to either re-context the files or set SELinux to Permissive mode in order for docker-elk to start properly. For example, on Red Hat and CentOS, the following will apply the proper context:

```bash
$ chcon -R system_u:object_r:admin_home_t:s0 docker-elk/
```

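Alternatively, you can switch SELinux to Permissive mode until the next reboot (useful as a quick test, not as a permanent fix):

```bash
# Put SELinux in Permissive mode for the current boot
$ sudo setenforce 0
# Confirm the current mode
$ getenforce
```
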
# Usage

Start the ELK stack using *docker-compose*:

```bash
$ docker-compose up
```

You can also choose to run it in the background (detached mode):

```bash
$ docker-compose up -d
```

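Once the stack is up, you can check that all four services started correctly:

```bash
# List the stack's containers and their state
$ docker-compose ps
# Review the service logs if anything looks wrong
$ docker-compose logs
```
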
Now that the stack is running, you'll want to inject logs into it. The shipped Logstash configuration allows you to send content via TCP:

```bash
$ nc localhost 5000 < /path/to/logfile.log
```

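You can confirm that the events were indexed by listing the Elasticsearch indices; the index name depends on the Logstash output configuration (this setup writes to `ossec-*` indices):

```bash
# List indices; a new index should appear once events arrive
$ curl 'http://localhost:9200/_cat/indices?v'
```
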
Then access the Kibana UI by hitting [http://localhost:5601](http://localhost:5601) with a web browser.

*NOTE*: You'll need to inject data into Logstash before you can create a Logstash index in Kibana. Then all you have to do is hit the *Create* button.

See: https://www.elastic.co/guide/en/kibana/current/setup.html#connect

By default, the stack exposes the following ports:

* 5000: Logstash TCP input
* 9200: Elasticsearch HTTP
* 9300: Elasticsearch TCP transport
* 5601: Kibana

*WARNING*: If you're using *boot2docker*, you must access it via the *boot2docker* IP address instead of *localhost*.

*WARNING*: If you're using *Docker Toolbox*, you must access it via the *docker-machine* IP address instead of *localhost*.

# Configuration

*NOTE*: Configuration is not dynamically reloaded; you will need to restart the stack after any change to the configuration of a component.

## How can I tune the Kibana configuration?

The Kibana default configuration is stored in `kibana/config/kibana.yml`.

## How can I tune the Logstash configuration?

The Logstash configuration is stored in `logstash/config/logstash.conf`.

The folder `logstash/config` is mapped onto the container's `/etc/logstash/conf.d`, so you can create more than one file in that folder if you like. Be aware, however, that config files are read from the directory in alphabetical order.

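Because the files are read alphabetically, a numeric prefix is a simple way to control the order in which pipeline sections are loaded (the file names below are illustrative, not part of this repository):

```bash
# Example layout splitting the pipeline across several files
$ ls logstash/config/
00-inputs.conf  50-filters.conf  99-outputs.conf
```
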
## How can I specify the amount of memory used by Logstash?

The Logstash container uses the *LS_HEAP_SIZE* environment variable to determine how much memory to allocate to the JVM heap (defaults to 500m).

If you want to override the default configuration, add the *LS_HEAP_SIZE* environment variable to the container in `docker-compose.yml`:

```yml
logstash:
  build: logstash/
  command: -f /etc/logstash/conf.d/
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5000:5000"
  networks:
    - docker_elk
  depends_on:
    - elasticsearch
  environment:
    - LS_HEAP_SIZE=2048m
```

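After recreating the container, you can verify that the variable reached its environment (`docker-compose exec` needs Docker Compose 1.7 or later; on older versions, use `docker exec` with the container name):

```bash
# Print the heap-size variable as seen inside the logstash container
$ docker-compose exec logstash env | grep LS_HEAP_SIZE
```
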
## How can I tune the Elasticsearch configuration?

The Elasticsearch container uses the image's shipped configuration, which is not exposed by default.

If you want to override the default configuration, create a file `elasticsearch/config/elasticsearch.yml` and add your configuration to it.

Then you'll need to map your configuration file inside the container in `docker-compose.yml`. Update the elasticsearch container declaration to:

```yml
elasticsearch:
  build: elasticsearch/
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xms1g -Xmx1g"
  networks:
    - docker_elk
```

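Once the node is up again, you can check which settings it is actually running with:

```bash
# Show the effective settings of the running Elasticsearch node
$ curl 'http://localhost:9200/_nodes/settings?pretty'
```
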
# Storage

## How can I store Elasticsearch data?

The data stored in Elasticsearch will be persisted after a container restart, but not after container removal.

In order to persist Elasticsearch data even after removing the Elasticsearch container, you'll have to mount a volume on your Docker host. Update the elasticsearch container declaration to:

```yml
elasticsearch:
  build: elasticsearch/
  command: elasticsearch -Des.network.host=_non_loopback_ -Des.cluster.name=my-cluster
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xms1g -Xmx1g"
  networks:
    - docker_elk
  volumes:
    - /path/to/storage:/usr/share/elasticsearch/data
```

This will store Elasticsearch data inside `/path/to/storage`.

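After the stack has indexed some events, the mounted host directory should contain the node's data files (the exact layout depends on the Elasticsearch version):

```bash
# The directory is populated by the elasticsearch container
$ ls -l /path/to/storage
```
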
## Final docker-compose file

```yml
version: '2'

services:
  wazuh:
    build: wazuh/
    ports:
      - "1514:1514"
      - "1515:1515"
      - "514:514"
      - "55000:55000"
    networks:
      - docker_elk
  elasticsearch:
    image: elasticsearch:latest
    command: elasticsearch -E node.name="node-1" -E cluster.name="wazuh" -E network.host=0.0.0.0
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms1g -Xmx1g"
    networks:
      - docker_elk
  logstash:
    build: logstash/
    command: -f /etc/logstash/conf.d/
    ports:
      - "5000:5000"
    volumes_from:
      - wazuh
    networks:
      - docker_elk
    depends_on:
      - elasticsearch
    environment:
      - LS_HEAP_SIZE=2048m
  kibana:
    build: kibana/
    ports:
      - "5601:5601"
    networks:
      - docker_elk
    depends_on:
      - elasticsearch

networks:
  docker_elk:
    driver: bridge
```

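To sanity-check the assembled file before starting the stack:

```bash
# Validate docker-compose.yml and print the resolved configuration
$ docker-compose config
```
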
docker-compose.yml (Normal file, 47 lines)
@@ -0,0 +1,47 @@
version: '2'

services:
  wazuh:
    build: wazuh/
    ports:
      - "1514:1514"
      - "1515:1515"
      - "514:514"
      - "55000:55000"
    networks:
      - docker_elk
  elasticsearch:
    image: elasticsearch:latest
    command: elasticsearch -E node.name="node-1" -E cluster.name="wazuh" -E network.host=0.0.0.0
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms1g -Xmx1g"
    networks:
      - docker_elk
  logstash:
    build: logstash/
    command: -f /etc/logstash/conf.d/
    ports:
      - "5000:5000"
    volumes_from:
      - wazuh
    networks:
      - docker_elk
    depends_on:
      - elasticsearch
    environment:
      - LS_HEAP_SIZE=2048m
  kibana:
    build: kibana/
    ports:
      - "5601:5601"
    networks:
      - docker_elk
    depends_on:
      - elasticsearch

networks:
  docker_elk:
    driver: bridge

kibana/Dockerfile (Normal file, 5 lines)
@@ -0,0 +1,5 @@
FROM kibana:latest

COPY ./config/kibana.yml /opt/kibana/config/kibana.yml

RUN /usr/share/kibana/bin/kibana-plugin install http://wazuh.com/resources/wazuh-app.zip

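To build and try this image on its own, outside of docker-compose (the tag name below is illustrative):

```bash
# Build only the Kibana image from the repository root
$ docker build -t wazuh-kibana kibana/
```
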
kibana/config/kibana.yml (Normal file, 92 lines)
@@ -0,0 +1,92 @@
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# This setting specifies the IP address of the back end server.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This setting
# cannot end in a slash.
# server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
# server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
# server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://elasticsearch:9200"

# When this setting’s value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
# elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn’t already exist.
# kibana.index: ".kibana"

# The default application to load.
# kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
# elasticsearch.username: "user"
# elasticsearch.password: "pass"

# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
# files enable SSL for outgoing requests from the Kibana server to the browser.
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
# elasticsearch.ssl.ca: /path/to/your/CA.pem

# To disregard the validity of SSL certificates, change this setting’s value to false.
# elasticsearch.ssl.verify: true

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
# elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
# elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
# elasticsearch.requestHeadersWhitelist: [ authorization ]

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
# elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
# elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
# pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
# logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
# logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
# logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
# logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 10000.
# ops.interval: 10000

logstash/Dockerfile (Normal file, 13 lines)
@@ -0,0 +1,13 @@
FROM logstash:5

RUN apt-get update
RUN groupadd -g 1000 ossec && useradd -u 1000 -g 1000 ossec &&\
    usermod -a -G ossec logstash
COPY config/logstash.conf /etc/logstash/conf.d/logstash.conf
COPY config/elastic5-ossec-template.json /etc/logstash/elastic5-ossec-template.json

ADD config/run.sh /tmp/run.sh
RUN chmod 755 /tmp/run.sh

ENTRYPOINT ["/tmp/run.sh"]

logstash/config/elastic5-ossec-template.json (Normal file, 420 lines)
@@ -0,0 +1,420 @@
{
  "order": 0,
  "template": "ossec*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "index.refresh_interval": "5s"
  },
  "mappings": {
    "ossec": {
      "dynamic_templates": [
        {
          "notanalyzed": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword",
              "doc_values": "true"
            }
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date",
          "format": "dateOptionalTime"
        },
        "@version": {
          "type": "text"
        },
        "AgentIP": {
          "type": "keyword",
          "doc_values": "true"
        },
        "AgentID": {
          "type": "keyword",
          "doc_values": "true"
        },
        "dstuser": {
          "type": "keyword",
          "doc_values": "true"
        },
        "AlertsFile": {
          "type": "keyword",
          "doc_values": "true"
        },
        "full_log": {
          "type": "text"
        },
        "previous_log": {
          "type": "text"
        },
        "GeoLocation": {
          "properties": {
            "area_code": {
              "type": "long"
            },
            "city_name": {
              "type": "keyword",
              "doc_values": "true"
            },
            "continent_code": {
              "type": "text"
            },
            "coordinates": {
              "type": "double"
            },
            "country_code2": {
              "type": "text"
            },
            "country_code3": {
              "type": "text"
            },
            "country_name": {
              "type": "keyword",
              "doc_values": "true"
            },
            "dma_code": {
              "type": "long"
            },
            "ip": {
              "type": "keyword",
              "doc_values": "true"
            },
            "latitude": {
              "type": "double"
            },
            "location": {
              "type": "geo_point"
            },
            "longitude": {
              "type": "double"
            },
            "postal_code": {
              "type": "keyword"
            },
            "real_region_name": {
              "type": "keyword",
              "doc_values": "true"
            },
            "region_name": {
              "type": "keyword",
              "doc_values": "true"
            },
            "timezone": {
              "type": "text"
            }
          }
        },
        "host": {
          "type": "keyword",
          "doc_values": "true"
        },
        "AgentName": {
          "type": "keyword",
          "doc_values": "true"
        },
        "SyscheckFile": {
          "properties": {
            "path": {
              "type": "keyword",
              "doc_values": "true"
            },
            "sha1_before": {
              "type": "keyword",
              "doc_values": "true"
            },
            "sha1_after": {
              "type": "keyword",
              "doc_values": "true"
            },
            "owner_before": {
              "type": "keyword",
              "doc_values": "true"
            },
            "owner_after": {
              "type": "keyword",
              "doc_values": "true"
            },
            "gowner_before": {
              "type": "keyword",
              "doc_values": "true"
            },
            "gowner_after": {
              "type": "keyword",
              "doc_values": "true"
            },
            "perm_before": {
              "type": "keyword",
              "doc_values": "true"
            },
            "perm_after": {
              "type": "keyword",
              "doc_values": "true"
            },
            "md5_after": {
              "type": "keyword",
              "doc_values": "true"
            },
            "md5_before": {
              "type": "keyword",
              "doc_values": "true"
            },
            "gname_after": {
              "type": "keyword",
              "doc_values": "true"
            },
            "gname_before": {
              "type": "keyword",
              "doc_values": "true"
            },
            "inode_after": {
              "type": "keyword",
              "doc_values": "true"
            },
            "inode_before": {
              "type": "keyword",
              "doc_values": "true"
            },
            "mtime_after": {
              "type": "date",
              "format": "dateOptionalTime",
              "doc_values": "true"
            },
            "mtime_before": {
              "type": "date",
              "format": "dateOptionalTime",
              "doc_values": "true"
            },
            "uname_after": {
              "type": "keyword",
              "doc_values": "true"
            },
            "uname_before": {
              "type": "keyword",
              "doc_values": "true"
            },
            "size_before": {
              "type": "long",
              "doc_values": "true"
            },
            "size_after": {
              "type": "long",
              "doc_values": "true"
            },
            "diff": {
              "type": "keyword",
              "doc_values": "true"
            },
            "event": {
              "type": "keyword",
              "doc_values": "true"
            }
          }
        },
        "location": {
          "type": "keyword",
          "doc_values": "true"
        },
        "message": {
          "type": "text"
        },
        "offset": {
          "type": "keyword"
        },
        "rule": {
          "properties": {
            "description": {
              "type": "keyword",
              "doc_values": "true"
            },
            "groups": {
              "type": "keyword",
              "doc_values": "true"
            },
            "AlertLevel": {
              "type": "long",
              "doc_values": "true"
            },
            "sidid": {
              "type": "long",
              "doc_values": "true"
            },
            "cve": {
              "type": "keyword",
              "doc_values": "true"
            },
            "info": {
              "type": "keyword",
              "doc_values": "true"
            },
            "frequency": {
              "type": "long",
              "doc_values": "true"
            },
            "firedtimes": {
              "type": "long",
              "doc_values": "true"
            },
            "CIS": {
              "type": "keyword",
              "doc_values": "true"
            },
            "PCI_DSS": {
              "type": "keyword",
              "doc_values": "true"
            }
          }
        },
        "decoder": {
          "properties": {
            "parent": {
              "type": "keyword",
              "doc_values": "true"
            },
            "name": {
              "type": "keyword",
              "doc_values": "true"
            },
            "ftscomment": {
              "type": "keyword",
              "doc_values": "true"
            },
            "fts": {
              "type": "long",
              "doc_values": "true"
            },
            "accumulate": {
              "type": "long",
              "doc_values": "true"
            }
          }
        },
        "srcip": {
          "type": "keyword",
          "doc_values": "true"
        },
        "protocol": {
          "type": "keyword",
          "doc_values": "true"
        },
        "action": {
          "type": "keyword",
          "doc_values": "true"
        },
        "dstip": {
          "type": "keyword",
          "doc_values": "true"
        },
        "dstport": {
          "type": "keyword",
          "doc_values": "true"
        },
        "srcuser": {
          "type": "keyword",
          "doc_values": "true"
        },
        "program_name": {
          "type": "keyword",
          "doc_values": "true"
        },
        "id": {
          "type": "keyword",
          "doc_values": "true"
        },
        "status": {
          "type": "keyword",
          "doc_values": "true"
        },
        "command": {
          "type": "keyword",
          "doc_values": "true"
        },
        "url": {
          "type": "keyword",
          "doc_values": "true"
        },
        "data": {
          "type": "keyword",
          "doc_values": "true"
        },
        "systemname": {
          "type": "keyword",
          "doc_values": "true"
        },
        "type": {
          "type": "text"
        },
        "title": {
          "type": "keyword",
          "doc_values": "true"
        },
        "oscap": {
          "properties": {
            "check.title": {
              "type": "keyword",
              "doc_values": "true"
            },
            "check.id": {
              "type": "keyword",
              "doc_values": "true"
            },
            "check.result": {
              "type": "keyword",
              "doc_values": "true"
            },
            "check.severity": {
              "type": "keyword",
              "doc_values": "true"
            },
            "check.description": {
              "type": "text"
            },
            "check.rationale": {
              "type": "text"
            },
            "check.references": {
              "type": "text"
            },
            "check.identifiers": {
              "type": "text"
            },
            "check.oval.id": {
              "type": "keyword",
              "doc_values": "true"
            },
            "scan.id": {
              "type": "keyword",
              "doc_values": "true"
            },
            "scan.content": {
              "type": "keyword",
              "doc_values": "true"
            },
            "scan.benchmark.id": {
              "type": "keyword",
              "doc_values": "true"
            },
            "scan.profile.title": {
              "type": "keyword",
              "doc_values": "true"
            },
            "scan.profile.id": {
              "type": "keyword",
              "doc_values": "true"
            },
            "scan.score": {
              "type": "double",
              "doc_values": "true"
            },
            "scan.return_code": {
              "type": "long",
              "doc_values": "true"
            }
          }
        }
      }
    }
  }
}

logstash/config/logstash.conf (Normal file, 43 lines)
@@ -0,0 +1,43 @@
input {
  file {
    type => "ossec-alerts"
    path => "/var/ossec/data/logs/alerts/alerts.json"
    codec => "json"
  }
}

filter {
  geoip {
    source => "srcip"
    target => "GeoLocation"
  }
  if [SyscheckFile][path] {
    mutate {
      add_field => {"file" => "%{[SyscheckFile][path]}"}
    }
  }
  grok {
    match => {
      "file" => ["^/.+/(?<audit_file>(.+)$)|^[A-Z]:.+\\(?<audit_file>(.+)$)|^[A-Z]:\\.+/(?<audit_file>(.+)$)"]
    }
  }
  mutate {
    rename => [ "hostname", "AgentName" ]
    rename => [ "agentip", "AgentIP" ]
    rename => [ "[rule][comment]", "[rule][description]" ]
    rename => [ "[rule][level]", "[rule][AlertLevel]" ]
    remove_field => [ "timestamp", "beat", "fields", "input_type", "tags", "count" ]
  }
}

output {
  #stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "ossec-%{+YYYY.MM.dd}"
    document_type => "ossec"
    template => "/etc/logstash/elastic5-ossec-template.json"
    template_name => "ossec"
    template_overwrite => true
  }
}

logstash/config/run.sh (Normal file, 30 lines)
@@ -0,0 +1,30 @@
#!/bin/bash

#
# OSSEC container bootstrap. See the README for information about the environment
# variables expected by this script.
#

#
# Apply Templates
#

set -e

# Add logstash as command if needed
if [ "${1:0:1}" = '-' ]; then
  set -- logstash "$@"
fi

# Run as user "logstash" if the command is "logstash"
if [ "$1" = 'logstash' ]; then
  set -- gosu logstash "$@"
fi

exec "$@"

#echo "Wait one min to logstash restart"
#sleep 60
#curl -XPUT -v -H "Expect:" "http://elasticsearch:9200/_template/ossec" -d@/etc/logstash/elastic5-ossec-template.json

wazuh/Dockerfile (Normal file, 40 lines)
@@ -0,0 +1,40 @@
FROM milcom/centos7-systemd

COPY config/*.repo /etc/yum.repos.d/

RUN rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

RUN yum -y update; yum clean all
RUN yum -y install epel-release openssl; yum clean all
RUN groupadd -g 1000 ossec
RUN useradd -u 1000 -g 1000 ossec
RUN yum install -y wazuh-manager wazuh-api

ADD config/default_agent /var/ossec/default_agent

RUN service wazuh-manager restart &&\
    /var/ossec/bin/manage_agents -f /default_agent &&\
    rm /var/ossec/default_agent &&\
    echo -n "" > /var/ossec/logs/ossec.log

ADD config/data_dirs.env /data_dirs.env
ADD config/init.bash /init.bash
# Sync calls are due to https://github.com/docker/docker/issues/9547
RUN chmod 755 /init.bash &&\
    sync && /init.bash &&\
    sync && rm /init.bash

ADD config/run.sh /tmp/run.sh
RUN chmod 755 /tmp/run.sh

VOLUME ["/var/ossec/data"]

EXPOSE 55000/tcp 1514/udp 1515/tcp 514/udp

# Run the bootstrap script so that the container will stay alive

ENTRYPOINT ["/tmp/run.sh"]

wazuh/config/data_dirs.env (Normal file, 7 lines)
@@ -0,0 +1,7 @@
i=0
DATA_DIRS[((i++))]="etc"
DATA_DIRS[((i++))]="rules"
DATA_DIRS[((i++))]="logs"
DATA_DIRS[((i++))]="stats"
DATA_DIRS[((i++))]="queue"
export DATA_DIRS

wazuh/config/default_agent (Normal file, 1 line)
@@ -0,0 +1 @@
127.0.0.1,DEFAULT_LOCAL_AGENT

wazuh/config/init.bash (Normal file, 12 lines)
@@ -0,0 +1,12 @@
#!/bin/bash

#
# Initialize the custom data directory layout
#
source /data_dirs.env

cd /var/ossec
for ossecdir in "${DATA_DIRS[@]}"; do
  mv ${ossecdir} ${ossecdir}-template
  ln -s data/${ossecdir} ${ossecdir}
done

wazuh/config/run.sh (Normal file, 106 lines)
@@ -0,0 +1,106 @@
#!/bin/bash

#
# OSSEC container bootstrap. See the README for information about the environment
# variables expected by this script.
#

#
# Startup the services
#

source /data_dirs.env
FIRST_TIME_INSTALLATION=false
DATA_PATH=/var/ossec/data

for ossecdir in "${DATA_DIRS[@]}"; do
  if [ ! -e "${DATA_PATH}/${ossecdir}" ]
  then
    echo "Installing ${ossecdir}"
    cp -pr /var/ossec/${ossecdir}-template ${DATA_PATH}/${ossecdir}
    FIRST_TIME_INSTALLATION=true
  fi
done

touch ${DATA_PATH}/process_list
chgrp ossec ${DATA_PATH}/process_list
chmod g+rw ${DATA_PATH}/process_list

AUTO_ENROLLMENT_ENABLED=${AUTO_ENROLLMENT_ENABLED:-true}

if [ $FIRST_TIME_INSTALLATION == true ]
then
  if [ $AUTO_ENROLLMENT_ENABLED == true ]
  then
    if [ ! -e ${DATA_PATH}/etc/sslmanager.key ]
    then
      echo "Creating ossec-authd key and cert"
      openssl genrsa -out ${DATA_PATH}/etc/sslmanager.key 4096
      openssl req -new -x509 -key ${DATA_PATH}/etc/sslmanager.key\
        -out ${DATA_PATH}/etc/sslmanager.cert -days 3650\
        -subj /CN=${HOSTNAME}/
    fi
  fi

  #
  # Support SYSLOG forwarding, if configured
  #
  SYSLOG_FORWARDING_ENABLED=${SYSLOG_FORWARDING_ENABLED:-false}
  if [ $SYSLOG_FORWARDING_ENABLED == true ]
  then
    if [ -z "$SYSLOG_FORWARDING_SERVER_IP" ]
    then
      echo "Cannot set up syslog forwarding because SYSLOG_FORWARDING_SERVER_IP is not defined"
    else
      SYSLOG_FORWARDING_SERVER_PORT=${SYSLOG_FORWARDING_SERVER_PORT:-514}
      SYSLOG_FORWARDING_FORMAT=${SYSLOG_FORWARDING_FORMAT:-default}
      SYSLOG_XML_SNIPPET="\
    <syslog_output>\n\
        <server>${SYSLOG_FORWARDING_SERVER_IP}</server>\n\
        <port>${SYSLOG_FORWARDING_SERVER_PORT}</port>\n\
        <format>${SYSLOG_FORWARDING_FORMAT}</format>\n\
    </syslog_output>";

      cat /var/ossec/etc/ossec.conf |\
        perl -pe "s,<ossec_config>,<ossec_config>\n${SYSLOG_XML_SNIPPET}\n," \
        > /var/ossec/etc/ossec.conf-new
      mv -f /var/ossec/etc/ossec.conf-new /var/ossec/etc/ossec.conf
      chgrp ossec /var/ossec/etc/ossec.conf
      /var/ossec/bin/ossec-control enable client-syslog
    fi
  fi
fi

function ossec_shutdown(){
  /var/ossec/bin/ossec-control stop;
  if [ $AUTO_ENROLLMENT_ENABLED == true ]
  then
    kill $AUTHD_PID
  fi
}

# Trap exit signals and do a proper shutdown
trap "ossec_shutdown; exit" SIGINT SIGTERM

chmod -R g+rw ${DATA_PATH}

if [ $AUTO_ENROLLMENT_ENABLED == true ]
then
  echo "Starting ossec-authd..."
  /var/ossec/bin/ossec-authd -p 1515 -g ossec $AUTHD_OPTIONS >/dev/null 2>&1 &
  AUTHD_PID=$!
fi
sleep 15 # give ossec a reasonable amount of time to start before checking status
LAST_OK_DATE=`date +%s`

## Update rules and decoders with Wazuh Ruleset
cd /var/ossec/update/ruleset && python ossec_ruleset.py

/bin/node /var/ossec/api/app.js &
/var/ossec/bin/ossec-control restart

tail -f /var/ossec/logs/ossec.log

wazuh/config/wazuh.repo (Normal file, 7 lines)
@@ -0,0 +1,7 @@
[wazuh_repo]
gpgcheck=1
gpgkey=https://packages.wazuh.com/key/RPM-GPG-KEY-WAZUH
enabled=1
name=CENTOS-$releasever - Wazuh
baseurl=https://packages.wazuh.com/yumtest/el/$releasever/$basearch
protect=1