mirror of https://github.com/wazuh/wazuh-docker.git, synced 2025-11-02 13:03:20 +00:00

Compare 3.10_7.3.0...2.0_5.4.2 (1 commit, 78bf058a9f): removes CHANGELOG.md (186 lines) and LICENSE (475 lines). The deleted file contents follow.

CHANGELOG.md
# Change Log

All notable changes to this project will be documented in this file.

## Wazuh Docker v3.9.5_7.2.1

### Added

- Update to Wazuh version 3.9.5_7.2.1

## Wazuh Docker v3.9.4_7.2.0

### Added

- Update to Wazuh version 3.9.4_7.2.0
- Implemented Wazuh Filebeat Module ([jm404](https://www.github.com/jm404)) [#2a77c6a](https://github.com/wazuh/wazuh-docker/commit/2a77c6a6e6bf78f2492adeedbade7a507d9974b2)

## Wazuh Docker v3.9.3_7.2.0

### Fixed

- Wazuh-docker reinserts cluster settings after resuming containers ([@manuasir](https://github.com/manuasir)) [#213](https://github.com/wazuh/wazuh-docker/pull/213)

## Wazuh Docker v3.9.2_7.1.1

### Added

- Update to Wazuh version 3.9.2_7.1.1

## Wazuh Docker v3.9.2_6.8.0

### Added

- Update to Wazuh version 3.9.2_6.8.0

## Wazuh Docker v3.9.1_7.1.0

### Added

- Support for Elastic v7.1.0
- New environment variables for Kibana ([@manuasir](https://github.com/manuasir)) [#22ad43](https://github.com/wazuh/wazuh-docker/commit/22ad4360f548e54bb0c5e929f8c84a186ad2ab88)

## Wazuh Docker v3.9.1_6.8.0

### Added

- Update to Wazuh version 3.9.1_6.8.0 ([#181](https://github.com/wazuh/wazuh-docker/pull/181))

### Fixed

- Fixed `ELASTICSEARCH_KIBANA_IP` environment variable ([@manuasir](https://github.com/manuasir)) ([#181](https://github.com/wazuh/wazuh-docker/pull/181))
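Variables like `ELASTICSEARCH_KIBANA_IP` are passed to the containers at deploy time. A minimal compose sketch of that pattern follows; the service names and the value shown are illustrative assumptions, not taken from this repository:

```yaml
# Illustrative fragment: pointing the Kibana container at Elasticsearch
# via the ELASTICSEARCH_KIBANA_IP variable mentioned above.
# Image tag and endpoint value are assumptions for the example.
services:
  kibana:
    image: wazuh/wazuh-kibana:3.9.1_6.8.0
    environment:
      - ELASTICSEARCH_KIBANA_IP=http://elasticsearch:9200
```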
## Wazuh Docker v3.9.0_6.7.2

### Changed

- Update Elastic Stack version to 6.7.2.

## Wazuh Docker v3.9.0_6.7.1

### Added

- Support for X-Pack authorized requests ([@manuasir](https://github.com/manuasir)) ([#119](https://github.com/wazuh/wazuh-docker/pull/119))
- Add Elasticsearch cluster configuration ([@SitoRBJ](https://github.com/SitoRBJ)) ([#146](https://github.com/wazuh/wazuh-docker/pull/146))
- Add Elasticsearch cluster configuration ([@Phandora](https://github.com/Phandora)) ([#140](https://github.com/wazuh/wazuh-docker/pull/140))
- Set Nginx to support several user/password pairs in Kibana ([@toniMR](https://github.com/toniMR)) ([#136](https://github.com/wazuh/wazuh-docker/pull/136))

### Changed

- Use `LS_JAVA_OPTS` instead of the old `LS_HEAP_SIZE` ([@ruffy91](https://github.com/ruffy91)) ([#139](https://github.com/wazuh/wazuh-docker/pull/139))
- Change the original Wazuh docker image to allow adding code in the entrypoint ([@Phandora](https://github.com/phandora)) ([#151](https://github.com/wazuh/wazuh-docker/pull/151))
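The `LS_JAVA_OPTS` change above means the Logstash heap is sized through standard JVM flags rather than the deprecated `LS_HEAP_SIZE` variable. A hedged compose sketch (heap values are illustrative, not from this repository):

```yaml
# Illustrative fragment: LS_HEAP_SIZE=512m expressed as explicit JVM flags.
services:
  logstash:
    environment:
      - LS_JAVA_OPTS=-Xms512m -Xmx512m
```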
### Removed

- Remove files from the Wazuh image ([@Phandora](https://github.com/phandora)) ([#153](https://github.com/wazuh/wazuh-docker/pull/153))
## Wazuh Docker v3.8.2_6.7.0

### Changed

- Update Elastic Stack version to 6.7.0. ([#144](https://github.com/wazuh/wazuh-docker/pull/144))

## Wazuh Docker v3.8.2_6.6.2

### Changed

- Update Elastic Stack version to 6.6.2. ([#130](https://github.com/wazuh/wazuh-docker/pull/130))

## Wazuh Docker v3.8.2_6.6.1

### Changed

- Update Elastic Stack version to 6.6.1. ([#129](https://github.com/wazuh/wazuh-docker/pull/129))

## Wazuh Docker v3.8.2_6.5.4

### Added

- Add Wazuh-Elasticsearch. ([#106](https://github.com/wazuh/wazuh-docker/pull/106))
- Store the Filebeat registry at _/var/lib/filebeat/registry_. ([#109](https://github.com/wazuh/wazuh-docker/pull/109))
- Add the option to disable some X-Pack features. ([#111](https://github.com/wazuh/wazuh-docker/pull/111))
- Make Wazuh-Kibana customizable at the plugin level. ([#117](https://github.com/wazuh/wazuh-docker/pull/117))
- Add env variables for the alerts data flow. ([#118](https://github.com/wazuh/wazuh-docker/pull/118))
- Add a new Logstash entrypoint. ([#135](https://github.com/wazuh/wazuh-docker/pull/135/files))
- Welcome screen management. ([#133](https://github.com/wazuh/wazuh-docker/pull/133))

### Changed

- Update to Wazuh version 3.8.2. ([#105](https://github.com/wazuh/wazuh-docker/pull/105))

### Removed

- Remove alerts created at build time. ([#137](https://github.com/wazuh/wazuh-docker/pull/137))

## Wazuh Docker v3.8.1_6.5.4

### Changed

- Update to Wazuh version 3.8.1. ([#102](https://github.com/wazuh/wazuh-docker/pull/102))

## Wazuh Docker v3.8.0_6.5.4

### Changed

- Upgrade to version 3.8.0_6.5.4. ([#97](https://github.com/wazuh/wazuh-docker/pull/97))

### Removed

- Remove the cluster.py workaround. ([#99](https://github.com/wazuh/wazuh-docker/pull/99))

## Wazuh Docker v3.7.2_6.5.4

### Added

- Improvements to Kibana settings. ([#91](https://github.com/wazuh/wazuh-docker/pull/91))
- Add Kibana environment variables for the Wazuh app config.yml. ([#89](https://github.com/wazuh/wazuh-docker/pull/89))

### Changed

- Update Elastic Stack version to 6.5.4. ([#82](https://github.com/wazuh/wazuh-docker/pull/82))
- Add env credentials for Nginx. ([#86](https://github.com/wazuh/wazuh-docker/pull/86))
- Improve the Filebeat configuration. ([#88](https://github.com/wazuh/wazuh-docker/pull/88))

### Fixed

- Temporary fix for the Wazuh cluster master node in Kubernetes. ([#84](https://github.com/wazuh/wazuh-docker/pull/84))

## Wazuh Docker v3.7.2_6.5.3

### Changed

- Remove the temporary fix for the AWS integration. ([#81](https://github.com/wazuh/wazuh-docker/pull/81))

### Fixed

- Fix upgrade errors caused by wrong files. ([#80](https://github.com/wazuh/wazuh-docker/pull/80))

## Wazuh Docker v3.7.0_6.5.0

### Changed

- Adapt to Elastic Stack 6.5.0.

## Wazuh Docker v3.7.0_6.4.3

### Added

- Allow custom scripts or commands to run before service start ([#58](https://github.com/wazuh/wazuh-docker/pull/58))
- Added a description for wazuh-nginx ([#59](https://github.com/wazuh/wazuh-docker/pull/59))
- Added a license file to match the https://github.com/wazuh/wazuh LICENSE ([#60](https://github.com/wazuh/wazuh-docker/pull/60))
- Added SMTP packages ([#67](https://github.com/wazuh/wazuh-docker/pull/67))

### Changed

- Increased the proxy buffer for the NGINX Kibana proxy ([#51](https://github.com/wazuh/wazuh-docker/pull/51))
- Updated the Logstash config to remove deprecation warnings ([#55](https://github.com/wazuh/wazuh-docker/pull/55))
- Set the ossec user's home path ([#61](https://github.com/wazuh/wazuh-docker/pull/61))

### Fixed

- Fixed a bug that prevented the API from starting when the Wazuh manager was updated, by changing which files are stored in the volume. ([#65](https://github.com/wazuh/wazuh-docker/pull/65))
- Fixed a script reference ([#62](https://github.com/wazuh/wazuh-docker/pull/62/files))

## Wazuh Docker v3.6.1_6.4.3

Wazuh-Docker starting point.
LICENSE
Portions Copyright (C) 2019 Wazuh, Inc.
Based on work Copyright (C) 2003 - 2013 Trend Micro, Inc.

This program is a free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License (version 2) as
published by the FSF - Free Software Foundation.

In addition, certain source files in this program permit linking with the
OpenSSL library (http://www.openssl.org), which otherwise wouldn't be allowed
under the GPL. For purposes of identifying OpenSSL, most source files giving
this permission limit it to versions of OpenSSL having a license identical to
that listed in this file (see section "OpenSSL LICENSE" below). It is not
necessary for the copyright years to match between this file and the OpenSSL
version in question. However, note that because this file is an extension of
the license statements of these source files, this file may not be changed
except with permission from all copyright holders of source files in this
program which reference this file.

Note that this license applies to the source code, as well as
decoders, rules and any other data file included with OSSEC (unless
otherwise specified).

For the purpose of this license, we consider an application to constitute a
"derivative work" or a work based on this program if it does any of the
following (list not exclusive):

  * Integrates source code/data files from OSSEC.
  * Includes OSSEC copyrighted material.
  * Includes/integrates OSSEC into a proprietary executable installer.
  * Links to a library or executes a program that does any of the above.

This list is not exclusive, but just a clarification of our interpretation
of derived works. These restrictions only apply if you actually redistribute
OSSEC (or parts of it).

We don't consider these to be added restrictions on top of the GPL,
but just a clarification of how we interpret "derived works" as it
applies to OSSEC. This is similar to the way Linus Torvalds has
announced his interpretation of how "derived works" applies to Linux kernel
modules. Our interpretation refers only to OSSEC - we don't speak
for any other GPL products.

 * As a special exception, the copyright holders give
 * permission to link the code of portions of this program with the
 * OpenSSL library under certain conditions as described in each
 * individual source file, and distribute linked combinations
 * including the two.
 * You must obey the GNU General Public License in all respects
 * for all of the code used other than OpenSSL. If you modify
 * file(s) with this exception, you may extend this exception to your
 * version of the file(s), but you are not obligated to do so. If you
 * do not wish to do so, delete this exception statement from your
 * version. If you delete this exception statement from all source
 * files in the program, then also delete it here.

OSSEC HIDS is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License Version 2 below for more details.

-----------------------------------------------------------------------------
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991

Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.

Preamble

The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.

When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.

To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.

For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.

We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.

Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.

Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.

The precise terms and conditions for copying, distribution and
modification follow.

GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".

Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.

1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.

You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.

2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:

  a) You must cause the modified files to carry prominent notices
  stating that you changed the files and the date of any change.

  b) You must cause any work that you distribute or publish, that in
  whole or in part contains or is derived from the Program or any
  part thereof, to be licensed as a whole at no charge to all third
  parties under the terms of this License.

  c) If the modified program normally reads commands interactively
  when run, you must cause it, when started running for such
  interactive use in the most ordinary way, to print or display an
  announcement including an appropriate copyright notice and a
  notice that there is no warranty (or else, saying that you provide
  a warranty) and that users may redistribute the program under
  these conditions, and telling the user how to view a copy of this
  License. (Exception: if the Program itself is interactive but
  does not normally print such an announcement, your work based on
  the Program is not required to print an announcement.)

These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.

Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.

In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.

3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:

  a) Accompany it with the complete corresponding machine-readable
  source code, which must be distributed under the terms of Sections
  1 and 2 above on a medium customarily used for software interchange; or,

  b) Accompany it with a written offer, valid for at least three
  years, to give any third party, for a charge no more than your
  cost of physically performing source distribution, a complete
  machine-readable copy of the corresponding source code, to be
  distributed under the terms of Sections 1 and 2 above on a medium
  customarily used for software interchange; or,

  c) Accompany it with the information you received as to the offer
  to distribute corresponding source code. (This alternative is
  allowed only for noncommercial distribution and only if you
  received the program in object code or executable form with such
  an offer, in accord with Subsection b above.)

The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.

If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.

4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.

5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.

6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.

7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.

It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.

This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.

8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.

9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.

10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.

NO WARRANTY

11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.

12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

END OF TERMS AND CONDITIONS
-------------------------------------------------------------------------------
|
|
||||||
|
|
||||||
OpenSSL License
---------------

LICENSE ISSUES
==============

The OpenSSL toolkit stays under a dual license, i.e. both the conditions of
the OpenSSL License and the original SSLeay license apply to the toolkit.
See below for the actual license texts. Actually both licenses are BSD-style
Open Source licenses. In case of any license issues related to OpenSSL
please contact openssl-core@openssl.org.

OpenSSL License
---------------

/* ====================================================================
 * Copyright (c) 1998-2001 The OpenSSL Project.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 *
 * 3. All advertising materials mentioning features or use of this
 *    software must display the following acknowledgment:
 *    "This product includes software developed by the OpenSSL Project
 *    for use in the OpenSSL Toolkit. (http://www.openssl.org/)"
 *
 * 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to
 *    endorse or promote products derived from this software without
 *    prior written permission. For written permission, please contact
 *    openssl-core@openssl.org.
 *
 * 5. Products derived from this software may not be called "OpenSSL"
 *    nor may "OpenSSL" appear in their names without prior written
 *    permission of the OpenSSL Project.
 *
 * 6. Redistributions of any form whatsoever must retain the following
 *    acknowledgment:
 *    "This product includes software developed by the OpenSSL Project
 *    for use in the OpenSSL Toolkit (http://www.openssl.org/)"
 *
 * THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
 * EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE OpenSSL PROJECT OR
 * ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
 * OF THE POSSIBILITY OF SUCH DAMAGE.
 * ====================================================================
 *
 * This product includes cryptographic software written by Eric Young
 * (eay@cryptsoft.com).  This product includes software written by Tim
 * Hudson (tjh@cryptsoft.com).
 *
 */

Original SSLeay License
-----------------------

/* Copyright (C) 1995-1998 Eric Young (eay@cryptsoft.com)
 * All rights reserved.
 *
 * This package is an SSL implementation written
 * by Eric Young (eay@cryptsoft.com).
 * The implementation was written so as to conform with Netscapes SSL.
 *
 * This library is free for commercial and non-commercial use as long as
 * the following conditions are aheared to.  The following conditions
 * apply to all code found in this distribution, be it the RC4, RSA,
 * lhash, DES, etc., code; not just the SSL code.  The SSL documentation
 * included with this distribution is covered by the same copyright terms
 * except that the holder is Tim Hudson (tjh@cryptsoft.com).
 *
 * Copyright remains Eric Young's, and as such any Copyright notices in
 * the code are not to be removed.
 * If this package is used in a product, Eric Young should be given attribution
 * as the author of the parts of the library used.
 * This can be in the form of a textual message at program startup or
 * in documentation (online or textual) provided with the package.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *    "This product includes cryptographic software written by
 *     Eric Young (eay@cryptsoft.com)"
 *    The word 'cryptographic' can be left out if the routines from the library
 *    being used are not cryptographic related :-).
 * 4. If you include any Windows specific code (or a derivative thereof) from
 *    the apps directory (application code) you must include an acknowledgement:
 *    "This product includes software written by Tim Hudson (tjh@cryptsoft.com)"
 *
 * THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * The licence and distribution terms for any publically available version or
 * derivative of this code cannot be changed.  i.e. this code cannot simply be
 * copied and put under another distribution licence
 * [including the GNU Public Licence.]
 */
78 README.md
@@ -1,77 +1,21 @@
-# Wazuh containers for Docker
+# IMPORTANT NOTE
 
-[](https://wazuh.com/community/join-us-on-slack/)
-[](https://groups.google.com/forum/#!forum/wazuh)
-[](https://documentation.wazuh.com)
-[](https://wazuh.com)
+The first time that you run this container it can take a while until Kibana finishes its configuration; the Wazuh plugin can take a few minutes to finish installing, so please be patient.
 
-In this repository you will find the containers to run:
+# Docker container Wazuh 2.0 + ELK (5.4.2)
 
-* wazuh: Runs the Wazuh manager, Wazuh API and Filebeat (for integration with the Elastic Stack).
-* wazuh-kibana: Provides a web user interface to browse through alerts data. It includes the Wazuh plugin for Kibana, which allows you to visualize agents' configuration and status.
-* wazuh-nginx: Proxies the Kibana container, adding HTTPS (via a self-signed SSL certificate) and [Basic authentication](https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication#Basic_authentication_scheme).
-* wazuh-elasticsearch: An Elasticsearch container (working as a single-node cluster) using Elastic Stack Docker images. **Make sure to increase the `vm.max_map_count` setting, as detailed in the [Wazuh documentation](https://documentation.wazuh.com/current/docker/wazuh-container.html#increase-max-map-count-on-your-host-linux).**
+This Docker container's source files can be found in our [Wazuh GitHub repository](https://github.com/wazuh/wazuh). It includes both an OSSEC manager and an Elasticsearch single-node cluster, with Logstash and Kibana. You can find more information on how these components work together in our documentation.
 
-In addition, a docker-compose file is provided to launch the containers mentioned above.
-
-* Elasticsearch cluster. The Elasticsearch Dockerfile exposes variables to configure an Elasticsearch cluster. These variables are used by *config_cluster.sh* to set them in the *elasticsearch.yml* configuration file. You can see the meaning of the node variables [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html) and other cluster settings [here](https://github.com/elastic/elasticsearch/blob/master/distribution/src/config/elasticsearch.yml).
-
 ## Documentation
 
-* [Wazuh full documentation](http://documentation.wazuh.com)
-* [Wazuh documentation for Docker](https://documentation.wazuh.com/current/docker/index.html)
-* [Docker hub](https://hub.docker.com/u/wazuh)
+* [Full documentation](http://documentation.wazuh.com)
+* [Wazuh-docker module documentation](https://documentation.wazuh.com/current/docker/index.html)
+* [Hub docker](https://hub.docker.com/u/wazuh)
 
-## Directory structure
-
-    wazuh-docker
-    ├── docker-compose.yml
-    ├── kibana
-    │   ├── config
-    │   │   ├── entrypoint.sh
-    │   │   └── kibana.yml
-    │   └── Dockerfile
-    ├── LICENSE
-    ├── nginx
-    │   ├── config
-    │   │   └── entrypoint.sh
-    │   └── Dockerfile
-    ├── README.md
-    ├── CHANGELOG.md
-    ├── VERSION
-    ├── test.txt
-    └── wazuh
-        ├── config
-        │   ├── data_dirs.env
-        │   ├── entrypoint.sh
-        │   ├── filebeat.runit.service
-        │   ├── filebeat.yml
-        │   ├── init.bash
-        │   ├── postfix.runit.service
-        │   ├── wazuh-api.runit.service
-        │   └── wazuh.runit.service
-        └── Dockerfile
-
-## Branches
-
-* The `stable` branch corresponds to the latest Wazuh-Docker stable version.
-* The `master` branch contains the latest code; be aware of possible bugs on this branch.
-* `Wazuh.Version_ElasticStack.Version` branches (for example 3.9.5_7.2.1) contain the current release referenced in Docker Hub. The container images are installed under the current version of this branch.
-
-## Credits and thank you
+## Credits and thank you
 
-These Docker containers are based on:
-
-* "deviantony" dockerfiles, which can be found at [https://github.com/deviantony/docker-elk](https://github.com/deviantony/docker-elk)
-* "xetus-oss" dockerfiles, which can be found at [https://github.com/xetus-oss/docker-ossec-server](https://github.com/xetus-oss/docker-ossec-server)
-
-We thank them and everyone else who has contributed to this project.
-
-## License and copyright
-
-Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
-
-## Web references
-
-[Wazuh website](http://wazuh.com)
+These Docker containers are based on "deviantony" dockerfiles, which can be found at [https://github.com/deviantony/docker-elk](https://github.com/deviantony/docker-elk), and "xetus-oss" dockerfiles, which can be found at [https://github.com/xetus-oss/docker-ossec-server](https://github.com/xetus-oss/docker-ossec-server). We created our own fork, which we test and maintain. Thank you Anthony Lapenna for your contribution to the community.
+
+## References
+
+* [Wazuh website](http://wazuh.com)
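The `vm.max_map_count` warning in the README above is a requirement on the Docker host itself, not inside the containers; a hedged sketch of checking and raising it (262144 is the value the Elasticsearch documentation recommends; the write operations need root, so they are shown commented out):

```shell
# Read the current mmap-count limit (unprivileged, Linux):
cat /proc/sys/vm/max_map_count

# As root on the Docker host, raise it for the current boot:
#   sysctl -w vm.max_map_count=262144
# ...and persist it across reboots:
#   echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
```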
@@ -1,54 +1,71 @@
-# Wazuh App Copyright (C) 2019 Wazuh Inc. (License GPLv2)
 version: '2'
 
 services:
   wazuh:
-    image: wazuh/wazuh:3.9.5_7.2.1
+    image: wazuh/wazuh
     hostname: wazuh-manager
     restart: always
     ports:
-      - "1514:1514/udp"
+      - "1514/udp:1514/udp"
       - "1515:1515"
-      - "514:514/udp"
+      - "514/udp:514/udp"
       - "55000:55000"
-  elasticsearch:
-    image: wazuh/wazuh-elasticsearch:3.9.5_7.2.1
-    hostname: elasticsearch
-    restart: always
-    ports:
-      - "9200:9200"
-    environment:
-      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
-      - ELASTIC_CLUSTER=true
-      - CLUSTER_NODE_MASTER=true
-      - CLUSTER_MASTER_NODE_NAME=es01
-    ulimits:
-      memlock:
-        soft: -1
-        hard: -1
-    mem_limit: 2g
-  kibana:
-    image: wazuh/wazuh-kibana:3.9.5_7.2.1
-    hostname: kibana
-    restart: always
-    depends_on:
-      - elasticsearch
-    links:
-      - elasticsearch:elasticsearch
-      - wazuh:wazuh
-  nginx:
-    image: wazuh/wazuh-nginx:3.9.5_7.2.1
-    hostname: nginx
-    restart: always
-    environment:
-      - NGINX_PORT=443
-      - NGINX_CREDENTIALS
-    ports:
-      - "80:80"
-      - "443:443"
-    depends_on:
-      - kibana
-    links:
-      - kibana:kibana
+    networks:
+      - docker_elk
+#    volumes:
+#      - my-path:/var/ossec/data
+#      - my-path:/etc/postfix
+
+  logstash:
+    image: wazuh/wazuh-logstash
+    hostname: logstash
+    restart: always
+    command: -f /etc/logstash/conf.d/
+#    volumes:
+#      - my-path:/etc/logstash/conf.d
+    links:
+      - kibana
+      - elasticsearch
+    ports:
+      - "5000:5000"
+    networks:
+      - docker_elk
+    depends_on:
+      - elasticsearch
+    environment:
+      - LS_HEAP_SIZE=2048m
+
+  elasticsearch:
+    image: elasticsearch:5.4.2
+    hostname: elasticsearch
+    restart: always
+    command: elasticsearch -E node.name="node-1" -E cluster.name="wazuh" -E network.host=0.0.0.0
+    ports:
+      - "9200:9200"
+      - "9300:9300"
+    environment:
+      ES_JAVA_OPTS: "-Xms2g -Xmx2g"
+#    volumes:
+#      - my-path:/usr/share/elasticsearch/data
+    networks:
+      - docker_elk
+
+  kibana:
+    image: wazuh/wazuh-kibana
+    hostname: kibana
+    restart: always
+    ports:
+      - "5601:5601"
+    networks:
+      - docker_elk
+    depends_on:
+      - elasticsearch
+    entrypoint: sh wait-for-it.sh elasticsearch
+#    environment:
+#      - "WAZUH_KIBANA_PLUGIN_URL=http://your.repo/wazuhapp-2.0_5.4.2.zip"
+
+networks:
+  docker_elk:
+    driver: bridge
+    ipam:
+      config:
+        - subnet: 172.25.0.0/24
@@ -1,54 +0,0 @@
-# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
-ARG ELASTIC_VERSION=7.3.0
-FROM docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}
-ARG S3_PLUGIN_URL="https://artifacts.elastic.co/downloads/elasticsearch-plugins/repository-s3/repository-s3-${ELASTIC_VERSION}.zip"
-
-ENV ELASTICSEARCH_URL="http://elasticsearch:9200"
-
-ENV ALERTS_SHARDS="1" \
-    ALERTS_REPLICAS="0"
-
-ENV API_USER="foo" \
-    API_PASS="bar"
-
-ENV XPACK_ML="true"
-
-ENV ENABLE_CONFIGURE_S3="false"
-
-ARG TEMPLATE_VERSION=v3.9.5
-
-# Elasticsearch cluster configuration environment variables.
-# If ELASTIC_CLUSTER is set to "true", the following variables are added to the Elasticsearch configuration.
-# CLUSTER_INITIAL_MASTER_NODES is set to the node's own name by default.
-ENV ELASTIC_CLUSTER="false" \
-    CLUSTER_NAME="wazuh" \
-    CLUSTER_NODE_MASTER="false" \
-    CLUSTER_NODE_DATA="true" \
-    CLUSTER_NODE_INGEST="true" \
-    CLUSTER_NODE_NAME="wazuh-elasticsearch" \
-    CLUSTER_MASTER_NODE_NAME="master-node" \
-    CLUSTER_MEMORY_LOCK="true" \
-    CLUSTER_DISCOVERY_SERVICE="wazuh-elasticsearch" \
-    CLUSTER_NUMBER_OF_MASTERS="2" \
-    CLUSTER_MAX_NODES="1" \
-    CLUSTER_DELAYED_TIMEOUT="1m" \
-    CLUSTER_INITIAL_MASTER_NODES="wazuh-elasticsearch"
-
-COPY config/entrypoint.sh /entrypoint.sh
-RUN chmod 755 /entrypoint.sh
-
-COPY --chown=elasticsearch:elasticsearch ./config/load_settings.sh ./
-RUN chmod +x ./load_settings.sh
-
-RUN bin/elasticsearch-plugin install --batch ${S3_PLUGIN_URL}
-
-COPY config/configure_s3.sh ./config/configure_s3.sh
-RUN chmod 755 ./config/configure_s3.sh
-
-COPY --chown=elasticsearch:elasticsearch ./config/config_cluster.sh ./
-RUN chmod +x ./config_cluster.sh
-
-ENTRYPOINT ["/entrypoint.sh"]
-CMD ["elasticsearch"]
@@ -1,57 +0,0 @@
-#!/bin/bash
-# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
-
-elastic_config_file="/usr/share/elasticsearch/config/elasticsearch.yml"
-
-remove_single_node_conf(){
-    if grep -Fq "discovery.type" $1; then
-        sed -i '/discovery.type\: /d' $1
-    fi
-}
-
-remove_cluster_config(){
-    sed -i '/# cluster node/,/# end cluster config/d' $1
-}
-
-# If the Elasticsearch cluster is enabled, then set up elasticsearch.yml
-if [[ $ELASTIC_CLUSTER == "true" && $CLUSTER_NODE_MASTER != "" && $CLUSTER_NODE_DATA != "" && $CLUSTER_NODE_INGEST != "" && $CLUSTER_MASTER_NODE_NAME != "" ]]; then
-    # Remove the old configuration
-    remove_single_node_conf $elastic_config_file
-    remove_cluster_config $elastic_config_file
-
-    if [[ $CLUSTER_NODE_MASTER == "true" ]]; then
-        # Add the master configuration;
-        # cluster.initial_master_nodes bootstraps the cluster
-        cat > $elastic_config_file << EOF
-# cluster node
-network.host: 0.0.0.0
-node.name: $CLUSTER_MASTER_NODE_NAME
-node.master: $CLUSTER_NODE_MASTER
-cluster.initial_master_nodes:
-  - $CLUSTER_MASTER_NODE_NAME
-# end cluster config
-EOF
-
-    elif [[ $CLUSTER_NODE_NAME != "" ]]; then
-        # Remove the old configuration
-        remove_single_node_conf $elastic_config_file
-        remove_cluster_config $elastic_config_file
-
-        cat > $elastic_config_file << EOF
-# cluster node
-network.host: 0.0.0.0
-node.name: $CLUSTER_NODE_NAME
-node.master: false
-discovery.seed_hosts:
-  - $CLUSTER_MASTER_NODE_NAME
-  - $CLUSTER_NODE_NAME
-# end cluster config
-EOF
-    fi
-# If the cluster is disabled, then set a single-node configuration
-else
-    # Remove the old configuration
-    remove_single_node_conf $elastic_config_file
-    remove_cluster_config $elastic_config_file
-    echo "discovery.type: single-node" >> $elastic_config_file
-fi
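The two helper functions in config_cluster.sh can be exercised stand-alone; a hypothetical demo on a scratch file (instead of the real elasticsearch.yml) shows both cleanup passes removing any previous single-node or cluster block:

```shell
# Demo of remove_single_node_conf / remove_cluster_config on a scratch file.
cfg=$(mktemp)
cat > "$cfg" << 'EOF'
# cluster node
network.host: 0.0.0.0
node.name: old-master
# end cluster config
discovery.type: single-node
EOF

sed -i '/discovery.type\: /d' "$cfg"                       # remove_single_node_conf
sed -i '/# cluster node/,/# end cluster config/d' "$cfg"   # remove_cluster_config

wc -l < "$cfg"   # prints 0: both blocks were removed
```

With the old configuration gone, the script can safely write either a fresh cluster block or the single-node line.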
@@ -1,77 +0,0 @@
-#!/bin/bash
-# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
-
-set -e
-
-# Check the number of arguments passed to configure_s3.sh. If it is neither 4 nor 5, exit with an error.
-# param 1: number of arguments passed to configure_s3.sh
-function CheckArgs()
-{
-    if [ $1 != 4 ] && [ $1 != 5 ]; then
-        echo "Use: configure_s3.sh <Elastic_Server_IP:Port> <Bucket> <Path> <RepositoryName> (by default <current_elasticsearch_major_version> is added to the path and the repository name)"
-        echo "or use: configure_s3.sh <Elastic_Server_IP:Port> <Bucket> <Path> <RepositoryName> <Elasticsearch major version>"
-        exit 1
-    fi
-}
-
-# Create an S3 repository with base_path <Path>/<elasticsearch_major_version>; if there is no
-# <Elasticsearch major version> argument, the current version is used.
-# The repository name will be <RepositoryName>-<elasticsearch_major_version>.
-# param 1: number of arguments passed to configure_s3.sh
-# param 2: <Elastic_Server_IP:Port>
-# param 3: <Bucket>
-# param 4: <Path>
-# param 5: <RepositoryName>
-# param 6: optional <Elasticsearch major version>
-# output: shows "acknowledged" if the repository has been successfully created
-function CreateRepo()
-{
-    elastic_ip_port="$2"
-    bucket_name="$3"
-    path="$4"
-    repository_name="$5"
-
-    if [ $1 == 5 ]; then
-        version="$6"
-    else
-        version=`curl -s $elastic_ip_port | grep number | cut -d"\"" -f4 | cut -c1`
-    fi
-
-    if ! [[ "$version" =~ ^[0-9]+$ ]]; then
-        echo "Elasticsearch major version must be an integer"
-        exit 1
-    fi
-
-    repository="$repository_name-$version"
-    s3_path="$path/$version"
-
-    curl -X PUT "$elastic_ip_port/_snapshot/$repository" -H 'Content-Type: application/json' -d'
-    {
-      "type": "s3",
-      "settings": {
-        "bucket": "'$bucket_name'",
-        "base_path": "'$s3_path'"
-      }
-    }
-    '
-}
-
-# Run CheckArgs and CreateRepo
-# param 1: number of arguments passed to configure_s3.sh
-# params 2-6: as described for CreateRepo
-function Main()
-{
-    CheckArgs $1
-    CreateRepo $1 $2 $3 $4 $5 $6
-}
-
-Main $# $1 $2 $3 $4 $5
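The default branch of CreateRepo scrapes the major version out of Elasticsearch's root endpoint; a small reproduction of that pipeline against a canned response line (no cluster is contacted here; the line mimics Elasticsearch's pretty-printed JSON body):

```shell
# The Elasticsearch root endpoint returns a body containing a line such as:
#     "number" : "7.3.0",
# The script pipes it through grep/cut to keep just the major version digit.
response='    "number" : "7.3.0",'
version=$(echo "$response" | grep number | cut -d"\"" -f4 | cut -c1)
echo "$version"   # prints 7
```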
@@ -1,52 +0,0 @@
-#!/bin/bash
-# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
-
-# For more information see https://github.com/elastic/elasticsearch-docker/blob/6.8.0/build/elasticsearch/bin/docker-entrypoint.sh
-
-set -e
-
-# Files created by Elasticsearch should always be group writable too
-umask 0002
-
-run_as_other_user_if_needed() {
-    if [[ "$(id -u)" == "0" ]]; then
-        # If running as root, drop to the specified UID and run the command
-        exec chroot --userspec=1000 / "${@}"
-    else
-        # Either we are running in OpenShift with a random uid and are a member
-        # of the root group, or with a custom --user
-        exec "${@}"
-    fi
-}
-
-# Disabling xpack features
-elasticsearch_config_file="/usr/share/elasticsearch/config/elasticsearch.yml"
-if grep -Fq "#xpack features" "$elasticsearch_config_file"; then
-    declare -A CONFIG_MAP=(
-        [xpack.ml.enabled]=$XPACK_ML
-    )
-    for i in "${!CONFIG_MAP[@]}"
-    do
-        if [ "${CONFIG_MAP[$i]}" != "" ]; then
-            sed -i 's/.'"$i"'.*/'"$i"': '"${CONFIG_MAP[$i]}"'/' $elasticsearch_config_file
-        fi
-    done
-else
-    echo "
-#xpack features
-xpack.ml.enabled: $XPACK_ML
-" >> $elasticsearch_config_file
-fi
-
-# Run the cluster and load-settings scripts
-./config_cluster.sh
-./load_settings.sh &
-
-# Execute Elasticsearch
-run_as_other_user_if_needed /usr/share/elasticsearch/bin/elasticsearch
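The CONFIG_MAP loop above relies on the leading `.` in the sed pattern to also match the `#` of a commented-out default line; a self-contained sketch on a scratch file (assumes bash, as the original script does):

```shell
# Scratch elasticsearch.yml with a commented-out default. The sed pattern
# ".xpack.ml.enabled.*" matches the "#" before the setting, so the whole
# line is replaced by an uncommented one carrying the requested value.
cfg=$(mktemp)
printf '#xpack features\n#xpack.ml.enabled: true\n' > "$cfg"

XPACK_ML="false"
declare -A CONFIG_MAP=( [xpack.ml.enabled]=$XPACK_ML )
for i in "${!CONFIG_MAP[@]}"; do
  if [ "${CONFIG_MAP[$i]}" != "" ]; then
    sed -i 's/.'"$i"'.*/'"$i"': '"${CONFIG_MAP[$i]}"'/' "$cfg"
  fi
done

grep '^xpack' "$cfg"   # prints: xpack.ml.enabled: false
```

The `#xpack features` marker line is untouched, so on later runs the script takes the sed branch instead of appending a duplicate block.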
@@ -1,103 +0,0 @@
-#!/bin/bash
-# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
-
-set -e
-
-el_url=${ELASTICSEARCH_URL}
-
-if [ "x${WAZUH_API_URL}" = "x" ]; then
-    wazuh_url="https://wazuh"
-else
-    wazuh_url="${WAZUH_API_URL}"
-fi
-
-if [[ ${ENABLED_XPACK} != "true" || "x${ELASTICSEARCH_USERNAME}" = "x" || "x${ELASTICSEARCH_PASSWORD}" = "x" ]]; then
-    auth=""
-else
-    auth="--user ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD}"
-fi
-
-until curl ${auth} -XGET $el_url; do
-    >&2 echo "Elastic is unavailable - sleeping"
-    sleep 5
-done
-
->&2 echo "Elastic is up - executing command"
-
-if [ "$ENABLE_CONFIGURE_S3" = "true" ]; then
-    # Wait for Elasticsearch to be ready to create the repository
-    sleep 10
-    IP_PORT="${ELASTICSEARCH_IP}:${ELASTICSEARCH_PORT}"
-
-    if [ "x$S3_PATH" != "x" ]; then
-        if [ "x$S3_ELASTIC_MAJOR" != "x" ]; then
-            ./config/configure_s3.sh $IP_PORT $S3_BUCKET_NAME $S3_PATH $S3_REPOSITORY_NAME $S3_ELASTIC_MAJOR
-        else
-            ./config/configure_s3.sh $IP_PORT $S3_BUCKET_NAME $S3_PATH $S3_REPOSITORY_NAME
-        fi
-    fi
-fi
-
-# Insert the default templates
-API_PASS_Q=`echo "$API_PASS" | tr -d '"'`
-API_USER_Q=`echo "$API_USER" | tr -d '"'`
-API_PASSWORD=`echo -n $API_PASS_Q | base64`
-
-echo "Setting API credentials into Wazuh APP"
-CONFIG_CODE=$(curl -s -o /dev/null -w "%{http_code}" -XGET $el_url/.wazuh/_doc/1513629884013 ${auth})
-if [ "x$CONFIG_CODE" != "x200" ]; then
-    curl -s -XPOST $el_url/.wazuh/_doc/1513629884013 ${auth} -H 'Content-Type: application/json' -d'
-    {
-      "api_user": "'"$API_USER_Q"'",
-      "api_password": "'"$API_PASSWORD"'",
-      "url": "'"$wazuh_url"'",
-      "api_port": "55000",
-      "insecure": "true",
-      "component": "API",
-      "cluster_info": {
-        "manager": "wazuh-manager",
-        "cluster": "Disabled",
-        "status": "disabled"
-      },
-      "extensions": {
-        "oscap": true,
-        "audit": true,
-        "pci": true,
-        "aws": true,
-        "virustotal": true,
-        "gdpr": true,
-        "ciscat": true
-      }
-    }
-    ' > /dev/null
-else
-    echo "Wazuh APP already configured"
-fi
-sleep 5
-
-curl -XPUT "$el_url/_cluster/settings" ${auth} -H 'Content-Type: application/json' -d'
-{
-  "persistent": {
-    "xpack.monitoring.collection.enabled": true
-  }
-}
-'
-
-# Set the cluster delayed timeout for when a node goes down
-curl -X PUT "$el_url/_all/_settings" -H 'Content-Type: application/json' -d'
-{
-  "settings": {
-    "index.unassigned.node_left.delayed_timeout": "'"$CLUSTER_DELAYED_TIMEOUT"'"
-  }
-}
-'
-
-echo "Elasticsearch is ready."
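The credential block above strips surrounding quotes from API_PASS and base64-encodes it before posting it to the `.wazuh` index; the transformation in isolation (printf is used here in place of the script's `echo -n`, with the same effect):

```shell
# Reproduces the API_PASS handling: strip surrounding quotes, then base64-encode.
API_PASS='"bar"'
API_PASS_Q=$(echo "$API_PASS" | tr -d '"')
API_PASSWORD=$(printf '%s' "$API_PASS_Q" | base64)
echo "$API_PASS_Q $API_PASSWORD"   # prints: bar YmFy
```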
BIN images/image-1.png (new file, 81 KiB; binary file not shown)
BIN images/image-2.png (new file, 86 KiB; binary file not shown)
@@ -1,78 +1,7 @@
-# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
-FROM docker.elastic.co/kibana/kibana:7.3.0
+FROM kibana:5.4.2
 
-ARG ELASTIC_VERSION=7.3.0
-ARG WAZUH_VERSION=3.9.5
-ARG WAZUH_APP_VERSION="${WAZUH_VERSION}_${ELASTIC_VERSION}"
-
-USER root
+RUN apt-get update && apt-get install -y curl
 
-ADD https://packages.wazuh.com/wazuhapp/wazuhapp-${WAZUH_APP_VERSION}.zip /tmp
+COPY ./config/kibana.yml /opt/kibana/config/kibana.yml
 
-RUN /usr/share/kibana/bin/kibana-plugin install --allow-root file:///tmp/wazuhapp-${WAZUH_APP_VERSION}.zip
-RUN rm -rf /tmp/wazuhapp-${WAZUH_APP_VERSION}.zip
+COPY config/wait-for-it.sh /
 
-COPY config/entrypoint.sh ./entrypoint.sh
-RUN chmod 755 ./entrypoint.sh
-
-USER kibana
-
-ENV PATTERN="" \
-    CHECKS_PATTERN="" \
-    CHECKS_TEMPLATE="" \
-    CHECKS_API="" \
-    CHECKS_SETUP="" \
-    EXTENSIONS_PCI="" \
-    EXTENSIONS_GDPR="" \
-    EXTENSIONS_AUDIT="" \
-    EXTENSIONS_OSCAP="" \
-    EXTENSIONS_CISCAT="" \
-    EXTENSIONS_AWS="" \
-    EXTENSIONS_VIRUSTOTAL="" \
-    EXTENSIONS_OSQUERY="" \
-    APP_TIMEOUT="" \
-    WAZUH_SHARDS="" \
-    WAZUH_REPLICAS="" \
-    WAZUH_VERSION_SHARDS="" \
-    WAZUH_VERSION_REPLICAS="" \
-    IP_SELECTOR="" \
-    IP_IGNORE="" \
-    XPACK_RBAC_ENABLED="" \
-    WAZUH_MONITORING_ENABLED="" \
-    WAZUH_MONITORING_FREQUENCY="" \
-    WAZUH_MONITORING_SHARDS="" \
-    WAZUH_MONITORING_REPLICAS="" \
-    ADMIN_PRIVILEGES=""
-
-ARG XPACK_CANVAS="true"
-ARG XPACK_LOGS="true"
-ARG XPACK_INFRA="true"
-ARG XPACK_ML="true"
-ARG XPACK_DEVTOOLS="true"
-ARG XPACK_MONITORING="true"
-ARG XPACK_APM="true"
-
-ARG CHANGE_WELCOME="false"
-
-COPY --chown=kibana:kibana ./config/wazuh_app_config.sh ./
-RUN chmod +x ./wazuh_app_config.sh
-
-COPY --chown=kibana:kibana ./config/kibana_settings.sh ./
-RUN chmod +x ./kibana_settings.sh
-
-COPY --chown=kibana:kibana ./config/xpack_config.sh ./
-RUN chmod +x ./xpack_config.sh
-RUN ./xpack_config.sh
-
-COPY --chown=kibana:kibana ./config/welcome_wazuh.sh ./
-RUN chmod +x ./welcome_wazuh.sh
-RUN ./welcome_wazuh.sh
-
-RUN /usr/local/bin/kibana-docker --optimize
-
-ENTRYPOINT ./entrypoint.sh
@@ -1,57 +0,0 @@
#!/bin/bash
# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)

set -e

##############################################################################
# Waiting for elasticsearch
##############################################################################

if [ "x${ELASTICSEARCH_URL}" = "x" ]; then
  el_url="http://elasticsearch:9200"
else
  el_url="${ELASTICSEARCH_URL}"
fi

if [[ ${ENABLED_XPACK} != "true" || "x${ELASTICSEARCH_USERNAME}" = "x" || "x${ELASTICSEARCH_PASSWORD}" = "x" ]]; then
  auth=""
else
  auth="--user ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD}"
fi

until curl -XGET $el_url ${auth}; do
  >&2 echo "Elastic is unavailable - sleeping"
  sleep 5
done

sleep 2

>&2 echo "Elasticsearch is up."

##############################################################################
# Waiting for wazuh alerts template
##############################################################################

strlen=0

while [[ $strlen -eq 0 ]]
do
  template=$(curl $el_url/_cat/templates/wazuh -s)
  strlen=${#template}
  >&2 echo "Wazuh alerts template not loaded - sleeping."
  sleep 2
done

sleep 2

>&2 echo "Wazuh alerts template is loaded."

./wazuh_app_config.sh

sleep 5

./kibana_settings.sh &

/usr/local/bin/kibana-docker
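The entrypoint above gates startup on two curl probes: it loops until Elasticsearch answers at all, then until the `wazuh` template shows up in `_cat/templates`. The second check relies on `_cat/templates/wazuh` returning an empty body while the template is missing, so the string length of the response doubles as the readiness flag. A minimal sketch of that length-as-flag loop, runnable without a cluster (`fake_cat_templates` is a hypothetical stand-in for the `curl $el_url/_cat/templates/wazuh -s` call):

```shell
#!/bin/bash
# Sketch of the template-readiness check from entrypoint.sh.
# fake_cat_templates is a hypothetical stand-in for the curl probe: it sets
# $template to nothing on the first call and to a template row afterwards.
calls=0
template=""

fake_cat_templates() {
  calls=$((calls + 1))
  template=""
  if [ "$calls" -ge 2 ]; then
    template="wazuh [wazuh*] 0"
  fi
}

wait_for_template() {
  local strlen=0
  while [ "$strlen" -eq 0 ]; do
    fake_cat_templates
    strlen=${#template}   # empty body => length 0 => keep waiting
  done
}

wait_for_template
echo "template loaded after $calls probes"   # → template loaded after 2 probes
```

The real script adds a `sleep` between probes; the sketch drops it so the loop terminates immediately.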
92
kibana/config/kibana.yml
Normal file
@@ -0,0 +1,92 @@
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# This setting specifies the IP address of the back end server.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This setting
# cannot end in a slash.
# server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
# server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
# server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://elasticsearch:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
# elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
# kibana.index: ".kibana"

# The default application to load.
# kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
# elasticsearch.username: "user"
# elasticsearch.password: "pass"

# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
# files enable SSL for outgoing requests from the Kibana server to the browser.
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
# elasticsearch.ssl.ca: /path/to/your/CA.pem

# To disregard the validity of SSL certificates, change this setting's value to false.
# elasticsearch.ssl.verify: true

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
# elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
# elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
# elasticsearch.requestHeadersWhitelist: [ authorization ]

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
# elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
# elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
# pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
# logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
# logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
# logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
# logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 10000.
# ops.interval: 10000
@@ -1,79 +0,0 @@
#!/bin/bash
# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)

WAZUH_MAJOR=3

##############################################################################
# Wait for the Kibana API to start. It is necessary to do it in this container
# because the others are running Elastic Stack and we can not interrupt them.
#
# The following actions are performed:
#
# Add the wazuh alerts index as default.
# Set the Discover time interval to 24 hours instead of 15 minutes.
# Do not ask the user to help providing usage statistics to Elastic.
##############################################################################

##############################################################################
# Customize the elasticsearch ip
##############################################################################
if [ "$ELASTICSEARCH_KIBANA_IP" != "" ]; then
  sed -i "s:#elasticsearch.hosts:elasticsearch.hosts:g" /usr/share/kibana/config/kibana.yml
  sed -i 's|http://elasticsearch:9200|'$ELASTICSEARCH_KIBANA_IP'|g' /usr/share/kibana/config/kibana.yml
fi

# If KIBANA_INDEX was set, change the default index in the kibana.yml configuration file. If there was an index, delete it and recreate it.
if [ "$KIBANA_INDEX" != "" ]; then
  if grep -q 'kibana.index' /usr/share/kibana/config/kibana.yml; then
    sed -i '/kibana.index/d' /usr/share/kibana/config/kibana.yml
  fi
  echo "kibana.index: $KIBANA_INDEX" >> /usr/share/kibana/config/kibana.yml
fi

# If XPACK_SECURITY_ENABLED was set, change the xpack.security.enabled option from true (default) to false.
if [ "$XPACK_SECURITY_ENABLED" != "" ]; then
  if grep -q 'xpack.security.enabled' /usr/share/kibana/config/kibana.yml; then
    sed -i '/xpack.security.enabled/d' /usr/share/kibana/config/kibana.yml
  fi
  echo "xpack.security.enabled: $XPACK_SECURITY_ENABLED" >> /usr/share/kibana/config/kibana.yml
fi

if [ "$KIBANA_IP" != "" ]; then
  kibana_ip="$KIBANA_IP"
else
  kibana_ip="kibana"
fi

while [[ "$(curl -XGET -I -s -o /dev/null -w ''%{http_code}'' $kibana_ip:5601/status)" != "200" ]]; do
  echo "Waiting for Kibana API. Sleeping 5 seconds"
  sleep 5
done

# Prepare index selection.
echo "Kibana API is running"

default_index="/tmp/default_index.json"

cat > ${default_index} << EOF
{
  "changes": {
    "defaultIndex": "wazuh-alerts-${WAZUH_MAJOR}.x-*"
  }
}
EOF

sleep 5
# Add the wazuh alerts index as default.
curl -POST "http://$kibana_ip:5601/api/kibana/settings" -H "Content-Type: application/json" -H "kbn-xsrf: true" -d@${default_index}
rm -f ${default_index}

sleep 5
# Configure the Kibana TimePicker.
curl -POST "http://$kibana_ip:5601/api/kibana/settings" -H "Content-Type: application/json" -H "kbn-xsrf: true" -d \
'{"changes":{"timepicker:timeDefaults":"{\n \"from\": \"now-24h\",\n \"to\": \"now\",\n \"mode\": \"quick\"}"}}'

sleep 5
# Do not ask the user to help providing usage statistics to Elastic.
curl -POST "http://$kibana_ip:5601/api/telemetry/v2/optIn" -H "Content-Type: application/json" -H "kbn-xsrf: true" -d '{"enabled":false}'

echo "End settings"
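The defaultIndex payload above is built with an unquoted heredoc, so `${WAZUH_MAJOR}` is expanded into the JSON before it is POSTed to the Kibana settings API. A minimal sketch of just that payload-building step, with no Kibana server involved (the temp file and `grep` check are illustration only):

```shell
#!/bin/bash
# Sketch: build the defaultIndex payload the way kibana_settings.sh does,
# without POSTing it anywhere. WAZUH_MAJOR is substituted inside the heredoc
# because the EOF delimiter is unquoted.
WAZUH_MAJOR=3
default_index="$(mktemp)"

cat > "${default_index}" << EOF
{
  "changes": {
    "defaultIndex": "wazuh-alerts-${WAZUH_MAJOR}.x-*"
  }
}
EOF

# Pull the substituted index pattern back out of the generated JSON.
pattern=$(grep -o 'wazuh-alerts-[^"]*' "${default_index}")
echo "$pattern"   # → wazuh-alerts-3.x-*
rm -f "${default_index}"
```

Quoting the delimiter (`<< 'EOF'`) would suppress the substitution and ship the literal `${WAZUH_MAJOR}` string, which is why the script leaves it unquoted.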
25
kibana/config/wait-for-it.sh
Normal file
@@ -0,0 +1,25 @@
#!/bin/bash

set -e

host="$1"
shift
cmd="kibana"
WAZUH_KIBANA_PLUGIN_URL=${WAZUH_KIBANA_PLUGIN_URL:-https://packages.wazuh.com/wazuhapp/wazuhapp-2.0_5.4.2.zip}

until curl -XGET $host:9200; do
  >&2 echo "Elastic is unavailable - sleeping"
  sleep 1
done

sleep 30

>&2 echo "Elastic is up - executing command"

if /usr/share/kibana/bin/kibana-plugin list | grep wazuh; then
  echo "Wazuh APP already installed"
else
  /usr/share/kibana/bin/kibana-plugin install ${WAZUH_KIBANA_PLUGIN_URL}
fi

exec $cmd
@@ -1,40 +0,0 @@
#!/bin/bash
# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)

kibana_config_file="/usr/share/kibana/plugins/wazuh/config.yml"

declare -A CONFIG_MAP=(
  [pattern]=$PATTERN
  [checks.pattern]=$CHECKS_PATTERN
  [checks.template]=$CHECKS_TEMPLATE
  [checks.api]=$CHECKS_API
  [checks.setup]=$CHECKS_SETUP
  [extensions.pci]=$EXTENSIONS_PCI
  [extensions.gdpr]=$EXTENSIONS_GDPR
  [extensions.audit]=$EXTENSIONS_AUDIT
  [extensions.oscap]=$EXTENSIONS_OSCAP
  [extensions.ciscat]=$EXTENSIONS_CISCAT
  [extensions.aws]=$EXTENSIONS_AWS
  [extensions.virustotal]=$EXTENSIONS_VIRUSTOTAL
  [extensions.osquery]=$EXTENSIONS_OSQUERY
  [timeout]=$APP_TIMEOUT
  [wazuh.shards]=$WAZUH_SHARDS
  [wazuh.replicas]=$WAZUH_REPLICAS
  [wazuh-version.shards]=$WAZUH_VERSION_SHARDS
  [wazuh-version.replicas]=$WAZUH_VERSION_REPLICAS
  [ip.selector]=$IP_SELECTOR
  [ip.ignore]=$IP_IGNORE
  [xpack.rbac.enabled]=$XPACK_RBAC_ENABLED
  [wazuh.monitoring.enabled]=$WAZUH_MONITORING_ENABLED
  [wazuh.monitoring.frequency]=$WAZUH_MONITORING_FREQUENCY
  [wazuh.monitoring.shards]=$WAZUH_MONITORING_SHARDS
  [wazuh.monitoring.replicas]=$WAZUH_MONITORING_REPLICAS
  [admin]=$ADMIN_PRIVILEGES
)

for i in "${!CONFIG_MAP[@]}"
do
  if [ "${CONFIG_MAP[$i]}" != "" ]; then
    sed -i 's/.*#'"$i"'.*/'"$i"': '"${CONFIG_MAP[$i]}"'/' $kibana_config_file
  fi
done
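The CONFIG_MAP loop above rewrites commented `#key` lines in the plugin's config.yml into active `key: value` lines with `sed`, and only for environment variables that were actually set. A small self-contained sketch of that rewrite against a throwaway file (the sample keys and values are illustration only, not the real plugin defaults):

```shell
#!/bin/bash
# Sketch of the CONFIG_MAP sed rewrite from wazuh_app_config.sh, run against
# a temp file instead of the real /usr/share/kibana/plugins/wazuh/config.yml.
cfg="$(mktemp)"
cat > "$cfg" << 'EOF'
#pattern: wazuh-alerts-3.x-*
#timeout: 8000
EOF

declare -A CONFIG_MAP=(
  [pattern]="wazuh-alerts-*"
  [timeout]=""              # empty => treated as unset, line stays commented
)

for i in "${!CONFIG_MAP[@]}"; do
  if [ "${CONFIG_MAP[$i]}" != "" ]; then
    # Replace the whole commented line with "key: value".
    sed -i 's/.*#'"$i"'.*/'"$i"': '"${CONFIG_MAP[$i]}"'/' "$cfg"
  fi
done

result=$(cat "$cfg")
echo "$result"
rm -f "$cfg"
```

Note the trade-off in `'s/.*#'"$i"'.*/…/'`: it matches any line containing `#key`, which is simple but would also rewrite an unrelated line that merely mentions the key in a comment.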
@@ -1,24 +0,0 @@
#!/bin/bash
# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)

if [[ $CHANGE_WELCOME == "true" ]]
then
  rm -rf ./optimize/bundles

  kibana_path="/usr/share/kibana"
  # Set the Wazuh app as the default landing page
  echo "Set Wazuh app as the default landing page"
  echo "server.defaultRoute: /app/wazuh" >> /usr/share/kibana/config/kibana.yml

  # Redirect the Kibana welcome screen to Discover
  echo "Redirect Kibana welcome screen to Discover"
  sed -i "s:'/app/kibana#/home':'/app/wazuh':g" $kibana_path/src/ui/public/chrome/directives/global_nav/global_nav.html
  sed -i "s:'/app/kibana#/home':'/app/wazuh':g" $kibana_path/src/ui/public/chrome/directives/header_global_nav/header_global_nav.js

  # Hide undesired links
  echo "Hide undesired links"
  sed -i 's#visible: true#visible: false#g' $kibana_path/node_modules/x-pack/plugins/rollup/public/crud_app/index.js
  sed -i 's#visible: true#visible: false#g' $kibana_path/node_modules/x-pack/plugins/license_management/public/management_section.js
fi
@@ -1,35 +0,0 @@
#!/bin/bash
# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)

kibana_config_file="/usr/share/kibana/config/kibana.yml"
if grep -Fq "#xpack features" "$kibana_config_file";
then
  declare -A CONFIG_MAP=(
    [xpack.apm.ui.enabled]=$XPACK_APM
    [xpack.grokdebugger.enabled]=$XPACK_DEVTOOLS
    [xpack.searchprofiler.enabled]=$XPACK_DEVTOOLS
    [xpack.ml.enabled]=$XPACK_ML
    [xpack.canvas.enabled]=$XPACK_CANVAS
    [xpack.infra.enabled]=$XPACK_INFRA
    [xpack.monitoring.enabled]=$XPACK_MONITORING
    [console.enabled]=$XPACK_DEVTOOLS
  )
  for i in "${!CONFIG_MAP[@]}"
  do
    if [ "${CONFIG_MAP[$i]}" != "" ]; then
      sed -i 's/.'"$i"'.*/'"$i"': '"${CONFIG_MAP[$i]}"'/' $kibana_config_file
    fi
  done
else
  echo "
#xpack features
xpack.apm.ui.enabled: $XPACK_APM
xpack.grokdebugger.enabled: $XPACK_DEVTOOLS
xpack.searchprofiler.enabled: $XPACK_DEVTOOLS
xpack.ml.enabled: $XPACK_ML
xpack.canvas.enabled: $XPACK_CANVAS
xpack.infra.enabled: $XPACK_INFRA
xpack.monitoring.enabled: $XPACK_MONITORING
console.enabled: $XPACK_DEVTOOLS
" >> $kibana_config_file
fi
12
logstash/Dockerfile
Normal file
@@ -0,0 +1,12 @@
FROM logstash:5.4.2

RUN apt-get update

COPY config/logstash.conf /etc/logstash/conf.d/logstash.conf
COPY config/wazuh-elastic5-template.json /etc/logstash/wazuh-elastic5-template.json

ADD config/run.sh /tmp/run.sh
RUN chmod 755 /tmp/run.sh

ENTRYPOINT ["/tmp/run.sh"]
43
logstash/config/logstash.conf
Normal file
@@ -0,0 +1,43 @@
# Wazuh - Logstash configuration file
## Remote Wazuh Manager - Filebeat input
input {
  beats {
    port => 5000
    codec => "json_lines"
    # ssl => true
    # ssl_certificate => "/etc/logstash/logstash.crt"
    # ssl_key => "/etc/logstash/logstash.key"
  }
}
## Local Wazuh Manager - JSON file input
#input {
#  file {
#    type => "wazuh-alerts"
#    path => "/var/ossec/logs/alerts/alerts.json"
#    codec => "json"
#  }
#}
filter {
  geoip {
    source => "srcip"
    target => "GeoLocation"
    fields => ["city_name", "continent_code", "country_code2", "country_name", "region_name", "location"]
  }
  date {
    match => ["timestamp", "ISO8601"]
    target => "@timestamp"
  }
  mutate {
    remove_field => [ "timestamp", "beat", "fields", "input_type", "tags", "count", "@version", "log", "offset", "type"]
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "wazuh-alerts-%{+YYYY.MM.dd}"
    document_type => "wazuh"
    template => "/etc/logstash/wazuh-elastic5-template.json"
    template_name => "wazuh"
    template_overwrite => true
  }
}
31
logstash/config/run.sh
Normal file
@@ -0,0 +1,31 @@
#!/bin/bash

#
# OSSEC container bootstrap. See the README for information on the environment
# variables expected by this script.
#

#
# Apply Templates
#

set -e
host="elasticsearch"
until curl -XGET $host:9200; do
  >&2 echo "Elastic is unavailable - sleeping"
  sleep 1
done

# Add logstash as command if needed
if [ "${1:0:1}" = '-' ]; then
  set -- logstash "$@"
fi

# Run as user "logstash" if the command is "logstash"
if [ "$1" = 'logstash' ]; then
  set -- gosu logstash "$@"
fi

exec "$@"
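The tail of run.sh uses the standard Docker entrypoint idiom: if the first argument looks like a flag, assume the user meant the `logstash` binary and prepend it; if the command is `logstash`, wrap it in `gosu` so the service drops root. A sketch of just that argv-rewriting logic, with the final `exec` replaced by an `echo` so it runs anywhere without gosu installed:

```shell
#!/bin/bash
# Sketch of the argv rewriting in logstash/config/run.sh. build_cmd is a
# hypothetical helper that prints the command run.sh would exec.
build_cmd() {
  if [ "${1:0:1}" = '-' ]; then      # bare flags => assume the logstash binary
    set -- logstash "$@"
  fi
  if [ "$1" = 'logstash' ]; then     # drop root for the logstash service
    set -- gosu logstash "$@"
  fi
  echo "$@"
}

build_cmd -e 'input {}'   # → gosu logstash logstash -e input {}
build_cmd bash -c 'id'    # → bash -c id
```

Any other command (e.g. `bash`) passes through untouched, which keeps `docker run … image bash` usable for debugging.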
620
logstash/config/wazuh-elastic5-template.json
Normal file
@@ -0,0 +1,620 @@
{
  "order": 0,
  "template": "wazuh*",
  "settings": {
    "index.refresh_interval": "5s"
  },
  "mappings": {
    "wazuh": {
      "dynamic_templates": [
        {
          "string_as_keyword": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword",
              "doc_values": "true"
            }
          }
        }
      ],
      "properties": {
        "@timestamp": { "type": "date", "format": "dateOptionalTime" },
        "@version": { "type": "text" },
        "agent": {
          "properties": {
            "ip": { "type": "keyword", "doc_values": "true" },
            "id": { "type": "keyword", "doc_values": "true" },
            "name": { "type": "keyword", "doc_values": "true" }
          }
        },
        "manager": {
          "properties": {
            "name": { "type": "keyword", "doc_values": "true" }
          }
        },
        "dstuser": { "type": "keyword", "doc_values": "true" },
        "AlertsFile": { "type": "keyword", "doc_values": "true" },
        "full_log": { "type": "text" },
        "previous_log": { "type": "text" },
        "GeoLocation": {
          "properties": {
            "area_code": { "type": "long" },
            "city_name": { "type": "keyword", "doc_values": "true" },
            "continent_code": { "type": "text" },
            "coordinates": { "type": "double" },
            "country_code2": { "type": "text" },
            "country_code3": { "type": "text" },
            "country_name": { "type": "keyword", "doc_values": "true" },
            "dma_code": { "type": "long" },
            "ip": { "type": "keyword", "doc_values": "true" },
            "latitude": { "type": "double" },
            "location": { "type": "geo_point" },
            "longitude": { "type": "double" },
            "postal_code": { "type": "keyword" },
            "real_region_name": { "type": "keyword", "doc_values": "true" },
            "region_name": { "type": "keyword", "doc_values": "true" },
            "timezone": { "type": "text" }
          }
        },
        "host": { "type": "keyword", "doc_values": "true" },
        "syscheck": {
          "properties": {
            "path": { "type": "keyword", "doc_values": "true" },
            "sha1_before": { "type": "keyword", "doc_values": "true" },
            "sha1_after": { "type": "keyword", "doc_values": "true" },
            "uid_before": { "type": "keyword", "doc_values": "true" },
            "uid_after": { "type": "keyword", "doc_values": "true" },
            "gid_before": { "type": "keyword", "doc_values": "true" },
            "gid_after": { "type": "keyword", "doc_values": "true" },
            "perm_before": { "type": "keyword", "doc_values": "true" },
            "perm_after": { "type": "keyword", "doc_values": "true" },
            "md5_after": { "type": "keyword", "doc_values": "true" },
            "md5_before": { "type": "keyword", "doc_values": "true" },
            "gname_after": { "type": "keyword", "doc_values": "true" },
            "gname_before": { "type": "keyword", "doc_values": "true" },
            "inode_after": { "type": "keyword", "doc_values": "true" },
            "inode_before": { "type": "keyword", "doc_values": "true" },
            "mtime_after": { "type": "date", "format": "dateOptionalTime", "doc_values": "true" },
            "mtime_before": { "type": "date", "format": "dateOptionalTime", "doc_values": "true" },
            "uname_after": { "type": "keyword", "doc_values": "true" },
            "uname_before": { "type": "keyword", "doc_values": "true" },
            "size_before": { "type": "long", "doc_values": "true" },
            "size_after": { "type": "long", "doc_values": "true" },
            "diff": { "type": "keyword", "doc_values": "true" },
            "event": { "type": "keyword", "doc_values": "true" }
          }
        },
        "location": { "type": "keyword", "doc_values": "true" },
        "message": { "type": "text" },
        "offset": { "type": "keyword" },
        "rule": {
          "properties": {
            "description": { "type": "keyword", "doc_values": "true" },
            "groups": { "type": "keyword", "doc_values": "true" },
            "level": { "type": "long", "doc_values": "true" },
            "id": { "type": "keyword", "doc_values": "true" },
            "cve": { "type": "keyword", "doc_values": "true" },
            "info": { "type": "keyword", "doc_values": "true" },
            "frequency": { "type": "long", "doc_values": "true" },
            "firedtimes": { "type": "long", "doc_values": "true" },
            "cis": { "type": "keyword", "doc_values": "true" },
            "pci_dss": { "type": "keyword", "doc_values": "true" }
          }
        },
        "decoder": {
          "properties": {
            "parent": { "type": "keyword", "doc_values": "true" },
            "name": { "type": "keyword", "doc_values": "true" },
            "ftscomment": { "type": "keyword", "doc_values": "true" },
            "fts": { "type": "long", "doc_values": "true" },
            "accumulate": { "type": "long", "doc_values": "true" }
          }
        },
        "srcip": { "type": "keyword", "doc_values": "true" },
        "protocol": { "type": "keyword", "doc_values": "true" },
        "action": { "type": "keyword", "doc_values": "true" },
        "dstip": { "type": "keyword", "doc_values": "true" },
        "dstport": { "type": "keyword", "doc_values": "true" },
        "srcuser": { "type": "keyword", "doc_values": "true" },
        "program_name": { "type": "keyword", "doc_values": "true" },
        "id": { "type": "keyword", "doc_values": "true" },
        "status": { "type": "keyword", "doc_values": "true" },
        "command": { "type": "keyword", "doc_values": "true" },
        "url": { "type": "keyword", "doc_values": "true" },
        "data": { "type": "keyword", "doc_values": "true" },
        "system_name": { "type": "keyword", "doc_values": "true" },
        "type": { "type": "text" },
        "title": { "type": "keyword", "doc_values": "true" },
        "oscap": {
          "properties": {
            "check.title": { "type": "keyword", "doc_values": "true" },
            "check.id": { "type": "keyword", "doc_values": "true" },
            "check.result": { "type": "keyword", "doc_values": "true" },
            "check.severity": { "type": "keyword", "doc_values": "true" },
            "check.description": { "type": "text" },
            "check.rationale": { "type": "text" },
            "check.references": { "type": "text" },
            "check.identifiers": { "type": "text" },
            "check.oval.id": { "type": "keyword", "doc_values": "true" },
            "scan.id": { "type": "keyword", "doc_values": "true" },
            "scan.content": { "type": "keyword", "doc_values": "true" },
            "scan.benchmark.id": { "type": "keyword", "doc_values": "true" },
            "scan.profile.title": { "type": "keyword", "doc_values": "true" },
            "scan.profile.id": { "type": "keyword", "doc_values": "true" },
            "scan.score": { "type": "double", "doc_values": "true" },
            "scan.return_code": { "type": "long", "doc_values": "true" }
          }
        },
        "audit": {
          "properties": {
            "type": { "type": "keyword", "doc_values": "true" },
            "id": { "type": "keyword", "doc_values": "true" },
            "syscall": { "type": "keyword", "doc_values": "true" },
            "exit": { "type": "keyword", "doc_values": "true" },
            "ppid": { "type": "keyword", "doc_values": "true" },
            "pid": { "type": "keyword", "doc_values": "true" },
            "auid": { "type": "keyword", "doc_values": "true" },
            "uid": { "type": "keyword", "doc_values": "true" },
            "gid": { "type": "keyword", "doc_values": "true" },
            "euid": { "type": "keyword", "doc_values": "true" },
            "suid": { "type": "keyword", "doc_values": "true" },
            "fsuid": { "type": "keyword", "doc_values": "true" },
            "egid": { "type": "keyword", "doc_values": "true" },
            "sgid": { "type": "keyword", "doc_values": "true" },
            "fsgid": { "type": "keyword", "doc_values": "true" },
            "tty": { "type": "keyword", "doc_values": "true" },
            "session": { "type": "keyword", "doc_values": "true" },
            "command": { "type": "keyword", "doc_values": "true" },
            "exe": { "type": "keyword", "doc_values": "true" },
            "key": { "type": "keyword", "doc_values": "true" },
            "cwd": { "type": "keyword", "doc_values": "true" },
            "directory.name": { "type": "keyword", "doc_values": "true" },
            "directory.inode": { "type": "keyword", "doc_values": "true" },
            "directory.mode": { "type": "keyword", "doc_values": "true" },
            "file.name": { "type": "keyword", "doc_values": "true" },
            "file.inode": { "type": "keyword", "doc_values": "true" },
            "file.mode": { "type": "keyword", "doc_values": "true" },
            "acct": { "type": "keyword", "doc_values": "true" },
            "dev": { "type": "keyword", "doc_values": "true" },
            "enforcing": { "type": "keyword", "doc_values": "true" },
            "list": { "type": "keyword", "doc_values": "true" },
            "old-auid": { "type": "keyword", "doc_values": "true" },
            "old-ses": { "type": "keyword", "doc_values": "true" },
            "old_enforcing": { "type": "keyword", "doc_values": "true" },
            "old_prom": { "type": "keyword", "doc_values": "true" },
            "op": { "type": "keyword", "doc_values": "true" },
            "prom": { "type": "keyword", "doc_values": "true" },
            "res": { "type": "keyword", "doc_values": "true" },
            "srcip": { "type": "keyword", "doc_values": "true" },
            "subj": { "type": "keyword", "doc_values": "true" },
            "success": { "type": "keyword", "doc_values": "true" }
          }
        }
      }
    },
    "agent": {
      "properties": {
        "@timestamp": { "type": "date", "format": "dateOptionalTime" },
        "status": { "type": "keyword" },
        "ip": { "type": "keyword" },
        "host": { "type": "keyword" },
        "name": { "type": "keyword" },
        "id": { "type": "keyword" }
      }
    }
  }
}
@@ -1,19 +0,0 @@
# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
FROM nginx:latest

ENV DEBIAN_FRONTEND noninteractive

RUN apt-get update && apt-get install -y openssl apache2-utils

COPY config/entrypoint.sh /entrypoint.sh

RUN chmod 755 /entrypoint.sh

RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

VOLUME ["/etc/nginx/conf.d"]

ENV NGINX_NAME="foo" \
    NGINX_PWD="bar"

ENTRYPOINT /entrypoint.sh
@@ -1,79 +0,0 @@
#!/bin/bash
# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)

set -e

# Generating certificates.
if [ ! -d /etc/nginx/conf.d/ssl ]; then
  echo "Generating SSL certificates"
  mkdir -p /etc/nginx/conf.d/ssl/certs /etc/nginx/conf.d/ssl/private
  openssl req -x509 -batch -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/conf.d/ssl/private/kibana-access.key -out /etc/nginx/conf.d/ssl/certs/kibana-access.pem >/dev/null
else
  echo "SSL certificates already present"
fi

# Setting users credentials.
# In order to set NGINX_CREDENTIALS, before "docker-compose up -d" run (a or b):
#
# a) export NGINX_CREDENTIALS="user1:pass1;user2:pass2;" or
#    export NGINX_CREDENTIALS="user1:pass1;user2:pass2"
#
# b) Set NGINX_CREDENTIALS in docker-compose.yml:
#    NGINX_CREDENTIALS=user1:pass1;user2:pass2; or
#    NGINX_CREDENTIALS=user1:pass1;user2:pass2
#
if [ ! -f /etc/nginx/conf.d/kibana.htpasswd ]; then
  echo "Setting users credentials"
  if [ ! -z "$NGINX_CREDENTIALS" ]; then
    IFS=';' read -r -a users <<< "$NGINX_CREDENTIALS"
    for index in "${!users[@]}"
    do
      IFS=':' read -r -a credentials <<< "${users[index]}"
      if [ $index -eq 0 ]; then
        echo ${credentials[1]}|htpasswd -i -c /etc/nginx/conf.d/kibana.htpasswd ${credentials[0]} >/dev/null
      else
        echo ${credentials[1]}|htpasswd -i /etc/nginx/conf.d/kibana.htpasswd ${credentials[0]} >/dev/null
      fi
    done
  else
    # NGINX_PWD and NGINX_NAME are declared in nginx/Dockerfile
    echo $NGINX_PWD|htpasswd -i -c /etc/nginx/conf.d/kibana.htpasswd $NGINX_NAME >/dev/null
  fi
else
  echo "Kibana credentials already configured"
fi

if [ "x${NGINX_PORT}" = "x" ]; then
  NGINX_PORT=443
fi

if [ "x${KIBANA_HOST}" = "x" ]; then
  KIBANA_HOST="kibana:5601"
fi

echo "Configuring NGINX"
cat > /etc/nginx/conf.d/default.conf <<EOF
server {
    listen 80;
    listen [::]:80;
    return 301 https://\$host:${NGINX_PORT}\$request_uri;
}

server {
    listen ${NGINX_PORT} default_server;
    listen [::]:${NGINX_PORT};
    ssl on;
    ssl_certificate /etc/nginx/conf.d/ssl/certs/kibana-access.pem;
    ssl_certificate_key /etc/nginx/conf.d/ssl/private/kibana-access.key;
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;
        proxy_pass http://${KIBANA_HOST}/;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}
EOF

nginx -g 'daemon off;'
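The credentials loop in the entrypoint above first splits `NGINX_CREDENTIALS` on `;` and then each entry on `:`. A minimal sketch of just that parsing, with made-up sample values and the `htpasswd` calls dropped:

```shell
# Split NGINX_CREDENTIALS the same way the entrypoint does; a trailing ";"
# does not produce an extra empty entry.
NGINX_CREDENTIALS="user1:pass1;user2:pass2;"   # sample value, not a real secret
IFS=';' read -r -a users <<< "$NGINX_CREDENTIALS"
parsed=()
for index in "${!users[@]}"; do
  IFS=':' read -r -a credentials <<< "${users[index]}"
  parsed+=("${credentials[0]}=${credentials[1]}")
done
printf '%s\n' "${parsed[@]}"
```

In the real script the first pair is written with `htpasswd -c` (create file) and later pairs without `-c` (append), which is why the loop branches on `index -eq 0`.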
@@ -1,80 +1,35 @@
 # Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
-FROM phusion/baseimage:latest
+FROM centos:latest

-ARG FILEBEAT_VERSION=7.3.0
-ARG WAZUH_VERSION=3.9.5-1
-
-ENV API_USER="foo" \
-    API_PASS="bar"
-
-ARG TEMPLATE_VERSION="v3.9.5"
-
-# Set repositories.
-RUN set -x && echo "deb https://packages.wazuh.com/3.x/apt/ stable main" | tee /etc/apt/sources.list.d/wazuh.list && \
-    curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add - && \
-    curl --silent --location https://deb.nodesource.com/setup_8.x | bash - && \
-    echo "postfix postfix/mailname string wazuh-manager" | debconf-set-selections && \
-    echo "postfix postfix/main_mailer_type string 'Internet Site'" | debconf-set-selections && \
-    groupadd -g 1000 ossec && useradd -u 1000 -g 1000 -d /var/ossec ossec
-
-RUN add-apt-repository universe && apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold" && \
-    apt-get --no-install-recommends --no-install-suggests -y install openssl postfix bsd-mailx python-boto python-pip \
-    apt-transport-https vim expect nodejs python-cryptography mailutils libsasl2-modules wazuh-manager=${WAZUH_VERSION} \
-    wazuh-api=${WAZUH_VERSION} && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && rm -f \
-    /var/ossec/logs/alerts/*/*/*.log && rm -f /var/ossec/logs/alerts/*/*/*.json && rm -f \
-    /var/ossec/logs/archives/*/*/*.log && rm -f /var/ossec/logs/archives/*/*/*.json && rm -f \
-    /var/ossec/logs/firewall/*/*/*.log && rm -f /var/ossec/logs/firewall/*/*/*.json
-
-# Adding first run script and entrypoint
-COPY config/data_dirs.env /data_dirs.env
-COPY config/init.bash /init.bash
-RUN mkdir /entrypoint-scripts
-COPY config/entrypoint.sh /entrypoint.sh
-COPY config/00-wazuh.sh /entrypoint-scripts/00-wazuh.sh
-COPY config/01-config_filebeat.sh /entrypoint-scripts/01-config_filebeat.sh
+COPY config/*.repo /etc/yum.repos.d/
+
+RUN yum -y update; yum clean all;
+RUN yum -y install epel-release openssl useradd; yum clean all
+RUN yum -y install postfix mailx cyrus-sasl cyrus-sasl-plain; yum clean all
+RUN groupadd -g 1000 ossec
+RUN useradd -u 1000 -g 1000 ossec
+RUN yum install -y wazuh-manager wazuh-api
+
+ADD config/data_dirs.env /data_dirs.env
+ADD config/init.bash /init.bash

 # Sync calls are due to https://github.com/docker/docker/issues/9547
-RUN chmod 755 /init.bash && \
-    sync && /init.bash && \
-    sync && rm /init.bash && \
-    curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${FILEBEAT_VERSION}-amd64.deb &&\
-    dpkg -i filebeat-${FILEBEAT_VERSION}-amd64.deb && rm -f filebeat-${FILEBEAT_VERSION}-amd64.deb && \
-    chmod 755 /entrypoint.sh && \
-    chmod 755 /entrypoint-scripts/00-wazuh.sh && \
-    chmod 755 /entrypoint-scripts/01-config_filebeat.sh
+RUN chmod 755 /init.bash &&\
+    sync && /init.bash &&\
+    sync && rm /init.bash
+
+RUN curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.4.2-x86_64.rpm &&\
+    rpm -vi filebeat-5.4.2-x86_64.rpm && rm filebeat-5.4.2-x86_64.rpm

 COPY config/filebeat.yml /etc/filebeat/
-RUN chmod go-w /etc/filebeat/filebeat.yml

-# Setting volumes
+ADD config/run.sh /tmp/run.sh
+RUN chmod 755 /tmp/run.sh
+
 VOLUME ["/var/ossec/data"]
-VOLUME ["/etc/filebeat"]
-VOLUME ["/etc/postfix"]
-VOLUME ["/var/lib/filebeat"]

-# Services ports
-EXPOSE 55000/tcp 1514/udp 1515/tcp 514/udp 1516/tcp
+EXPOSE 55000/tcp 1514/udp 1515/tcp 514/udp

-# Adding services
-RUN mkdir /etc/service/wazuh && \
-    mkdir /etc/service/wazuh-api && \
-    mkdir /etc/service/postfix && \
-    mkdir /etc/service/filebeat
-
-COPY config/wazuh.runit.service /etc/service/wazuh/run
-COPY config/wazuh-api.runit.service /etc/service/wazuh-api/run
-COPY config/postfix.runit.service /etc/service/postfix/run
-COPY config/filebeat.runit.service /etc/service/filebeat/run
-
-RUN chmod +x /etc/service/wazuh-api/run && \
-    chmod +x /etc/service/wazuh/run && \
-    chmod +x /etc/service/postfix/run && \
-    chmod +x /etc/service/filebeat/run
-
-ADD https://raw.githubusercontent.com/wazuh/wazuh/$TEMPLATE_VERSION/extensions/elasticsearch/7.x/wazuh-template.json /etc/filebeat
-RUN chmod go-w /etc/filebeat/wazuh-template.json
-
-# Run all services
-ENTRYPOINT ["/entrypoint.sh"]
+# Run supervisord so that the container will stay alive
+ENTRYPOINT ["/tmp/run.sh"]
@@ -1,135 +0,0 @@
#!/bin/bash
# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)

# Wazuh container bootstrap. See the README for information of the environment
# variables expected by this script.

# Startup the services
source /data_dirs.env

FIRST_TIME_INSTALLATION=false

WAZUH_INSTALL_PATH=/var/ossec
DATA_PATH=${WAZUH_INSTALL_PATH}/data

WAZUH_CONFIG_MOUNT=/wazuh-config-mount

print() {
  echo -e $1
}

error_and_exit() {
  echo "Error executing command: '$1'."
  echo 'Exiting.'
  exit 1
}

exec_cmd() {
  eval $1 > /dev/null 2>&1 || error_and_exit "$1"
}

exec_cmd_stdout() {
  eval $1 2>&1 || error_and_exit "$1"
}

edit_configuration() { # $1 -> setting, $2 -> value
  sed -i "s/^config.$1\s=.*/config.$1 = \"$2\";/g" "${DATA_PATH}/api/configuration/config.js" || error_and_exit "sed (editing configuration)"
}

for ossecdir in "${DATA_DIRS[@]}"; do
  if [ ! -e "${DATA_PATH}/${ossecdir}" ]
  then
    print "Installing ${ossecdir}"
    exec_cmd "mkdir -p $(dirname ${DATA_PATH}/${ossecdir})"
    exec_cmd "cp -pr /var/ossec/${ossecdir}-template ${DATA_PATH}/${ossecdir}"
    FIRST_TIME_INSTALLATION=true
  fi
done

if [ -e ${WAZUH_INSTALL_PATH}/etc-template ]
then
  cp -p /var/ossec/etc-template/internal_options.conf /var/ossec/etc/internal_options.conf
fi
rm /var/ossec/queue/db/.template.db

touch ${DATA_PATH}/process_list
chgrp ossec ${DATA_PATH}/process_list
chmod g+rw ${DATA_PATH}/process_list

AUTO_ENROLLMENT_ENABLED=${AUTO_ENROLLMENT_ENABLED:-true}
API_GENERATE_CERTS=${API_GENERATE_CERTS:-true}

if [ $FIRST_TIME_INSTALLATION == true ]
then
  if [ $AUTO_ENROLLMENT_ENABLED == true ]
  then
    if [ ! -e ${DATA_PATH}/etc/sslmanager.key ]
    then
      print "Creating ossec-authd key and cert"
      exec_cmd "openssl genrsa -out ${DATA_PATH}/etc/sslmanager.key 4096"
      exec_cmd "openssl req -new -x509 -key ${DATA_PATH}/etc/sslmanager.key -out ${DATA_PATH}/etc/sslmanager.cert -days 3650 -subj /CN=${HOSTNAME}/"
    fi
  fi
  if [ $API_GENERATE_CERTS == true ]
  then
    if [ ! -e ${DATA_PATH}/api/configuration/ssl/server.crt ]
    then
      print "Enabling Wazuh API HTTPS"
      edit_configuration "https" "yes"
      print "Create Wazuh API key and cert"
      exec_cmd "openssl genrsa -out ${DATA_PATH}/api/configuration/ssl/server.key 4096"
      exec_cmd "openssl req -new -x509 -key ${DATA_PATH}/api/configuration/ssl/server.key -out ${DATA_PATH}/api/configuration/ssl/server.crt -days 3650 -subj /CN=${HOSTNAME}/"
    fi
  fi
fi

##############################################################################
# Copy all files from $WAZUH_CONFIG_MOUNT to $DATA_PATH and respect
# destination files permissions
#
# For example, to mount the file /var/ossec/data/etc/ossec.conf, mount it at
# $WAZUH_CONFIG_MOUNT/etc/ossec.conf in your container and this code will
# replace the ossec.conf file in /var/ossec/data/etc with yours.
##############################################################################
if [ -e "$WAZUH_CONFIG_MOUNT" ]
then
  print "Identified Wazuh configuration files to mount..."
  exec_cmd_stdout "cp --verbose -r $WAZUH_CONFIG_MOUNT/* $DATA_PATH"
else
  print "No Wazuh configuration files to mount..."
fi

function ossec_shutdown(){
  ${WAZUH_INSTALL_PATH}/bin/ossec-control stop;
}

# Trap exit signals and do a proper shutdown
trap "ossec_shutdown; exit" SIGINT SIGTERM

chmod -R g+rw ${DATA_PATH}

##############################################################################
# Interpret any passed arguments (via docker command to this entrypoint) as
# paths or commands, and execute them.
#
# This can be useful for actions that need to be run before the services are
# started, such as "/var/ossec/bin/ossec-control enable agentless".
##############################################################################
for CUSTOM_COMMAND in "$@"
do
  echo "Executing command \`${CUSTOM_COMMAND}\`"
  exec_cmd_stdout "${CUSTOM_COMMAND}"
done

##############################################################################
# Change Wazuh API user credentials.
##############################################################################

pushd /var/ossec/api/configuration/auth/

echo "Change Wazuh API user credentials"
change_user="node htpasswd -b -c user $API_USER $API_PASS"
eval $change_user

popd
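The `edit_configuration` helper above is a single `sed` in-place substitution. A sketch of the same expression run against a throwaway file (the temp file stands in for the real `${DATA_PATH}/api/configuration/config.js`):

```shell
# Same sed expression as edit_configuration, pointed at a temp file with
# illustrative contents instead of the real Wazuh API config.js.
cfg=$(mktemp)
printf 'config.https = "no";\nconfig.port = "55000";\n' > "$cfg"
edit_configuration() { # $1 -> setting, $2 -> value
  sed -i "s/^config.$1\s=.*/config.$1 = \"$2\";/g" "$cfg"
}
edit_configuration "https" "yes"
grep '^config.https' "$cfg"
```

Only the `config.https` line is rewritten; other `config.*` settings are left untouched because the anchor `^config.$1\s=` pins the match to the requested key.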
@@ -1,19 +0,0 @@
#!/bin/bash
# Wazuh App Copyright (C) 2019 Wazuh Inc. (License GPLv2)

set -e

WAZUH_FILEBEAT_MODULE=wazuh-filebeat-0.1.tar.gz

# Modify the output to Elasticsearch if the ELASTICSEARCH_URL is set
if [ "$ELASTICSEARCH_URL" != "" ]; then
  >&2 echo "Customize Elasticsearch output IP."
  sed -i 's|http://elasticsearch:9200|'$ELASTICSEARCH_URL'|g' /etc/filebeat/filebeat.yml
fi

# Install Wazuh Filebeat Module

curl -s "https://packages.wazuh.com/3.x/filebeat/${WAZUH_FILEBEAT_MODULE}" | tar -xvz -C /usr/share/filebeat/module
mkdir -p /usr/share/filebeat/module/wazuh
chmod 755 -R /usr/share/filebeat/module/wazuh
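The `ELASTICSEARCH_URL` substitution above can be exercised against a stub config file; the temp file and sample URL below are illustrative, standing in for `/etc/filebeat/filebeat.yml` and a real cluster address:

```shell
# Same sed invocation as 01-config_filebeat.sh, with a stub file as target.
fb=$(mktemp)
echo 'hosts: ["http://elasticsearch:9200"]' > "$fb"
ELASTICSEARCH_URL="http://es0:9200"   # sample value
sed -i 's|http://elasticsearch:9200|'$ELASTICSEARCH_URL'|g' "$fb"
cat "$fb"
```

Using `|` as the sed delimiter is what lets the URL's own `/` characters pass through without escaping.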
@@ -1,15 +1,9 @@
 i=0
-DATA_DIRS[((i++))]="api/configuration"
 DATA_DIRS[((i++))]="etc"
+DATA_DIRS[((i++))]="ruleset"
 DATA_DIRS[((i++))]="logs"
-DATA_DIRS[((i++))]="queue/db"
-DATA_DIRS[((i++))]="queue/rootcheck"
-DATA_DIRS[((i++))]="queue/agent-groups"
-DATA_DIRS[((i++))]="queue/agent-info"
-DATA_DIRS[((i++))]="queue/agents-timestamp"
-DATA_DIRS[((i++))]="queue/agentless"
-DATA_DIRS[((i++))]="queue/cluster"
-DATA_DIRS[((i++))]="queue/rids"
-DATA_DIRS[((i++))]="queue/fts"
-DATA_DIRS[((i++))]="var/multigroups"
+DATA_DIRS[((i++))]="stats"
+DATA_DIRS[((i++))]="queue"
+DATA_DIRS[((i++))]="var/db"
+DATA_DIRS[((i++))]="api"
 export DATA_DIRS
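The `((i++))` subscript in data_dirs.env is ordinary bash arithmetic evaluation inside an array index: each assignment lands at the current value of `i` and then increments it. Rebuilding the 2.0_5.4.2 list standalone shows the indices it assigns:

```shell
# Rebuild of the array exactly as the 2.0_5.4.2 data_dirs.env declares it;
# each subscript evaluates i, assigns at that index, then post-increments.
i=0
DATA_DIRS[((i++))]="etc"
DATA_DIRS[((i++))]="ruleset"
DATA_DIRS[((i++))]="logs"
DATA_DIRS[((i++))]="stats"
DATA_DIRS[((i++))]="queue"
DATA_DIRS[((i++))]="var/db"
DATA_DIRS[((i++))]="api"
export DATA_DIRS
echo "${#DATA_DIRS[@]} data dirs, last index $((i-1))"
```

The entrypoint scripts `source` this file and iterate `"${DATA_DIRS[@]}"` to copy each `-template` directory into `/var/ossec/data` on first run.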
@@ -1,14 +0,0 @@
#!/bin/bash
# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)

# It will run every .sh script located in entrypoint-scripts folder in lexicographical order
for script in `ls /entrypoint-scripts/*.sh | sort -n`; do
  bash "$script"
done

##############################################################################
# Start Wazuh Server.
##############################################################################

/sbin/my_init
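The loop above relies on `ls | sort -n` putting numeric prefixes like `00-` before `01-`. A sketch with throwaway script names in a temp directory (names are illustrative):

```shell
# Show the ordering `ls ... | sort -n` produces for numbered scripts.
d=$(mktemp -d)
touch "$d/00-wazuh.sh" "$d/01-config_filebeat.sh"
scripts=( $(cd "$d" && ls *.sh | sort -n) )
printf '%s\n' "${scripts[@]}"
rm -rf "$d"
```

The numeric prefixes are what guarantee `00-wazuh.sh` configures the manager before `01-config_filebeat.sh` touches the Filebeat config.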
@@ -1,4 +0,0 @@
#!/bin/sh
# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
service filebeat start
tail -f /var/log/filebeat/filebeat
@@ -1,53 +1,16 @@
 # Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
-filebeat.inputs:
-  - type: log
-    paths:
-      - '/var/ossec/logs/alerts/alerts.json'
-    json.message_key: log
-    json.keys_under_root: true
-    json.overwrite_keys: true
-
-setup.template.json.enabled: true
-setup.template.json.path: "/etc/filebeat/wazuh-template.json"
-setup.template.json.name: "wazuh"
-setup.template.overwrite: true
-
-processors:
-  - decode_json_fields:
-      fields: ['message']
-      process_array: true
-      max_depth: 200
-      target: ''
-      overwrite_keys: true
-  - drop_fields:
-      fields: ['message', 'ecs', 'beat', 'input_type', 'tags', 'count', '@version', 'log', 'offset', 'type', 'host']
-  - rename:
-      fields:
-        - from: "data.aws.sourceIPAddress"
-          to: "@src_ip"
-      ignore_missing: true
-      fail_on_error: false
-      when:
-        regexp:
-          data.aws.sourceIPAddress: \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b
-  - rename:
-      fields:
-        - from: "data.srcip"
-          to: "@src_ip"
-      ignore_missing: true
-      fail_on_error: false
-      when:
-        regexp:
-          data.srcip: \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b
-  - rename:
-      fields:
-        - from: "data.win.eventdata.ipAddress"
-          to: "@src_ip"
-      ignore_missing: true
-      fail_on_error: false
-      when:
-        regexp:
-          data.win.eventdata.ipAddress: \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b
-
-output.elasticsearch:
-  hosts: ['http://elasticsearch:9200']
-  #pipeline: geoip
-  indices:
-    - index: 'wazuh-alerts-3.x-%{+yyyy.MM.dd}'
+filebeat:
+  prospectors:
+    - input_type: log
+      paths:
+        - "/var/ossec/data/logs/alerts/alerts.json"
+      document_type: wazuh-alerts
+
+output:
+  logstash:
+    # The Logstash hosts
+    hosts: ["logstash:5000"]
+    # ssl:
+      # certificate_authorities: ["/etc/filebeat/logstash.crt"]
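The `rename` processors above only fire when the source field matches `\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b`. That is PCRE-style syntax; a rough bash equivalent using POSIX classes (since `[[ =~ ]]` speaks ERE, not `\d`/`\b`), with made-up sample values:

```shell
# Approximation of the IPv4 guard behind the @src_ip rename conditions;
# digit-boundary anchors replace the PCRE \b word boundaries.
looks_like_ipv4() {
  [[ "$1" =~ (^|[^0-9])[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}([^0-9]|$) ]]
}
looks_like_ipv4 "srcip=203.0.113.7" && echo "match"
looks_like_ipv4 "no address here" || echo "no match"
```

Like the original pattern, this is a loose check: it accepts octets up to 999, which is apparently good enough to gate the rename.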
@@ -1,7 +1,8 @@
 #!/bin/bash
-# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
+#
 # Initialize the custom data directory layout
-
+#
 source /data_dirs.env

 cd /var/ossec
@@ -1,4 +0,0 @@
#!/bin/sh
# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
service postfix start
tail -f /var/log/mail.log
79 wazuh/config/run.sh Normal file
@@ -0,0 +1,79 @@
#!/bin/bash

#
# OSSEC container bootstrap. See the README for information of the environment
# variables expected by this script.
#

#
# Startup the services
#

source /data_dirs.env
FIRST_TIME_INSTALLATION=false
DATA_PATH=/var/ossec/data

for ossecdir in "${DATA_DIRS[@]}"; do
  if [ ! -e "${DATA_PATH}/${ossecdir}" ]
  then
    echo "Installing ${ossecdir}"
    mkdir -p $(dirname ${DATA_PATH}/${ossecdir})
    cp -pr /var/ossec/${ossecdir}-template ${DATA_PATH}/${ossecdir}
    FIRST_TIME_INSTALLATION=true
  fi
done

touch ${DATA_PATH}/process_list
chgrp ossec ${DATA_PATH}/process_list
chmod g+rw ${DATA_PATH}/process_list

AUTO_ENROLLMENT_ENABLED=${AUTO_ENROLLMENT_ENABLED:-true}

if [ $FIRST_TIME_INSTALLATION == true ]
then
  if [ $AUTO_ENROLLMENT_ENABLED == true ]
  then
    if [ ! -e ${DATA_PATH}/etc/sslmanager.key ]
    then
      echo "Creating ossec-authd key and cert"
      openssl genrsa -out ${DATA_PATH}/etc/sslmanager.key 4096
      openssl req -new -x509 -key ${DATA_PATH}/etc/sslmanager.key\
              -out ${DATA_PATH}/etc/sslmanager.cert -days 3650\
              -subj /CN=${HOSTNAME}/
    fi
  fi
fi

function ossec_shutdown(){
  /var/ossec/bin/ossec-control stop;
  if [ $AUTO_ENROLLMENT_ENABLED == true ]
  then
    kill $AUTHD_PID
  fi
}

# Trap exit signals and do a proper shutdown
trap "ossec_shutdown; exit" SIGINT SIGTERM

chmod -R g+rw ${DATA_PATH}

if [ $AUTO_ENROLLMENT_ENABLED == true ]
then
  echo "Starting ossec-authd..."
  /var/ossec/bin/ossec-authd -p 1515 -g ossec $AUTHD_OPTIONS >/dev/null 2>&1 &
  AUTHD_PID=$!
fi
sleep 15 # give ossec a reasonable amount of time to start before checking status
LAST_OK_DATE=`date +%s`

## Start services
/usr/sbin/postfix start
/bin/node /var/ossec/api/app.js &
/usr/bin/filebeat.sh &
/var/ossec/bin/ossec-control restart

tail -f /var/ossec/logs/ossec.log
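run.sh stays alive in `tail -f` and relies on a `trap` to stop the services when Docker signals the container. The signal-handling pattern can be sketched standalone, with a stub function in place of `ossec-control stop` (everything here is illustrative):

```shell
# Trap-based shutdown pattern: the handler is registered, the shell signals
# itself, and the handler runs before the next command executes.
shutdown_called=0
fake_shutdown() { shutdown_called=1; }   # stands in for ossec_shutdown
trap "fake_shutdown" SIGTERM
kill -s TERM $$
[ "$shutdown_called" -eq 1 ] && echo "shutdown handler ran"
```

In the real script the trap also `exit`s, so `docker stop` (SIGTERM) tears down ossec-authd and the manager instead of leaving `tail -f` to be killed outright.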
@@ -1,5 +0,0 @@
#!/bin/sh
# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
service wazuh-api start
tail -f /var/ossec/data/logs/api.log
7 wazuh/config/wazuh.repo Normal file
@@ -0,0 +1,7 @@
[wazuh_repo]
gpgcheck=1
gpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH
enabled=1
name=CENTOS-$releasever - Wazuh
baseurl=https://packages.wazuh.com/yum/el/$releasever/$basearch
protect=1
@@ -1,5 +0,0 @@
#!/bin/sh
# Wazuh Docker Copyright (C) 2019 Wazuh Inc. (License GPLv2)
service wazuh-manager start
tail -f /var/ossec/data/logs/ossec.log