Compare commits

..

128 Commits

Author SHA1 Message Date
9 Technology Group LTD
19f8f85d38 Merge pull request #309 from jaas666/qol/remove-arch-selection
Remove architecture selection from agent installation UI
2025-11-17 01:35:31 +00:00
9 Technology Group LTD
2b7aeda980 Merge pull request #321 from PatchMon/feature/alpine
ui improvements on the toggle
2025-11-17 01:23:24 +00:00
Muhammad Ibrahim
d2b36f36c9 ui improvements on the toggle 2025-11-17 01:18:20 +00:00
9 Technology Group LTD
b02ed6aa3b Merge pull request #320 from PatchMon/feature/alpine
added reboot required flag
2025-11-16 22:54:15 +00:00
Muhammad Ibrahim
539bbb7fbc added reboot required flag
added kernel running/installed version
Added dashboard card and filters for reboot
fixed security qty updated on dnf
2025-11-16 22:50:41 +00:00
9 Technology Group LTD
3a6f04f748 Merge pull request #319 from PatchMon/feature/alpine
updated agent files
2025-11-16 19:03:27 +00:00
Muhammad Ibrahim
8df6ca2342 updated agent files
Fixed removal script
Fixed agent version checking
2025-11-16 19:00:10 +00:00
9 Technology Group LTD
0e8d74e821 Merge pull request #318 from PatchMon/feature/alpine
openssl alpine binary for Docker
2025-11-15 01:04:44 +00:00
Muhammad Ibrahim
0dbcc3c2c3 openssl alpine binary for Docker 2025-11-15 01:03:43 +00:00
9 Technology Group LTD
ab23eaf7bd Merge pull request #317 from PatchMon/feature/alpine
fixed node permissions
2025-11-15 00:54:29 +00:00
Muhammad Ibrahim
bfc1cc3bf0 fixed node permissions 2025-11-15 00:53:37 +00:00
9 Technology Group LTD
d33992b5f7 Merge pull request #316 from PatchMon/feature/alpine
adding ssl
2025-11-15 00:46:00 +00:00
Muhammad Ibrahim
3e5af312b6 adding ssl 2025-11-15 00:34:19 +00:00
9 Technology Group LTD
823b89f2c1 Merge pull request #315 from PatchMon/feature/alpine
arm builder
2025-11-15 00:30:43 +00:00
Muhammad Ibrahim
84cdc7224f arm builder 2025-11-15 00:24:08 +00:00
9 Technology Group LTD
c770bf1444 API for auto-enrollment
Api
2025-11-15 00:08:37 +00:00
Muhammad Ibrahim
307970ebd4 Fixed formatting 2025-11-15 00:03:02 +00:00
Muhammad Ibrahim
9da341f84c auto-enrolment enhancements 2025-11-14 23:57:43 +00:00
Muhammad Ibrahim
1ca8bf8581 better auto-enrollment system 2025-11-14 22:53:48 +00:00
Muhammad Ibrahim
a4bc9c4aed workspace fix 2025-11-14 21:00:55 +00:00
Muhammad Ibrahim
8f25bc5b8b fixed docker build 2025-11-14 20:54:43 +00:00
Muhammad Ibrahim
a37b479de6 1.3.4 2025-11-14 20:45:40 +00:00
Juan A.
8cf29c9bbb remove-architecture-selection 2025-11-13 19:28:11 -06:00
9 Technology Group LTD
3f18074f01 Merge pull request #304 from PatchMon/feature/alpine
new binary for alpine apk support
2025-11-11 12:49:09 +00:00
Muhammad Ibrahim
ab700a3bc8 new binary for alpine apk support 2025-11-11 12:42:00 +00:00
9 Technology Group LTD
9857d7cdfc Merge pull request #302 from PatchMon/feature/api
added the migration file
2025-11-10 22:02:37 +00:00
Muhammad Ibrahim
3f6466c80a added the migration file 2025-11-10 22:00:02 +00:00
9 Technology Group LTD
d7d47089b2 Merge pull request #301 from PatchMon/feature/api
Feature/api
2025-11-10 20:41:36 +00:00
Muhammad Ibrahim
d1069a8bd0 api endpoint and scopes created 2025-11-10 20:34:03 +00:00
Muhammad Ibrahim
bedcd1ac73 added api scope creator 2025-11-10 20:32:40 +00:00
Muhammad Ibrahim
f0b028cb77 alpine support on the agent installation script 2025-11-08 22:00:34 +00:00
9 Technology Group LTD
427743b81e Merge pull request #294 from PatchMon/feature/alpine
alpine (apk) support for agents
2025-11-08 21:26:04 +00:00
Muhammad Ibrahim
8c2d4aa42b alpine (apk) support for agents 2025-11-08 21:15:08 +00:00
9 Technology Group LTD
a4922b4e54 Merge pull request #292 from PatchMon/feature/alpine
arm support
2025-11-08 12:24:42 +00:00
Muhammad Ibrahim
082ceed27c arm support 2025-11-08 12:23:50 +00:00
9 Technology Group LTD
5a3938d7fc Merge pull request #291 from PatchMon/1-3-3
fixed env flag issue when auto-enrolling into proxmox
2025-11-08 09:35:52 +00:00
Muhammad Ibrahim
eb433719dd fixed env flag issue when auto-enrolling into proxmox 2025-11-08 09:34:39 +00:00
9 Technology Group LTD
106ab6f5f8 Merge pull request #290 from PatchMon/1-3-3
new agent files for 1.3.3
2025-11-07 23:07:49 +00:00
Muhammad Ibrahim
148ff2e77f new agent files for 1.3.3 2025-11-07 23:02:53 +00:00
9 Technology Group LTD
a9e4349f5f Merge pull request #289 from PatchMon/1-3-3
1 3 3
2025-11-07 22:33:28 +00:00
Muhammad Ibrahim
a655a24f2f 1.3.3 agents 2025-11-07 22:32:11 +00:00
Muhammad Ibrahim
417f6deccf 1.3.3 version changes 2025-11-07 22:29:13 +00:00
9 Technology Group LTD
3c780d07ff Merge pull request #288 from PatchMon/1-3-3
Bug fixes
2025-11-07 22:14:37 +00:00
Muhammad Ibrahim
55de7b40ed Merge remote 1-3-3 branch with local changes 2025-11-07 22:07:30 +00:00
Muhammad Ibrahim
90e56d62bb Update biome to 2.3.4 to match CI 2025-11-07 22:07:21 +00:00
Muhammad Ibrahim
497aeb8068 fixed biome and added tz 2025-11-07 22:04:27 +00:00
Muhammad Ibrahim
f5b0e930f7 Fixed reporting mechanism 2025-11-07 18:33:17 +00:00
Muhammad Ibrahim
e73ebc383c fixed the issue with dashboard graphs not showing information correctly; statistics are now queued into a dedicated table and added to the automation queue 2025-11-07 10:00:19 +00:00
9 Technology Group LTD
ef0bcd2240 Merge pull request #285 from FutureCow/patch-1
Add IPv6 support to server configuration
2025-11-07 08:23:46 +00:00
Muhammad Ibrahim
63831caba3 fixed tfa route for handling insertion of tfa number
Better handling of systems that are already enrolled, done by checking whether the config.yml file exists and pinging with its credentials, instead of checking for machine_id
UI justification improvements on repositories pages
2025-11-07 08:20:42 +00:00
Muhammad Ibrahim
8e5eb54e02 fixed code quality 2025-11-06 22:16:35 +00:00
Muhammad Ibrahim
a8eb3ec21c fix docker error handling
fix websocket routes
Add timezone variable in code
changed the env.example to suit
2025-11-06 22:08:00 +00:00
FutureCow
c56debc80e Add IPv6 support to server configuration
Add IPv6 support to server configuration
2025-11-05 20:34:13 +01:00
Muhammad Ibrahim
e57ff7612e removed emoji 2025-11-01 03:25:31 +00:00
Muhammad Ibrahim
7a3d98862f Fix emoji parsing error in print functions
- Changed from echo -e to printf for safer special character handling
- Store emoji characters in variables using bash octal escape sequences
- Prevents 'command not found' error when bash interprets emoji as commands
- Fixes issue where line 41 error occurred during setup.sh --update
2025-11-01 02:58:34 +00:00
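The octal-escape technique described in this commit can be sketched as follows; the helper name and message are illustrative, not the actual setup.sh code:

```shell
#!/bin/sh
# Store the emoji via an octal escape so the raw UTF-8 bytes never appear
# in the script text; bash then has nothing to mis-parse as a command.
CHECK_MARK="$(printf '\342\234\224')"   # U+2714 HEAVY CHECK MARK

print_status() {
    # printf is safer than echo -e: echo's -e flag is not portable
    # (dash's builtin echo treats "-e" as literal text).
    printf '%s %s\n' "$CHECK_MARK" "$1"
}

print_status "Update complete"
```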
9 Technology Group LTD
913976b7f6 1.3.2 final
fixed theme and user profile settings
2025-10-31 22:25:22 +00:00
Muhammad Ibrahim
53ff3bb1e2 fixed theme and user profile settings 2025-10-31 22:17:24 +00:00
9 Technology Group LTD
428207bc58 Merge pull request #262 from PatchMon/release/1-3-2
Fixed TFA fingerprint sending in CORS
2025-10-31 20:57:59 +00:00
Muhammad Ibrahim
1547af6986 Fixed TFA fingerprint sending in CORS
Amended the issue with adding firstname and lastname in the profile
2025-10-31 20:50:01 +00:00
9 Technology Group LTD
39fbafe01f Merge pull request #261 from PatchMon/release/1-3-1
Added migration file for the theme preferences
2025-10-31 18:18:47 +00:00
Muhammad Ibrahim
f296cf2003 Added migration file for the theme preferences 2025-10-31 18:17:48 +00:00
9 Technology Group LTD
052a77dce8 Merge pull request #260 from PatchMon/release/1-3-2
Release/1 3 2
2025-10-31 17:46:58 +00:00
Muhammad Ibrahim
94bfffd882 Theme settings per user 2025-10-31 17:33:47 +00:00
Muhammad Ibrahim
37462f4831 New 1.3.2 Agent 2025-10-31 15:41:01 +00:00
Muhammad Ibrahim
5457a1e9bc Docker implementation
Profile fixes
Hostgroup fixes
TFA fixes
2025-10-31 15:24:53 +00:00
9 Technology Group LTD
f3bca4a6d5 Merge pull request #245 from alan7000/alan7000-patch-1
Enable listen IPv6 for port 3000
2025-10-29 20:39:29 +00:00
Flambard alan
ca4d34c230 Enable listen IPv6 for port 3000 2025-10-29 17:07:50 +01:00
9 Technology Group LTD
1e75f2b1fe Merge pull request #242 from PatchMon/release/1-3-1
Agent update initiation from server
2025-10-28 21:55:57 +00:00
Muhammad Ibrahim
79317b0052 Added server initiated Agent update 2025-10-28 21:49:19 +00:00
Muhammad Ibrahim
77a945a5b6 fix: add Prisma connection pool variables to update_env_file function
- Added DB_CONNECTION_LIMIT, DB_POOL_TIMEOUT, DB_CONNECT_TIMEOUT, DB_IDLE_TIMEOUT, DB_MAX_LIFETIME
- Sets defaults (30, 20, 10, 300, 1800) if not already in .env
- Checks for missing variables during update mode
- Automatically adds them with comment header
- Ensures existing installations get connection pool settings on update
2025-10-28 19:55:42 +00:00
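The check-and-append behaviour this commit describes can be sketched like this; the function name is an assumption, while the variable names and defaults come from the commit message:

```shell
#!/bin/sh
# Append any missing Prisma connection pool variables to an env file,
# under a comment header, without touching values that already exist.
ensure_pool_vars() {
    env_file="$1"
    missing=""
    for pair in DB_CONNECTION_LIMIT=30 DB_POOL_TIMEOUT=20 \
                DB_CONNECT_TIMEOUT=10 DB_IDLE_TIMEOUT=300 DB_MAX_LIFETIME=1800; do
        var="${pair%%=*}"
        grep -q "^${var}=" "$env_file" 2>/dev/null || missing="$missing $pair"
    done
    if [ -n "$missing" ]; then
        printf '\n# Prisma connection pool settings\n' >> "$env_file"
        for pair in $missing; do
            echo "$pair" >> "$env_file"
        done
    fi
}
```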
9 Technology Group LTD
276d910e83 Merge pull request #238 from PatchMon/release/1-3-1
fix: handle non-git installations in update mode
2025-10-28 19:33:22 +00:00
Muhammad Ibrahim
dae536e96b fix: handle non-git installations in update mode
- Added fallback to re-initialize git repository if .git directory missing
- Automatically detects current version from package.json
- Attempts to match to correct release branch
- Falls back to main branch if version detection fails
- Prevents update failures for legacy installations
2025-10-28 19:21:11 +00:00
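The version-detection step of this fallback might look like the sketch below; the grep/cut parsing is an assumption — the real setup.sh may read package.json differently:

```shell
#!/bin/sh
# Extract the "version" field from a package.json without requiring jq,
# so the updater can guess the matching release branch.
detect_version() {
    grep '"version"' "$1" | head -n 1 | cut -d'"' -f4
}
```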
9 Technology Group LTD
8361caabe8 Merge pull request #236 from PatchMon/release/1-3-1
Release/1 3 1
2025-10-28 19:03:34 +00:00
Muhammad Ibrahim
f6d23e45b2 feat: make triangular accents more visible across background
- Increased triangle opacity from 2-5% to 5-13%
- Increased coverage from 30% to 60% of grid positions
- Slightly larger triangles (50-150px vs 40-120px)
- Smaller cell size (180px vs 200px) for better distribution
- Now clearly visible across entire background
2025-10-28 18:55:30 +00:00
Muhammad Ibrahim
aba0f5cb6b feat: add subtle triangular accents to clean gradient background
- Kept clean radial gradient base
- Added very faint triangular shapes (2-5% opacity)
- Sparse pattern - only 30% of grid positions
- Random sizes (40-120px) and positions for organic feel
- Perfect balance of clean + subtle geometric detail
2025-10-28 18:53:46 +00:00
Muhammad Ibrahim
2ec2b3992c feat: replace with clean radial gradient background
- Removed busy triangular pattern
- Clean, subtle radial gradient like modern Linux wallpapers
- Light center (top-left) flowing to darker bottom-right corner
- Uses theme colors but much more minimalist
- Daily color rotation maintained
2025-10-28 18:52:00 +00:00
Muhammad Ibrahim
f85721b292 feat: improve background with low-poly triangular pattern
- Replaced square grid with triangular mesh for more organic look
- Increased default cell size from 75px to 150px for subtlety
- Added variance to point positions for natural randomness
- Each cell now renders two triangles with gradient fills
- Maintains theme colors and daily variation
2025-10-28 18:49:39 +00:00
Muhammad Ibrahim
1d2c003830 fix: replace trianglify with pure browser gradient mesh generator
- Removed trianglify dependency (which required Node.js canvas)
- Implemented custom gradient mesh using HTML5 Canvas API
- Browser-native solution works in Docker without native dependencies
- Maintains daily variation and theme color support
- No more 'i.from is not a function' errors
2025-10-28 18:47:56 +00:00
Muhammad Ibrahim
2975da0f69 fix: use SVG-based rendering for Trianglify in browser instead of Node.js canvas 2025-10-28 18:35:52 +00:00
Muhammad Ibrahim
93760d03e1 feat: maintain nginx-unprivileged security while adding canvas runtime libraries via multi-stage build 2025-10-28 18:35:03 +00:00
Muhammad Ibrahim
43fb54a683 fix: change nginx-unprivileged to nginx:alpine for canvas runtime dependencies 2025-10-28 18:33:43 +00:00
Muhammad Ibrahim
e9368d1a95 feat: add canvas runtime dependencies to frontend Docker image for Trianglify support 2025-10-28 18:31:55 +00:00
Muhammad Ibrahim
3ce8c02a31 fix: suppress Trianglify errors in production to reduce console noise 2025-10-28 18:30:49 +00:00
Muhammad Ibrahim
ac420901a6 fix: add try-catch protection for Trianglify canvas generation to prevent runtime errors in Docker 2025-10-28 18:25:13 +00:00
Muhammad Ibrahim
eb0218bdcb fix: unset color variables before sourcing .env to prevent ANSI escape sequence errors 2025-10-28 18:21:52 +00:00
9 Technology Group LTD
1f6f58360f Merge pull request #235 from PatchMon/release/1-3-1
fix: Remove --ignore-scripts to allow trianglify/canvas to install
2025-10-28 18:12:04 +00:00
Muhammad Ibrahim
746451c296 debug: Add verbose logging to npm install for ARM64 builds
Added echo statements and --loglevel=verbose to see exactly what npm is
doing during the long ARM64 build process. This will help diagnose why
it's taking so long under QEMU emulation.
2025-10-28 18:01:02 +00:00
Muhammad Ibrahim
285e4c59ee fix: Install canvas build dependencies to make trianglify work
Canvas requires native build tools to compile:
- python3: For node-gyp
- make, g++, pkgconfig: C++ compiler and build tools
- cairo-dev, jpeg-dev, pango-dev, giflib-dev: Native libraries

By installing these dependencies before npm install, canvas can properly
build its native bindings for both AMD64 and ARM64 architectures.

Removed try-catch blocks around trianglify since it should now work properly.
2025-10-28 17:36:18 +00:00
Muhammad Ibrahim
9050595b7c fix: Make trianglify/canvas optional and handle gracefully
Canvas requires Python and native build tools which fail in Docker builds
with --ignore-scripts. Since trianglify is only used for cosmetic backgrounds
on the login and layout pages, we can make it optional.

Changes:
- Added try-catch around trianglify usage in Login and Layout
- Using --ignore-scripts in frontend Dockerfile to skip canvas native build
- Gracefully fail when trianglify is not available (background won't render)

This allows the frontend to build and run even without canvas/trianglify.
2025-10-28 17:35:15 +00:00
Muhammad Ibrahim
cc46940b0c fix: Remove --ignore-scripts to allow trianglify/canvas to install
The 'i.from is not a function' error occurs because trianglify depends
on canvas, and --ignore-scripts prevented canvas from installing properly.

Solution: Remove --ignore-scripts and allow npm to run install scripts,
but gracefully handle canvas rebuild failures since native bindings
are optional for the production build.
2025-10-28 17:31:39 +00:00
9 Technology Group LTD
203a065479 Merge pull request #234 from PatchMon/release/1-3-1
fix: Add npm fetch retries to handle transient network errors
2025-10-28 17:18:12 +00:00
Muhammad Ibrahim
8864de6c15 fix: Add npm fetch retries to handle transient network errors
ARM64 builds under QEMU are slower and more prone to network timeouts.
Changed from --fetch-retries=0 to --fetch-retries=3 with exponential backoff:
- Min timeout: 20 seconds
- Max timeout: 120 seconds

This should handle transient ECONNRESET errors from npm registry.
2025-10-28 17:10:45 +00:00
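In flag form, the retry settings from this commit would look roughly like the fragment below (timeouts are in milliseconds); the `npm ci` invocation and its Dockerfile context are assumptions, only the retry values come from the commit:

```shell
# Dockerfile RUN fragment (sketch): retry transient registry failures
# instead of failing fast, which matters under slow QEMU-emulated builds.
npm ci \
  --fetch-retries=3 \
  --fetch-retry-mintimeout=20000 \
  --fetch-retry-maxtimeout=120000
```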
9 Technology Group LTD
96aedbe761 Merge pull request #233 from PatchMon/release/1-3-1
Release/1 3 1
2025-10-28 17:06:02 +00:00
Muhammad Ibrahim
3df2057f7e fix: Remove --omit=dev to install Vite and other build tools
Vite is in devDependencies and is required to build the frontend.
Using --omit=dev skipped it, causing 'vite: not found' error.

Build dependencies are needed during the builder stage, they won't be
included in the final production image since we copy only the built
artifacts (dist/) to the nginx image.
2025-10-28 16:53:52 +00:00
Muhammad Ibrahim
42f4e58bb4 fix: Add --ignore-scripts to prevent canvas native build in frontend
The trianglify package depends on canvas, which tries to build native binaries
requiring Python and build tools. Since canvas is not actually needed for the
browser build (trianglify uses it only server-side), we can skip all install
scripts with --ignore-scripts to avoid the build failure.

This fixes the ARM64 and AMD64 frontend builds.
2025-10-28 16:51:39 +00:00
Muhammad Ibrahim
12eef22912 fix: Use npm install instead of npm ci for frontend (no package-lock.json)
The frontend subdirectory doesn't have its own package-lock.json (only at
workspace root), so npm ci fails. Using npm install instead to generate
dependencies from package.json.
2025-10-28 16:43:25 +00:00
Muhammad Ibrahim
c2121e3995 fix: Build only frontend workspace, not root monorepo dependencies
The previous approach installed workspace root dependencies which included
backend packages like 'canvas' that require native build tools (Python, gcc).

Changes:
- Work directly in /app/frontend instead of /app root
- Copy only frontend/package*.json (not root package.json)
- Run 'npm run build' instead of 'npm run build:frontend'
- This installs only frontend dependencies (Vite, React, etc.)

This avoids attempting to build unnecessary backend dependencies in the
frontend Docker image.
2025-10-28 16:41:35 +00:00
Muhammad Ibrahim
6792f96af9 fix: Ensure rollup ARM64 native binaries are installed in frontend build
Removed --ignore-scripts flag and added cache cleanup to ensure optional
dependencies like @rollup/rollup-linux-arm64-musl are properly installed.
This mirrors the fix applied to the backend Dockerfile for ARM64 support.
2025-10-28 16:39:27 +00:00
Muhammad Ibrahim
1e617c8bb8 fix: Regenerate package-lock.json to remove corrupted npm registry URLs
The workspace package-lock.json had corrupted 'resolved' URLs for many packages
including string_decoder, causing Docker builds to fail with 404 errors.

Changes:
- Regenerated root package-lock.json which manages all workspace dependencies
- Restored npm ci in Dockerfile (now that lockfile is fixed)
- Keep PRISMA_CLI_BINARY_TYPE=binary for ARM64 compatibility

This should resolve both AMD64 and ARM64 Docker build failures.
2025-10-28 16:35:28 +00:00
Muhammad Ibrahim
a76c5b8963 fix: Use npm install to regenerate package-lock.json and bypass corruption
- Delete package-lock.json in Docker build to avoid corrupted string_decoder entry
- Use npm install instead of npm ci to regenerate package-lock.json fresh
- This avoids the 'string_decoder is a core module' error that package-lock.json had cached
2025-10-28 16:31:52 +00:00
Muhammad Ibrahim
212b24b1c8 fix: Force npm to prefer online registry and disable fetch retries
- Added --prefer-online flag to npm ci to bypass corrupted cache
- Set --fetch-retries=0 to fail fast on corrupted packages
- Removed /root/.npm directory as well to clear all caches
- Fixed typo in workflow name
2025-10-28 16:30:14 +00:00
Muhammad Ibrahim
9fc3f4f9d1 fix: Enable ARM64 builds with improved QEMU support
- Re-enabled linux/arm64 platform builds
- Added PRISMA_CLI_BINARY_TYPE=binary to use pre-compiled binaries
- This avoids native compilation issues under QEMU emulation
- Use npm script for Prisma generation instead of npx
2025-10-28 16:27:45 +00:00
Muhammad Ibrahim
3029278742 fix: Build only linux/amd64 to avoid QEMU emulation failures
ARM64 builds were failing with 'Illegal instruction' errors during npm ci
due to QEMU emulation issues. Since PatchMon is a Node.js application,
AMD64 images will run fine on ARM64 systems (like Apple Silicon Macs)
using Rosetta emulation. This simplifies the build process.
2025-10-28 16:25:58 +00:00
Muhammad Ibrahim
e4d6c1205c fix: Remove entire npm cache directory to fix corrupted tarball issue
The get-intrinsic package tarball is getting corrupted in the npm cache.
By removing ~/.npm entirely and fetching fresh packages, we avoid the
corrupted cache issue that's causing the string_decoder error.
2025-10-28 16:19:25 +00:00
Muhammad Ibrahim
0f5272d12a fix: Add legacy-peer-deps flag to npm ci to resolve string_decoder build error
The string_decoder error occurs when npm tries to install it as a separate package
when it's actually a Node.js core module. Using --legacy-peer-deps helps resolve
this peer dependency conflict during Docker builds.
2025-10-28 16:12:46 +00:00
Muhammad Ibrahim
5776d32e71 fix: Improve Docker build reliability by cleaning npm cache before npm ci
- Move npm cache clean before npm ci to prevent corrupted package issues
- This fixes the string_decoder error occurring during GitHub Actions builds
2025-10-28 16:12:31 +00:00
Muhammad Ibrahim
a11ff842eb fix: Remove unused getSettings import in metricsReporting.js 2025-10-28 16:06:36 +00:00
Muhammad Ibrahim
48ce1951de fix: Resolve all linting errors
- Remove unused imports and variables in metricsRoutes.js
- Prefix unused error variables with underscore
- Fix useEffect dependency in Login.jsx
- Add aria-label and title to all SVG elements for accessibility
2025-10-28 16:06:36 +00:00
Muhammad Ibrahim
9705e24b83 docs: Add complete Prisma connection pool variables to Docker Compose files
Added DB_IDLE_TIMEOUT and DB_MAX_LIFETIME to both production and dev Docker Compose files to complete the connection pool configuration. These variables were already documented but missing from the compose files.
2025-10-28 16:06:36 +00:00
Muhammad Ibrahim
933c7a067e perf: Optimize packages endpoint - reduce from N*3 queries to 3 batch queries
- Replace individual queries per package with batch GROUP BY queries
- Reduces from potentially hundreds of queries to just 3 queries total
- Creates lookup maps for O(1) access when assembling results
- Improves packages page loading time significantly
2025-10-28 16:06:36 +00:00
Muhammad Ibrahim
68f10c6c43 fix: Revert axios timeout to 10 seconds
Nothing should take longer than 10 seconds to respond. If it does, it's a server/query optimization issue, not a timeout issue.
2025-10-28 16:06:36 +00:00
Muhammad Ibrahim
4b6f19c28e fix: Replace SSE with polling for WebSocket status to prevent connection pool exhaustion
- Replace persistent SSE connections with lightweight polling (10s interval)
- Optimize WebSocket status fetching using bulk endpoint instead of N individual calls
- Fix N+1 query problem in /dashboard/hosts endpoint (39 queries → 4 queries)
- Increase database connection pool limit from 5 to 50 via environment variables
- Increase Axios timeout from 10s to 30s for complex operations
- Fix malformed WebSocket routes causing 404 on bulk status endpoint

Fixes timeout issues when adding hosts with multiple WebSocket agents connected.
Reduces database connections from 19 persistent SSE + retries to 1 poll every 10 seconds.
2025-10-28 16:06:36 +00:00
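The design trade-off here — one cheap bulk poll on an interval instead of N persistent SSE connections — can be sketched generically; this is not PatchMon's frontend code (which polls from the browser), and the parameters are illustrative:

```shell
#!/bin/sh
# Poll a bulk-status command on a fixed interval; a single call per tick
# replaces one persistent connection per agent.
poll_status() {
    fetch_cmd="$1"   # command that fetches the bulk status payload
    interval="$2"    # seconds between polls (PatchMon uses 10)
    tries="$3"       # give up after this many attempts
    i=0
    while [ "$i" -lt "$tries" ]; do
        $fetch_cmd && return 0
        i=$((i + 1))
        sleep "$interval"
    done
    return 1
}
```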
Muhammad Ibrahim
ae6afb0ef4 Building Docker compatibility within the Agent 2025-10-28 16:06:36 +00:00
9 Technology Group LTD
61523c9a44 Add bulk status endpoint and update frontend
This script fixes the HTTP connection limit issue by adding a bulk status endpoint to the backend and updating the frontend to utilize this endpoint. Backups of the modified files are created before changes are applied.
2025-10-28 12:23:46 +00:00
9 Technology Group LTD
3f9a5576ac Fix file path in fixconnlimit.sh 2025-10-27 16:57:53 +00:00
9 Technology Group LTD
e2dd7acca5 Rename fixconnlimit to fixconnlimit.sh 2025-10-27 16:56:00 +00:00
9 Technology Group LTD
1c3b01f13c Add script to update connection pool values in prisma.js
This script updates hardcoded connection pool values in prisma.js, allowing for customizable connection settings through command-line arguments. It also creates a backup of the original file before applying changes.
2025-10-27 16:55:48 +00:00
Muhammad Ibrahim
2c5a35b6c2 Added diagnostics scripts and improved setup with better Redis DB server handling 2025-10-24 21:25:15 +01:00
Muhammad Ibrahim
f42c53d34b Added support for allowing self-signed certificates that the new Go agent can also use 2025-10-23 20:57:31 +01:00
Muhammad Ibrahim
95800e6d76 Upgrading to version 1.3.1 2025-10-23 20:27:11 +01:00
9 Technology Group LTD
8d372411be Merge pull request #208 from PatchMon/post1-3-0
Fixed some ratelimits that were hardcoded and amended docker compose…
2025-10-22 15:37:50 +01:00
Muhammad Ibrahim
de449c547f Fixed some hardcoded rate limits and amended docker compose to take rate limits into consideration 2025-10-22 15:22:14 +01:00
9 Technology Group LTD
cd03f0e66a Merge pull request #206 from PatchMon/post1-3-0
Made the setup.sh regenerate the .env variables
2025-10-22 14:33:18 +01:00
Muhammad Ibrahim
a8bd09be89 Made the setup.sh regenerate the .env variables 2025-10-22 14:15:49 +01:00
9 Technology Group LTD
deb6bed1a6 Merge pull request #204 from PatchMon/post1-3-0
Improving the setup.sh script to handle the nginx configuration changes on bare-metal type instances.

Also amended the env.example files to suit.
2025-10-22 13:47:03 +01:00
Muhammad Ibrahim
3ae8422487 modified nginx config for updates 2025-10-22 12:12:06 +01:00
Muhammad Ibrahim
c98203a997 Fixed bug on nginx configuration 2025-10-22 02:31:53 +01:00
Muhammad Ibrahim
37c8f5fa76 Modified setup.sh to handle the changes in version 1.3.0 2025-10-22 02:09:23 +01:00
115 changed files with 14976 additions and 5278 deletions


@@ -43,6 +43,8 @@ jobs:
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
platforms: linux/arm64
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -74,3 +76,4 @@ jobs:
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha,scope=${{ matrix.image }}
cache-to: type=gha,mode=max,scope=${{ matrix.image }}
provenance: false

agents/direct_host_auto_enroll.sh (new executable file, 270 lines)

@@ -0,0 +1,270 @@
#!/bin/sh
# PatchMon Direct Host Auto-Enrollment Script
# POSIX-compliant shell script (works with dash, ash, bash, etc.)
# Usage: curl -s "https://patchmon.example.com/api/v1/auto-enrollment/script?type=direct-host&token_key=KEY&token_secret=SECRET" | sh
set -e
SCRIPT_VERSION="1.0.0"
# =============================================================================
# PatchMon Direct Host Auto-Enrollment Script
# =============================================================================
# This script automatically enrolls the current host into PatchMon for patch
# management.
#
# Usage:
# curl -s "https://patchmon.example.com/api/v1/auto-enrollment/script?type=direct-host&token_key=KEY&token_secret=SECRET" | sh
#
# With custom friendly name:
# curl -s "https://patchmon.example.com/api/v1/auto-enrollment/script?type=direct-host&token_key=KEY&token_secret=SECRET" | FRIENDLY_NAME="My Server" sh
#
# Requirements:
# - Run as root or with sudo
# - Auto-enrollment token from PatchMon
# - Network access to PatchMon server
# =============================================================================
# ===== CONFIGURATION =====
PATCHMON_URL="${PATCHMON_URL:-https://patchmon.example.com}"
AUTO_ENROLLMENT_KEY="${AUTO_ENROLLMENT_KEY:-}"
AUTO_ENROLLMENT_SECRET="${AUTO_ENROLLMENT_SECRET:-}"
CURL_FLAGS="${CURL_FLAGS:--s}"
FORCE_INSTALL="${FORCE_INSTALL:-false}"
FRIENDLY_NAME="${FRIENDLY_NAME:-}" # Optional: Custom friendly name for the host
# ===== COLOR OUTPUT =====
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# ===== LOGGING FUNCTIONS =====
info() { printf "%b\n" "${GREEN}[INFO]${NC} $1"; }
warn() { printf "%b\n" "${YELLOW}[WARN]${NC} $1"; }
error() { printf "%b\n" "${RED}[ERROR]${NC} $1" >&2; exit 1; }
success() { printf "%b\n" "${GREEN}[SUCCESS]${NC} $1"; }
debug() { [ "${DEBUG:-false}" = "true" ] && printf "%b\n" "${BLUE}[DEBUG]${NC} $1" || true; }
# ===== BANNER =====
cat << "EOF"
╔═══════════════════════════════════════════════════════════════╗
║ ║
║ ____ _ _ __ __ ║
| _ \ __ _| |_ ___| |__ | \/ | ___ _ __ ║
| |_) / _` | __/ __| '_ \| |\/| |/ _ \| '_ \
| __/ (_| | || (__| | | | | | | (_) | | | |
|_| \__,_|\__\___|_| |_|_| |_|\___/|_| |_|
║ ║
║ Direct Host Auto-Enrollment Script ║
║ ║
╚═══════════════════════════════════════════════════════════════╝
EOF
echo ""
# ===== VALIDATION =====
info "Validating configuration..."
if [ -z "$AUTO_ENROLLMENT_KEY" ] || [ -z "$AUTO_ENROLLMENT_SECRET" ]; then
error "AUTO_ENROLLMENT_KEY and AUTO_ENROLLMENT_SECRET must be set"
fi
if [ -z "$PATCHMON_URL" ]; then
error "PATCHMON_URL must be set"
fi
# Check if running as root
if [ "$(id -u)" -ne 0 ]; then
error "This script must be run as root (use sudo)"
fi
# Check for required commands
for cmd in curl; do
if ! command -v $cmd >/dev/null 2>&1; then
error "Required command '$cmd' not found. Please install it first."
fi
done
info "Configuration validated successfully"
info "PatchMon Server: $PATCHMON_URL"
echo ""
# ===== GATHER HOST INFORMATION =====
info "Gathering host information..."
# Get hostname
hostname=$(hostname)
# Use FRIENDLY_NAME env var if provided, otherwise use hostname
if [ -n "$FRIENDLY_NAME" ]; then
friendly_name="$FRIENDLY_NAME"
info "Using custom friendly name: $friendly_name"
else
friendly_name="$hostname"
fi
# Try to get machine_id (optional, for tracking)
machine_id=""
if [ -f /etc/machine-id ]; then
machine_id=$(cat /etc/machine-id 2>/dev/null || echo "")
elif [ -f /var/lib/dbus/machine-id ]; then
machine_id=$(cat /var/lib/dbus/machine-id 2>/dev/null || echo "")
fi
# Get OS information
os_info="unknown"
if [ -f /etc/os-release ]; then
os_info=$(grep "^PRETTY_NAME=" /etc/os-release 2>/dev/null | cut -d'"' -f2 || echo "unknown")
fi
# Get IP address (first non-loopback)
ip_address=$(hostname -I 2>/dev/null | awk '{print $1}' || echo "unknown")
# Detect architecture
arch_raw=$(uname -m 2>/dev/null || echo "unknown")
case "$arch_raw" in
"x86_64")
architecture="amd64"
;;
"i386"|"i686")
architecture="386"
;;
"aarch64"|"arm64")
architecture="arm64"
;;
"armv7l"|"armv6l"|"arm")
architecture="arm"
;;
*)
warn " ⚠ Unknown architecture '$arch_raw', defaulting to amd64"
architecture="amd64"
;;
esac
info "Hostname: $hostname"
info "Friendly Name: $friendly_name"
info "IP Address: $ip_address"
info "OS: $os_info"
info "Architecture: $architecture"
if [ -n "$machine_id" ]; then
# POSIX-compliant substring (first 16 chars)
machine_id_short=$(printf "%.16s" "$machine_id")
info "Machine ID: ${machine_id_short}..."
else
info "Machine ID: (not available)"
fi
echo ""
# ===== CHECK IF AGENT ALREADY INSTALLED =====
info "Checking if agent is already configured..."
config_check=$(sh -c "
if [ -f /etc/patchmon/config.yml ] && [ -f /etc/patchmon/credentials.yml ]; then
if [ -f /usr/local/bin/patchmon-agent ]; then
# Try to ping using existing configuration
if /usr/local/bin/patchmon-agent ping >/dev/null 2>&1; then
echo 'ping_success'
else
echo 'ping_failed'
fi
else
echo 'binary_missing'
fi
else
echo 'not_configured'
fi
" 2>/dev/null || echo "error")
if [ "$config_check" = "ping_success" ]; then
success "Host already enrolled and agent ping successful - nothing to do"
exit 0
elif [ "$config_check" = "ping_failed" ]; then
warn "Agent configuration exists but ping failed - will reinstall"
elif [ "$config_check" = "binary_missing" ]; then
warn "Config exists but agent binary missing - will reinstall"
elif [ "$config_check" = "not_configured" ]; then
info "Agent not yet configured - proceeding with enrollment"
else
warn "Could not check agent status - proceeding with enrollment"
fi
echo ""
# ===== ENROLL HOST =====
info "Enrolling $friendly_name in PatchMon..."
# Build JSON payload
json_payload=$(cat <<EOF
{
"friendly_name": "$friendly_name",
"metadata": {
"hostname": "$hostname",
"ip_address": "$ip_address",
"os_info": "$os_info",
"architecture": "$architecture"
}
}
EOF
)
# Add machine_id if available
if [ -n "$machine_id" ]; then
json_payload=$(echo "$json_payload" | sed "s/\"friendly_name\"/\"machine_id\": \"$machine_id\",\n \"friendly_name\"/")
fi
response=$(curl $CURL_FLAGS -X POST \
-H "X-Auto-Enrollment-Key: $AUTO_ENROLLMENT_KEY" \
-H "X-Auto-Enrollment-Secret: $AUTO_ENROLLMENT_SECRET" \
-H "Content-Type: application/json" \
-d "$json_payload" \
"$PATCHMON_URL/api/v1/auto-enrollment/enroll" \
-w "\n%{http_code}" 2>&1)
http_code=$(echo "$response" | tail -n 1)
body=$(echo "$response" | sed '$d')
if [ "$http_code" = "201" ]; then
# Use grep and cut instead of jq since jq may not be installed
api_id=$(echo "$body" | grep -o '"api_id":"[^"]*' | cut -d'"' -f4 || echo "")
api_key=$(echo "$body" | grep -o '"api_key":"[^"]*' | cut -d'"' -f4 || echo "")
if [ -z "$api_id" ] || [ -z "$api_key" ]; then
error "Failed to parse API credentials from response"
fi
success "Host enrolled successfully: $api_id"
echo ""
# ===== INSTALL AGENT =====
info "Installing PatchMon agent..."
# Build install URL with force flag and architecture
install_url="$PATCHMON_URL/api/v1/hosts/install?arch=$architecture"
if [ "$FORCE_INSTALL" = "true" ]; then
install_url="$install_url&force=true"
info "Using force mode - will bypass broken packages"
fi
info "Using architecture: $architecture"
# Download and execute installation script
install_exit_code=0
install_output=$(curl $CURL_FLAGS \
-H "X-API-ID: $api_id" \
-H "X-API-KEY: $api_key" \
"$install_url" | sh 2>&1) || install_exit_code=$?
# Check both exit code AND success message in output
if [ "$install_exit_code" -eq 0 ] || echo "$install_output" | grep -q "PatchMon Agent installation completed successfully"; then
success "Agent installed successfully"
else
error "Failed to install agent (exit: $install_exit_code)"
fi
else
printf "%b\n" "${RED}[ERROR]${NC} Failed to enroll $friendly_name - HTTP $http_code" >&2
printf "%b\n" "Response: $body" >&2
exit 1
fi
echo ""
success "Auto-enrollment complete!"
exit 0

Binary file not shown.


@@ -1,7 +1,7 @@
#!/bin/bash
#!/bin/sh
# PatchMon Agent Installation Script
# Usage: curl -s {PATCHMON_URL}/api/v1/hosts/install -H "X-API-ID: {API_ID}" -H "X-API-KEY: {API_KEY}" | bash
# POSIX-compliant shell script (works with dash, ash, bash, etc.)
# Usage: curl -s {PATCHMON_URL}/api/v1/hosts/install -H "X-API-ID: {API_ID}" -H "X-API-KEY: {API_KEY}" | sh
set -e
@@ -19,65 +19,69 @@ NC='\033[0m' # No Color
# Functions
error() {
echo -e "${RED}ERROR: $1${NC}" >&2
printf "%b\n" "${RED}ERROR: $1${NC}" >&2
exit 1
}
info() {
echo -e "${BLUE} $1${NC}"
printf "%b\n" "${BLUE}INFO: $1${NC}"
}
success() {
echo -e "${GREEN} $1${NC}"
printf "%b\n" "${GREEN}SUCCESS: $1${NC}"
}
warning() {
echo -e "${YELLOW}⚠️ $1${NC}"
printf "%b\n" "${YELLOW}WARNING: $1${NC}"
}
# Check if running as root
if [[ $EUID -ne 0 ]]; then
if [ "$(id -u)" -ne 0 ]; then
error "This script must be run as root (use sudo)"
fi
# Verify system datetime and timezone
verify_datetime() {
info "🕐 Verifying system datetime and timezone..."
info "Verifying system datetime and timezone..."
# Get current system time
local system_time=$(date)
local timezone=$(timedatectl show --property=Timezone --value 2>/dev/null || echo "Unknown")
system_time=$(date)
timezone=$(timedatectl show --property=Timezone --value 2>/dev/null || echo "Unknown")
# Display current datetime info
echo ""
echo -e "${BLUE}📅 Current System Date/Time:${NC}"
printf "%b\n" "${BLUE}Current System Date/Time:${NC}"
echo " • Date/Time: $system_time"
echo " • Timezone: $timezone"
echo ""
# Check if we can read from stdin (interactive terminal)
if [[ -t 0 ]]; then
if [ -t 0 ]; then
# Interactive terminal - ask user
read -p "Does this date/time look correct to you? (y/N): " -r response
if [[ "$response" =~ ^[Yy]$ ]]; then
success "✅ Date/time verification passed"
printf "Does this date/time look correct to you? (y/N): "
read -r response
case "$response" in
[Yy]*)
success "Date/time verification passed"
echo ""
return 0
;;
*)
echo ""
return 0
else
printf "%b\n" "${RED}Date/time verification failed${NC}"
echo ""
echo -e "${RED}❌ Date/time verification failed${NC}"
echo ""
echo -e "${YELLOW}💡 Please fix the date/time and re-run the installation script:${NC}"
printf "%b\n" "${YELLOW}Please fix the date/time and re-run the installation script:${NC}"
echo " sudo timedatectl set-time 'YYYY-MM-DD HH:MM:SS'"
echo " sudo timedatectl set-timezone 'America/New_York' # or your timezone"
echo " sudo timedatectl list-timezones # to see available timezones"
echo ""
echo -e "${BLUE} After fixing the date/time, re-run this installation script.${NC}"
error "Installation cancelled - please fix date/time and re-run"
fi
printf "%b\n" "${BLUE}After fixing the date/time, re-run this installation script.${NC}"
error "Installation cancelled - please fix date/time and re-run"
;;
esac
else
# Non-interactive (piped from curl) - show warning and continue
echo -e "${YELLOW}⚠️ Non-interactive installation detected${NC}"
printf "%b\n" "${YELLOW}Non-interactive installation detected${NC}"
echo ""
echo "Please verify the date/time shown above is correct."
echo "If the date/time is incorrect, it may cause issues with:"
@@ -85,8 +89,8 @@ verify_datetime() {
echo " • Scheduled updates"
echo " • Data synchronization"
echo ""
echo -e "${GREEN}Continuing with installation...${NC}"
success "Date/time verification completed (assumed correct)"
printf "%b\n" "${GREEN}Continuing with installation...${NC}"
success "Date/time verification completed (assumed correct)"
echo ""
fi
}
@@ -121,9 +125,9 @@ cleanup_old_files
# Generate or retrieve machine ID
get_machine_id() {
# Try multiple sources for machine ID
if [[ -f /etc/machine-id ]]; then
if [ -f /etc/machine-id ]; then
cat /etc/machine-id
elif [[ -f /var/lib/dbus/machine-id ]]; then
elif [ -f /var/lib/dbus/machine-id ]; then
cat /var/lib/dbus/machine-id
else
# Fallback: generate from hardware info (less ideal but works)
@@ -132,45 +136,62 @@ get_machine_id() {
}
# Parse arguments from environment (passed via HTTP headers)
if [[ -z "$PATCHMON_URL" ]] || [[ -z "$API_ID" ]] || [[ -z "$API_KEY" ]]; then
if [ -z "$PATCHMON_URL" ] || [ -z "$API_ID" ] || [ -z "$API_KEY" ]; then
error "Missing required parameters. This script should be called via the PatchMon web interface."
fi
# Parse architecture parameter (default to amd64)
ARCHITECTURE="${ARCHITECTURE:-amd64}"
if [[ "$ARCHITECTURE" != "amd64" && "$ARCHITECTURE" != "386" && "$ARCHITECTURE" != "arm64" ]]; then
error "Invalid architecture '$ARCHITECTURE'. Must be one of: amd64, 386, arm64"
# Auto-detect architecture if not explicitly set
if [ -z "$ARCHITECTURE" ]; then
arch_raw=$(uname -m 2>/dev/null || echo "unknown")
# Map architecture to supported values
case "$arch_raw" in
"x86_64")
ARCHITECTURE="amd64"
;;
"i386"|"i686")
ARCHITECTURE="386"
;;
"aarch64"|"arm64")
ARCHITECTURE="arm64"
;;
"armv7l"|"armv6l"|"arm")
ARCHITECTURE="arm"
;;
*)
warning "Unknown architecture '$arch_raw', defaulting to amd64"
ARCHITECTURE="amd64"
;;
esac
fi
# Validate architecture
if [ "$ARCHITECTURE" != "amd64" ] && [ "$ARCHITECTURE" != "386" ] && [ "$ARCHITECTURE" != "arm64" ] && [ "$ARCHITECTURE" != "arm" ]; then
error "Invalid architecture '$ARCHITECTURE'. Must be one of: amd64, 386, arm64, arm"
fi
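The auto-detection above maps `uname -m` output to the architecture names the agent binaries use. The same case arms can be wrapped in a standalone function for a quick check (`map_arch` is a name introduced here for illustration, not part of the script):

```shell
# uname -m → agent-binary architecture, same mapping as the script above.
map_arch() {
  case "$1" in
    x86_64) echo "amd64" ;;
    i386|i686) echo "386" ;;
    aarch64|arm64) echo "arm64" ;;
    armv7l|armv6l|arm) echo "arm" ;;
    *) echo "amd64" ;;  # the script warns first, then defaults to amd64
  esac
}
echo "$(map_arch x86_64) $(map_arch aarch64) $(map_arch armv6l) $(map_arch riscv64)"
```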
# Check if --force flag is set (for bypassing broken packages)
FORCE_INSTALL="${FORCE_INSTALL:-false}"
if [[ "$*" == *"--force"* ]] || [[ "$FORCE_INSTALL" == "true" ]]; then
case "$*" in
*"--force"*) FORCE_INSTALL="true" ;;
esac
if [ "$FORCE_INSTALL" = "true" ]; then
FORCE_INSTALL="true"
warning "⚠️ Force mode enabled - will bypass broken packages"
warning "Force mode enabled - will bypass broken packages"
fi
# Get unique machine ID for this host
MACHINE_ID=$(get_machine_id)
export MACHINE_ID
info "🚀 Starting PatchMon Agent Installation..."
info "📋 Server: $PATCHMON_URL"
info "🔑 API ID: ${API_ID:0:16}..."
info "🆔 Machine ID: ${MACHINE_ID:0:16}..."
info "🏗️ Architecture: $ARCHITECTURE"
# Display diagnostic information
echo ""
echo -e "${BLUE}🔧 Installation Diagnostics:${NC}"
echo " • URL: $PATCHMON_URL"
echo " • CURL FLAGS: $CURL_FLAGS"
echo " • API ID: ${API_ID:0:16}..."
echo " • API Key: ${API_KEY:0:16}..."
echo " • Architecture: $ARCHITECTURE"
echo ""
info "Starting PatchMon Agent Installation..."
info "Server: $PATCHMON_URL"
info "API ID: $(echo "$API_ID" | cut -c1-16)..."
info "Machine ID: $(echo "$MACHINE_ID" | cut -c1-16)..."
info "Architecture: $ARCHITECTURE"
# Install required dependencies
info "📦 Installing required dependencies..."
info "Installing required dependencies..."
echo ""
# Function to check if a command exists
@@ -180,52 +201,56 @@ command_exists() {
# Function to install packages with error handling
install_apt_packages() {
local packages=("$@")
local missing_packages=()
# Space-separated list of packages
_packages="$*"
_missing_packages=""
# Check which packages are missing
for pkg in "${packages[@]}"; do
for pkg in $_packages; do
if ! command_exists "$pkg"; then
missing_packages+=("$pkg")
_missing_packages="$_missing_packages $pkg"
fi
done
if [ ${#missing_packages[@]} -eq 0 ]; then
# Trim leading space
_missing_packages=$(echo "$_missing_packages" | sed 's/^ //')
if [ -z "$_missing_packages" ]; then
success "All required packages are already installed"
return 0
fi
info "Need to install: ${missing_packages[*]}"
info "Need to install: $_missing_packages"
# Build apt-get command based on force mode
local apt_cmd="apt-get install ${missing_packages[*]} -y"
_apt_cmd="apt-get install $_missing_packages -y"
if [[ "$FORCE_INSTALL" == "true" ]]; then
if [ "$FORCE_INSTALL" = "true" ]; then
info "Using force mode - bypassing broken packages..."
apt_cmd="$apt_cmd -o APT::Get::Fix-Broken=false -o DPkg::Options::=\"--force-confold\" -o DPkg::Options::=\"--force-confdef\""
_apt_cmd="$_apt_cmd -o APT::Get::Fix-Broken=false -o DPkg::Options::=\"--force-confold\" -o DPkg::Options::=\"--force-confdef\""
fi
# Try to install packages
if eval "$apt_cmd" 2>&1 | tee /tmp/patchmon_apt_install.log; then
if eval "$_apt_cmd" 2>&1 | tee /tmp/patchmon_apt_install.log; then
success "Packages installed successfully"
return 0
else
warning "Package installation encountered issues, checking if required tools are available..."
# Verify critical dependencies are actually available
local all_ok=true
for pkg in "${packages[@]}"; do
_all_ok=true
for pkg in $_packages; do
if ! command_exists "$pkg"; then
if [[ "$FORCE_INSTALL" == "true" ]]; then
if [ "$FORCE_INSTALL" = "true" ]; then
error "Critical dependency '$pkg' is not available even with --force. Please install manually."
else
error "Critical dependency '$pkg' is not available. Try again with --force flag or install manually: apt-get install $pkg"
fi
all_ok=false
_all_ok=false
fi
done
if $all_ok; then
if $_all_ok; then
success "All required tools are available despite installation warnings"
return 0
else
@@ -234,7 +259,144 @@ install_apt_packages() {
fi
}
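The bash-to-POSIX conversion above replaces `local packages=("$@")` arrays with a plain space-separated string, since arrays are a bashism that dash and ash lack. The accumulation idiom, in isolation (package names here are illustrative; the last one is deliberately bogus so it always lands in the missing list):

```shell
# POSIX sketch of the missing-package accumulation used above: build a
# space-separated string instead of a bash array.
have() { command -v "$1" >/dev/null 2>&1; }
missing=""
for pkg in jq curl definitely-not-installed-xyz; do
  have "$pkg" || missing="$missing $pkg"
done
# Trim the leading space left by the first append, as the script does.
missing=$(echo "$missing" | sed 's/^ //')
echo "$missing"
```

Unquoted expansion of such a list (as in `apt-get install $_missing_packages`) is safe here because package names contain no whitespace or globs.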
# Detect package manager and install jq and curl
# Function to check and install packages for yum/dnf
install_yum_dnf_packages() {
_pkg_manager="$1"
shift
_packages="$*"
_missing_packages=""
# Check which packages are missing
for pkg in $_packages; do
if ! command_exists "$pkg"; then
_missing_packages="$_missing_packages $pkg"
fi
done
# Trim leading space
_missing_packages=$(echo "$_missing_packages" | sed 's/^ //')
if [ -z "$_missing_packages" ]; then
success "All required packages are already installed"
return 0
fi
info "Need to install: $_missing_packages"
if [ "$_pkg_manager" = "yum" ]; then
yum install -y $_missing_packages
else
dnf install -y $_missing_packages
fi
}
# Function to check and install packages for zypper
install_zypper_packages() {
_packages="$*"
_missing_packages=""
# Check which packages are missing
for pkg in $_packages; do
if ! command_exists "$pkg"; then
_missing_packages="$_missing_packages $pkg"
fi
done
# Trim leading space
_missing_packages=$(echo "$_missing_packages" | sed 's/^ //')
if [ -z "$_missing_packages" ]; then
success "All required packages are already installed"
return 0
fi
info "Need to install: $_missing_packages"
zypper install -y $_missing_packages
}
# Function to check and install packages for pacman
install_pacman_packages() {
_packages="$*"
_missing_packages=""
# Check which packages are missing
for pkg in $_packages; do
if ! command_exists "$pkg"; then
_missing_packages="$_missing_packages $pkg"
fi
done
# Trim leading space
_missing_packages=$(echo "$_missing_packages" | sed 's/^ //')
if [ -z "$_missing_packages" ]; then
success "All required packages are already installed"
return 0
fi
info "Need to install: $_missing_packages"
pacman -S --noconfirm $_missing_packages
}
# Function to check and install packages for apk
install_apk_packages() {
_packages="$*"
_missing_packages=""
# Check which packages are missing
for pkg in $_packages; do
if ! command_exists "$pkg"; then
_missing_packages="$_missing_packages $pkg"
fi
done
# Trim leading space
_missing_packages=$(echo "$_missing_packages" | sed 's/^ //')
if [ -z "$_missing_packages" ]; then
success "All required packages are already installed"
return 0
fi
info "Need to install: $_missing_packages"
# Update package index before installation
info "Updating package index..."
apk update -q || true
# Build apk command
_apk_cmd="apk add --no-cache $_missing_packages"
# Try to install packages
if eval "$_apk_cmd" 2>&1 | tee /tmp/patchmon_apk_install.log; then
success "Packages installed successfully"
return 0
else
warning "Package installation encountered issues, checking if required tools are available..."
# Verify critical dependencies are actually available
_all_ok=true
for pkg in $_packages; do
if ! command_exists "$pkg"; then
if [ "$FORCE_INSTALL" = "true" ]; then
error "Critical dependency '$pkg' is not available even with --force. Please install manually."
else
error "Critical dependency '$pkg' is not available. Try again with --force flag or install manually: apk add $pkg"
fi
_all_ok=false
fi
done
if $_all_ok; then
success "All required tools are available despite installation warnings"
return 0
else
return 1
fi
fi
}
# Detect package manager and install jq, curl, and bc
if command -v apt-get >/dev/null 2>&1; then
# Debian/Ubuntu
info "Detected apt-get (Debian/Ubuntu)"
@@ -242,10 +404,10 @@ if command -v apt-get >/dev/null 2>&1; then
# Check for broken packages
if dpkg -l | grep -q "^iH\|^iF" 2>/dev/null; then
if [[ "$FORCE_INSTALL" == "true" ]]; then
if [ "$FORCE_INSTALL" = "true" ]; then
warning "Detected broken packages on system - force mode will work around them"
else
warning "⚠️ Broken packages detected on system"
warning "Broken packages detected on system"
warning "If installation fails, retry with: curl -s {URL}/api/v1/hosts/install --force -H ..."
fi
fi
@@ -260,31 +422,31 @@ elif command -v yum >/dev/null 2>&1; then
info "Detected yum (CentOS/RHEL 7)"
echo ""
info "Installing jq, curl, and bc..."
yum install -y jq curl bc
install_yum_dnf_packages yum jq curl bc
elif command -v dnf >/dev/null 2>&1; then
# CentOS/RHEL 8+/Fedora
info "Detected dnf (CentOS/RHEL 8+/Fedora)"
echo ""
info "Installing jq, curl, and bc..."
dnf install -y jq curl bc
install_yum_dnf_packages dnf jq curl bc
elif command -v zypper >/dev/null 2>&1; then
# openSUSE
info "Detected zypper (openSUSE)"
echo ""
info "Installing jq, curl, and bc..."
zypper install -y jq curl bc
install_zypper_packages jq curl bc
elif command -v pacman >/dev/null 2>&1; then
# Arch Linux
info "Detected pacman (Arch Linux)"
echo ""
info "Installing jq, curl, and bc..."
pacman -S --noconfirm jq curl bc
install_pacman_packages jq curl bc
elif command -v apk >/dev/null 2>&1; then
# Alpine Linux
info "Detected apk (Alpine Linux)"
echo ""
info "Installing jq, curl, and bc..."
apk add --no-cache jq curl bc
install_apk_packages jq curl bc
else
warning "Could not detect package manager. Please ensure 'jq', 'curl', and 'bc' are installed manually."
fi
@@ -294,57 +456,88 @@ success "Dependencies installation completed"
echo ""
# Step 1: Handle existing configuration directory
info "📁 Setting up configuration directory..."
info "Setting up configuration directory..."
# Check if configuration directory already exists
if [[ -d "/etc/patchmon" ]]; then
warning "⚠️ Configuration directory already exists at /etc/patchmon"
warning "⚠️ Preserving existing configuration files"
if [ -d "/etc/patchmon" ]; then
warning "Configuration directory already exists at /etc/patchmon"
warning "Preserving existing configuration files"
# List existing files for user awareness
info "📋 Existing files in /etc/patchmon:"
info "Existing files in /etc/patchmon:"
ls -la /etc/patchmon/ 2>/dev/null | grep -v "^total" | while read -r line; do
echo " $line"
done
else
info "📁 Creating new configuration directory..."
info "Creating new configuration directory..."
mkdir -p /etc/patchmon
fi
# Check if agent is already configured and working (before we overwrite anything)
info "Checking if agent is already configured..."
if [ -f /etc/patchmon/config.yml ] && [ -f /etc/patchmon/credentials.yml ]; then
if [ -f /usr/local/bin/patchmon-agent ]; then
info "Found existing agent configuration"
info "Testing existing configuration with ping..."
if /usr/local/bin/patchmon-agent ping >/dev/null 2>&1; then
success "Agent is already configured and ping successful"
info "Existing configuration is working - skipping installation"
info ""
info "If you want to reinstall, remove the configuration files first:"
info " sudo rm -f /etc/patchmon/config.yml /etc/patchmon/credentials.yml"
echo ""
exit 0
else
warning "Agent configuration exists but ping failed"
warning "Will move existing configuration and reinstall"
echo ""
fi
else
warning "Configuration files exist but agent binary is missing"
warning "Will move existing configuration and reinstall"
echo ""
fi
else
success "Agent not yet configured - proceeding with installation"
echo ""
fi
# Step 2: Create configuration files
info "🔐 Creating configuration files..."
info "Creating configuration files..."
# Check if config file already exists
if [[ -f "/etc/patchmon/config.yml" ]]; then
warning "⚠️ Config file already exists at /etc/patchmon/config.yml"
warning "⚠️ Moving existing file out of the way for fresh installation"
if [ -f "/etc/patchmon/config.yml" ]; then
warning "Config file already exists at /etc/patchmon/config.yml"
warning "Moving existing file out of the way for fresh installation"
# Clean up old config backups (keep only last 3)
ls -t /etc/patchmon/config.yml.backup.* 2>/dev/null | tail -n +4 | xargs -r rm -f
# Move existing file out of the way
mv /etc/patchmon/config.yml /etc/patchmon/config.yml.backup.$(date +%Y%m%d_%H%M%S)
info "📋 Moved existing config to: /etc/patchmon/config.yml.backup.$(date +%Y%m%d_%H%M%S)"
info "Moved existing config to: /etc/patchmon/config.yml.backup.$(date +%Y%m%d_%H%M%S)"
fi
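The backup-rotation one-liner above deserves a closer look: `ls -t` sorts newest-first, `tail -n +4` selects everything past the third entry, and `xargs -r` (a GNU extension) skips `rm` when nothing matched. A sketch against a scratch directory, assuming GNU coreutils:

```shell
# "Keep only the last 3 backups" rotation, run in a throwaway directory.
tmp=$(mktemp -d)
for i in 1 2 3 4 5; do
  touch "$tmp/config.yml.backup.$i"
  sleep 0.01   # distinct mtimes so ls -t ordering is deterministic
done
ls -t "$tmp"/config.yml.backup.* | tail -n +4 | xargs -r rm -f
ls "$tmp"/config.yml.backup.*   # only the 3 newest remain
```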
# Check if credentials file already exists
if [[ -f "/etc/patchmon/credentials.yml" ]]; then
warning "⚠️ Credentials file already exists at /etc/patchmon/credentials.yml"
warning "⚠️ Moving existing file out of the way for fresh installation"
if [ -f "/etc/patchmon/credentials.yml" ]; then
warning "Credentials file already exists at /etc/patchmon/credentials.yml"
warning "Moving existing file out of the way for fresh installation"
# Clean up old credential backups (keep only last 3)
ls -t /etc/patchmon/credentials.yml.backup.* 2>/dev/null | tail -n +4 | xargs -r rm -f
# Move existing file out of the way
mv /etc/patchmon/credentials.yml /etc/patchmon/credentials.yml.backup.$(date +%Y%m%d_%H%M%S)
info "📋 Moved existing credentials to: /etc/patchmon/credentials.yml.backup.$(date +%Y%m%d_%H%M%S)"
info "Moved existing credentials to: /etc/patchmon/credentials.yml.backup.$(date +%Y%m%d_%H%M%S)"
fi
# Clean up old credentials file if it exists (from previous installations)
if [[ -f "/etc/patchmon/credentials" ]]; then
warning "⚠️ Found old credentials file, removing it..."
if [ -f "/etc/patchmon/credentials" ]; then
warning "Found old credentials file, removing it..."
rm -f /etc/patchmon/credentials
info "📋 Removed old credentials file"
info "Removed old credentials file"
fi
# Create main config file
@@ -356,6 +549,7 @@ api_version: "v1"
credentials_file: "/etc/patchmon/credentials.yml"
log_file: "/etc/patchmon/logs/patchmon-agent.log"
log_level: "info"
skip_ssl_verify: ${SKIP_SSL_VERIFY:-false}
EOF
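The new `skip_ssl_verify` config line relies on POSIX parameter expansion: `${SKIP_SSL_VERIFY:-false}` yields `false` when the variable is unset or empty, and passes any explicit value through. A quick sketch of what ends up in config.yml under each condition:

```shell
# ${VAR:-default} expansion as used in the config heredoc above.
unset SKIP_SSL_VERIFY
line1="skip_ssl_verify: ${SKIP_SSL_VERIFY:-false}"   # unset → falls back to false
SKIP_SSL_VERIFY=true
line2="skip_ssl_verify: ${SKIP_SSL_VERIFY:-false}"   # set → value passes through
echo "$line1"
echo "$line2"
```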
# Create credentials file
@@ -370,29 +564,29 @@ chmod 600 /etc/patchmon/config.yml
chmod 600 /etc/patchmon/credentials.yml
# Step 3: Download the PatchMon agent binary using API credentials
info "📥 Downloading PatchMon agent binary..."
info "Downloading PatchMon agent binary..."
# Determine the binary filename based on architecture
BINARY_NAME="patchmon-agent-linux-${ARCHITECTURE}"
# Check if agent binary already exists
if [[ -f "/usr/local/bin/patchmon-agent" ]]; then
warning "⚠️ Agent binary already exists at /usr/local/bin/patchmon-agent"
warning "⚠️ Moving existing file out of the way for fresh installation"
if [ -f "/usr/local/bin/patchmon-agent" ]; then
warning "Agent binary already exists at /usr/local/bin/patchmon-agent"
warning "Moving existing file out of the way for fresh installation"
# Clean up old agent backups (keep only last 3)
ls -t /usr/local/bin/patchmon-agent.backup.* 2>/dev/null | tail -n +4 | xargs -r rm -f
# Move existing file out of the way
mv /usr/local/bin/patchmon-agent /usr/local/bin/patchmon-agent.backup.$(date +%Y%m%d_%H%M%S)
info "📋 Moved existing agent to: /usr/local/bin/patchmon-agent.backup.$(date +%Y%m%d_%H%M%S)"
info "Moved existing agent to: /usr/local/bin/patchmon-agent.backup.$(date +%Y%m%d_%H%M%S)"
fi
# Clean up old shell script if it exists (from previous installations)
if [[ -f "/usr/local/bin/patchmon-agent.sh" ]]; then
warning "⚠️ Found old shell script agent, removing it..."
if [ -f "/usr/local/bin/patchmon-agent.sh" ]; then
warning "Found old shell script agent, removing it..."
rm -f /usr/local/bin/patchmon-agent.sh
info "📋 Removed old shell script agent"
info "Removed old shell script agent"
fi
# Download the binary
@@ -406,83 +600,52 @@ chmod +x /usr/local/bin/patchmon-agent
# Get the agent version from the binary
AGENT_VERSION=$(/usr/local/bin/patchmon-agent version 2>/dev/null || echo "Unknown")
info "📋 Agent version: $AGENT_VERSION"
info "Agent version: $AGENT_VERSION"
# Handle existing log files and create log directory
info "📁 Setting up log directory..."
info "Setting up log directory..."
# Create log directory if it doesn't exist
mkdir -p /etc/patchmon/logs
# Handle existing log files
if [[ -f "/etc/patchmon/logs/patchmon-agent.log" ]]; then
warning "⚠️ Existing log file found at /etc/patchmon/logs/patchmon-agent.log"
warning "⚠️ Rotating log file for fresh start"
if [ -f "/etc/patchmon/logs/patchmon-agent.log" ]; then
warning "Existing log file found at /etc/patchmon/logs/patchmon-agent.log"
warning "Rotating log file for fresh start"
# Rotate the log file
mv /etc/patchmon/logs/patchmon-agent.log /etc/patchmon/logs/patchmon-agent.log.old.$(date +%Y%m%d_%H%M%S)
info "📋 Log file rotated to: /etc/patchmon/logs/patchmon-agent.log.old.$(date +%Y%m%d_%H%M%S)"
info "Log file rotated to: /etc/patchmon/logs/patchmon-agent.log.old.$(date +%Y%m%d_%H%M%S)"
fi
# Step 4: Test the configuration
# Check if this machine is already enrolled
info "🔍 Checking if machine is already enrolled..."
existing_check=$(curl $CURL_FLAGS -s -X POST \
-H "X-API-ID: $API_ID" \
-H "X-API-KEY: $API_KEY" \
-H "Content-Type: application/json" \
-d "{\"machine_id\": \"$MACHINE_ID\"}" \
"$PATCHMON_URL/api/v1/hosts/check-machine-id" \
-w "\n%{http_code}" 2>&1)
http_code=$(echo "$existing_check" | tail -n 1)
response_body=$(echo "$existing_check" | sed '$d')
if [[ "$http_code" == "200" ]]; then
already_enrolled=$(echo "$response_body" | jq -r '.exists' 2>/dev/null || echo "false")
if [[ "$already_enrolled" == "true" ]]; then
warning "⚠️ This machine is already enrolled in PatchMon"
info "Machine ID: $MACHINE_ID"
info "Existing host: $(echo "$response_body" | jq -r '.host.friendly_name' 2>/dev/null)"
info ""
info "The agent will be reinstalled/updated with existing credentials."
echo ""
else
success "✅ Machine not yet enrolled - proceeding with installation"
fi
fi
info "🧪 Testing API credentials and connectivity..."
info "Testing API credentials and connectivity..."
if /usr/local/bin/patchmon-agent ping; then
success "TEST: API credentials are valid and server is reachable"
success "TEST: API credentials are valid and server is reachable"
else
error "Failed to validate API credentials or reach server"
error "Failed to validate API credentials or reach server"
fi
# Step 5: Send initial data and setup systemd service
info "📊 Sending initial package data to server..."
if /usr/local/bin/patchmon-agent report; then
success "✅ UPDATE: Initial package data sent successfully"
else
warning "⚠️ Failed to send initial data. You can retry later with: /usr/local/bin/patchmon-agent report"
fi
# Step 6: Setup systemd service for WebSocket connection
info "🔧 Setting up systemd service..."
# Stop and disable existing service if it exists
if systemctl is-active --quiet patchmon-agent.service 2>/dev/null; then
warning "⚠️ Stopping existing PatchMon agent service..."
systemctl stop patchmon-agent.service
fi
if systemctl is-enabled --quiet patchmon-agent.service 2>/dev/null; then
warning "⚠️ Disabling existing PatchMon agent service..."
systemctl disable patchmon-agent.service
fi
# Create systemd service file
cat > /etc/systemd/system/patchmon-agent.service << EOF
# Step 5: Setup service for WebSocket connection
# Note: The service will automatically send an initial report on startup (see serve.go)
# Detect init system and create appropriate service
if command -v systemctl >/dev/null 2>&1; then
# Systemd is available
info "Setting up systemd service..."
# Stop and disable existing service if it exists
if systemctl is-active --quiet patchmon-agent.service 2>/dev/null; then
warning "Stopping existing PatchMon agent service..."
systemctl stop patchmon-agent.service
fi
if systemctl is-enabled --quiet patchmon-agent.service 2>/dev/null; then
warning "Disabling existing PatchMon agent service..."
systemctl disable patchmon-agent.service
fi
# Create systemd service file
cat > /etc/systemd/system/patchmon-agent.service << EOF
[Unit]
Description=PatchMon Agent Service
After=network.target
@@ -504,59 +667,154 @@ SyslogIdentifier=patchmon-agent
[Install]
WantedBy=multi-user.target
EOF
# Clean up old crontab entries if they exist (from previous installations)
if crontab -l 2>/dev/null | grep -q "patchmon-agent"; then
warning "Found old crontab entries, removing them..."
crontab -l 2>/dev/null | grep -v "patchmon-agent" | crontab -
info "Removed old crontab entries"
fi
# Reload systemd and enable/start the service
systemctl daemon-reload
systemctl enable patchmon-agent.service
systemctl start patchmon-agent.service
# Check if service started successfully
if systemctl is-active --quiet patchmon-agent.service; then
success "PatchMon Agent service started successfully"
info "WebSocket connection established"
else
warning "Service may have failed to start. Check status with: systemctl status patchmon-agent"
fi
SERVICE_TYPE="systemd"
elif [ -d /etc/init.d ] && command -v rc-service >/dev/null 2>&1; then
# OpenRC is available (Alpine Linux)
info "Setting up OpenRC service..."
# Stop and disable existing service if it exists
if rc-service patchmon-agent status >/dev/null 2>&1; then
warning "Stopping existing PatchMon agent service..."
rc-service patchmon-agent stop
fi
if rc-update show default 2>/dev/null | grep -q "patchmon-agent"; then
warning "Disabling existing PatchMon agent service..."
rc-update del patchmon-agent default
fi
# Create OpenRC service file
cat > /etc/init.d/patchmon-agent << 'EOF'
#!/sbin/openrc-run
# Clean up old crontab entries if they exist (from previous installations)
if crontab -l 2>/dev/null | grep -q "patchmon-agent"; then
warning "⚠️ Found old crontab entries, removing them..."
crontab -l 2>/dev/null | grep -v "patchmon-agent" | crontab -
info "📋 Removed old crontab entries"
fi
name="patchmon-agent"
description="PatchMon Agent Service"
command="/usr/local/bin/patchmon-agent"
command_args="serve"
command_user="root"
pidfile="/var/run/patchmon-agent.pid"
command_background="yes"
working_dir="/etc/patchmon"
# Reload systemd and enable/start the service
systemctl daemon-reload
systemctl enable patchmon-agent.service
systemctl start patchmon-agent.service
# Check if service started successfully
if systemctl is-active --quiet patchmon-agent.service; then
success "✅ PatchMon Agent service started successfully"
info "🔗 WebSocket connection established"
depend() {
need net
after net
}
EOF
chmod +x /etc/init.d/patchmon-agent
# Clean up old crontab entries if they exist (from previous installations)
if crontab -l 2>/dev/null | grep -q "patchmon-agent"; then
warning "Found old crontab entries, removing them..."
crontab -l 2>/dev/null | grep -v "patchmon-agent" | crontab -
info "Removed old crontab entries"
fi
# Enable and start the service
rc-update add patchmon-agent default
rc-service patchmon-agent start
# Check if service started successfully
if rc-service patchmon-agent status >/dev/null 2>&1; then
success "PatchMon Agent service started successfully"
info "WebSocket connection established"
else
warning "Service may have failed to start. Check status with: rc-service patchmon-agent status"
fi
SERVICE_TYPE="openrc"
else
warning "⚠️ Service may have failed to start. Check status with: systemctl status patchmon-agent"
# No init system detected, use crontab as fallback
warning "No init system detected (systemd or OpenRC). Using crontab for service management."
# Clean up old crontab entries if they exist
if crontab -l 2>/dev/null | grep -q "patchmon-agent"; then
warning "Found old crontab entries, removing them..."
crontab -l 2>/dev/null | grep -v "patchmon-agent" | crontab -
info "Removed old crontab entries"
fi
# Add crontab entry to run the agent
(crontab -l 2>/dev/null; echo "@reboot /usr/local/bin/patchmon-agent serve >/dev/null 2>&1") | crontab -
info "Added crontab entry for PatchMon agent"
# Start the agent manually
/usr/local/bin/patchmon-agent serve >/dev/null 2>&1 &
success "PatchMon Agent started in background"
info "WebSocket connection established"
SERVICE_TYPE="crontab"
fi
# Installation complete
success "🎉 PatchMon Agent installation completed successfully!"
success "PatchMon Agent installation completed successfully!"
echo ""
echo -e "${GREEN}📋 Installation Summary:${NC}"
printf "%b\n" "${GREEN}Installation Summary:${NC}"
echo " • Configuration directory: /etc/patchmon"
echo " • Agent binary installed: /usr/local/bin/patchmon-agent"
echo " • Architecture: $ARCHITECTURE"
echo " • Dependencies installed: jq, curl, bc"
echo " • Systemd service configured and running"
if [ "$SERVICE_TYPE" = "systemd" ]; then
echo " • Systemd service configured and running"
elif [ "$SERVICE_TYPE" = "openrc" ]; then
echo " • OpenRC service configured and running"
else
echo " • Service configured via crontab"
fi
echo " • API credentials configured and tested"
echo " • WebSocket connection established"
echo " • Logs directory: /etc/patchmon/logs"
# Check for moved files and show them
MOVED_FILES=$(ls /etc/patchmon/credentials.yml.backup.* /etc/patchmon/config.yml.backup.* /usr/local/bin/patchmon-agent.backup.* /etc/patchmon/logs/patchmon-agent.log.old.* /usr/local/bin/patchmon-agent.sh.backup.* /etc/patchmon/credentials.backup.* 2>/dev/null || true)
if [[ -n "$MOVED_FILES" ]]; then
if [ -n "$MOVED_FILES" ]; then
echo ""
echo -e "${YELLOW}📋 Files Moved for Fresh Installation:${NC}"
printf "%b\n" "${YELLOW}Files Moved for Fresh Installation:${NC}"
echo "$MOVED_FILES" | while read -r moved_file; do
echo "$moved_file"
done
echo ""
echo -e "${BLUE}💡 Note: Old files are automatically cleaned up (keeping last 3)${NC}"
printf "%b\n" "${BLUE}Note: Old files are automatically cleaned up (keeping last 3)${NC}"
fi
echo ""
echo -e "${BLUE}🔧 Management Commands:${NC}"
printf "%b\n" "${BLUE}Management Commands:${NC}"
echo " • Test connection: /usr/local/bin/patchmon-agent ping"
echo " • Manual report: /usr/local/bin/patchmon-agent report"
echo " • Check status: /usr/local/bin/patchmon-agent diagnostics"
echo " • Service status: systemctl status patchmon-agent"
echo " • Service logs: journalctl -u patchmon-agent -f"
echo " • Restart service: systemctl restart patchmon-agent"
if [ "$SERVICE_TYPE" = "systemd" ]; then
echo " • Service status: systemctl status patchmon-agent"
echo " • Service logs: journalctl -u patchmon-agent -f"
echo " • Restart service: systemctl restart patchmon-agent"
elif [ "$SERVICE_TYPE" = "openrc" ]; then
echo " • Service status: rc-service patchmon-agent status"
echo " • Service logs: tail -f /etc/patchmon/logs/patchmon-agent.log"
echo " • Restart service: rc-service patchmon-agent restart"
else
echo " • Service logs: tail -f /etc/patchmon/logs/patchmon-agent.log"
echo " • Restart service: pkill -f 'patchmon-agent serve' && /usr/local/bin/patchmon-agent serve &"
fi
echo ""
success "Your system is now being monitored by PatchMon!"

View File

@@ -1,7 +1,10 @@
#!/bin/sh
# PatchMon Agent Removal Script
# POSIX-compliant shell script (works with dash, ash, bash, etc.)
# Usage: curl -s {PATCHMON_URL}/api/v1/hosts/remove | sudo sh
# curl -s {PATCHMON_URL}/api/v1/hosts/remove | sudo REMOVE_BACKUPS=1 sh
# curl -s {PATCHMON_URL}/api/v1/hosts/remove | sudo SILENT=1 sh
# This script completely removes PatchMon from the system
set -e
@@ -11,80 +14,271 @@ set -e
# future (left for consistency with install script).
CURL_FLAGS=""
# Detect if running in silent mode (only with explicit SILENT env var)
SILENT_MODE=0
if [ -n "$SILENT" ]; then
SILENT_MODE=1
fi
# Check if backup files should be removed (default: preserve for safety)
# Usage: REMOVE_BACKUPS=1 when piping the script
REMOVE_BACKUPS="${REMOVE_BACKUPS:-0}"
# Colors for output (disabled in silent mode)
if [ "$SILENT_MODE" -eq 0 ]; then
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
else
RED=''
GREEN=''
YELLOW=''
BLUE=''
NC=''
fi
# Functions
error() {
printf "%b\n" "${RED}❌ ERROR: $1${NC}" >&2
exit 1
}
info() {
if [ "$SILENT_MODE" -eq 0 ]; then
printf "%b\n" "${BLUE} $1${NC}"
fi
}
success() {
if [ "$SILENT_MODE" -eq 0 ]; then
printf "%b\n" "${GREEN}$1${NC}"
fi
}
warning() {
if [ "$SILENT_MODE" -eq 0 ]; then
printf "%b\n" "${YELLOW}⚠️ $1${NC}"
fi
}
# Check if running as root
if [ "$(id -u)" -ne 0 ]; then
error "This script must be run as root (use sudo)"
fi
info "🗑️ Starting PatchMon Agent Removal..."
[ "$SILENT_MODE" -eq 0 ] && echo ""
# Step 1: Stop systemd/OpenRC service if it exists
info "🛑 Stopping PatchMon service..."
SERVICE_STOPPED=0
# Check for systemd service
if command -v systemctl >/dev/null 2>&1; then
info "📋 Checking systemd service status..."
# Check if service is active
if systemctl is-active --quiet patchmon-agent.service 2>/dev/null; then
SERVICE_STATUS=$(systemctl is-active patchmon-agent.service 2>/dev/null || echo "unknown")
warning "Service is active (status: $SERVICE_STATUS). Stopping it now..."
if systemctl stop patchmon-agent.service 2>/dev/null; then
success "✓ Service stopped successfully"
SERVICE_STOPPED=1
else
warning "✗ Failed to stop service (continuing anyway...)"
fi
# Verify it stopped
sleep 1
if systemctl is-active --quiet patchmon-agent.service 2>/dev/null; then
warning "⚠️ Service is STILL ACTIVE after stop command!"
else
info "✓ Verified: Service is no longer active"
fi
else
info "Service is not active"
fi
# Check if service is enabled
if systemctl is-enabled --quiet patchmon-agent.service 2>/dev/null; then
ENABLED_STATUS=$(systemctl is-enabled patchmon-agent.service 2>/dev/null || echo "unknown")
warning "Service is enabled (status: $ENABLED_STATUS). Disabling it now..."
if systemctl disable patchmon-agent.service 2>/dev/null; then
success "✓ Service disabled successfully"
else
warning "✗ Failed to disable service (may already be disabled)"
fi
else
info "Service is not enabled"
fi
# Check for service file
if [ -f "/etc/systemd/system/patchmon-agent.service" ]; then
warning "Found service file: /etc/systemd/system/patchmon-agent.service"
info "Removing service file..."
if rm -f /etc/systemd/system/patchmon-agent.service 2>/dev/null; then
success "✓ Service file removed"
else
warning "✗ Failed to remove service file (check permissions)"
fi
info "Reloading systemd daemon..."
if systemctl daemon-reload 2>/dev/null; then
success "✓ Systemd daemon reloaded"
else
warning "✗ Failed to reload systemd daemon"
fi
SERVICE_STOPPED=1
# Verify the file is gone
if [ -f "/etc/systemd/system/patchmon-agent.service" ]; then
warning "⚠️ Service file STILL EXISTS after removal!"
else
info "✓ Verified: Service file removed"
fi
else
info "Service file not found at /etc/systemd/system/patchmon-agent.service"
fi
# Final status check
info "📊 Final systemd status check..."
FINAL_STATUS=$(systemctl is-active patchmon-agent.service 2>&1 || echo "not-found")
info "Service status: $FINAL_STATUS"
fi
# Check for OpenRC service (Alpine Linux)
if command -v rc-service >/dev/null 2>&1; then
if rc-service patchmon-agent status >/dev/null 2>&1; then
warning "Stopping OpenRC service..."
rc-service patchmon-agent stop || true
SERVICE_STOPPED=1
fi
if rc-update show default 2>/dev/null | grep -q "patchmon-agent"; then
warning "Removing from runlevel..."
rc-update del patchmon-agent default || true
fi
if [ -f "/etc/init.d/patchmon-agent" ]; then
warning "Removing OpenRC service file..."
rm -f /etc/init.d/patchmon-agent
success "OpenRC service removed"
SERVICE_STOPPED=1
fi
fi
# Stop any remaining running processes (legacy or manual starts)
info "🔍 Checking for running PatchMon processes..."
if pgrep -f "patchmon-agent" >/dev/null; then
PROCESS_COUNT=$(pgrep -f "patchmon-agent" | wc -l | tr -d ' ')
warning "Found $PROCESS_COUNT running PatchMon process(es)"
# Show process details
if [ "$SILENT_MODE" -eq 0 ]; then
info "Process details:"
ps aux | grep "[p]atchmon-agent" | while IFS= read -r line; do
echo " $line"
done
fi
warning "Sending SIGTERM to all patchmon-agent processes..."
if pkill -f "patchmon-agent" 2>/dev/null; then
success "✓ Sent SIGTERM signal"
else
warning "Failed to send SIGTERM (processes may have already stopped)"
fi
sleep 2
# Check if processes still exist
if pgrep -f "patchmon-agent" >/dev/null; then
REMAINING=$(pgrep -f "patchmon-agent" | wc -l | tr -d ' ')
warning "⚠️ $REMAINING process(es) still running! Sending SIGKILL..."
pkill -9 -f "patchmon-agent" 2>/dev/null || true
sleep 1
if pgrep -f "patchmon-agent" >/dev/null; then
warning "⚠️ CRITICAL: Processes still running after SIGKILL!"
else
success "✓ All processes terminated"
fi
else
success "✓ All processes stopped successfully"
fi
SERVICE_STOPPED=1
else
info "No running PatchMon processes found"
fi
if [ "$SERVICE_STOPPED" -eq 1 ]; then
success "PatchMon service/processes stopped"
else
info "No running PatchMon service or processes found"
fi
# Step 2: Remove crontab entries
info "📅 Removing PatchMon crontab entries..."
if crontab -l 2>/dev/null | grep -q "patchmon-agent"; then
warning "Found PatchMon crontab entries, removing them..."
crontab -l 2>/dev/null | grep -v "patchmon-agent" | crontab -
success "Crontab entries removed"
else
info "No PatchMon crontab entries found"
fi
# Step 3: Remove agent binaries and scripts
info "📄 Removing agent binaries and scripts..."
AGENTS_REMOVED=0
# Remove Go agent binary
if [ -f "/usr/local/bin/patchmon-agent" ]; then
warning "Removing Go agent binary: /usr/local/bin/patchmon-agent"
rm -f /usr/local/bin/patchmon-agent
AGENTS_REMOVED=1
fi
# Remove legacy shell script agent
if [ -f "/usr/local/bin/patchmon-agent.sh" ]; then
warning "Removing legacy agent script: /usr/local/bin/patchmon-agent.sh"
rm -f /usr/local/bin/patchmon-agent.sh
AGENTS_REMOVED=1
fi
# Remove backup files for Go agent
if ls /usr/local/bin/patchmon-agent.backup.* >/dev/null 2>&1; then
warning "Removing Go agent backup files..."
rm -f /usr/local/bin/patchmon-agent.backup.*
AGENTS_REMOVED=1
fi
# Remove backup files for legacy shell script
if ls /usr/local/bin/patchmon-agent.sh.backup.* >/dev/null 2>&1; then
warning "Removing legacy agent backup files..."
rm -f /usr/local/bin/patchmon-agent.sh.backup.*
AGENTS_REMOVED=1
fi
if [ "$AGENTS_REMOVED" -eq 1 ]; then
success "Agent binaries and scripts removed"
else
info "No agent binaries or scripts found"
fi
# Step 4: Remove configuration directory and files
info "📁 Removing configuration files..."
if [ -d "/etc/patchmon" ]; then
warning "Removing configuration directory: /etc/patchmon"
# Show what's being removed (only in verbose mode)
if [ "$SILENT_MODE" -eq 0 ]; then
info "📋 Files in /etc/patchmon:"
ls -la /etc/patchmon/ 2>/dev/null | grep -v "^total" | while read -r line; do
echo " $line"
done
fi
# Remove the directory
rm -rf /etc/patchmon
@@ -95,7 +289,7 @@ fi
# Step 5: Remove log files
info "📝 Removing log files..."
if [ -f "/var/log/patchmon-agent.log" ]; then
warning "Removing log file: /var/log/patchmon-agent.log"
rm -f /var/log/patchmon-agent.log
success "Log file removed"
@@ -103,120 +297,164 @@ else
info "Log file not found"
fi
# Step 6: Clean up backup files
info "🧹 Checking backup files..."
BACKUP_COUNT=0
BACKUP_REMOVED=0
if [ "$REMOVE_BACKUPS" -eq 1 ]; then
info "Removing backup files (REMOVE_BACKUPS=1)..."
# Remove credential backups (already removed with /etc/patchmon directory, but check anyway)
if ls /etc/patchmon/credentials.backup.* >/dev/null 2>&1; then
CRED_BACKUPS=$(ls /etc/patchmon/credentials.backup.* 2>/dev/null | wc -l | tr -d ' ')
warning "Removing $CRED_BACKUPS credential backup file(s)..."
rm -f /etc/patchmon/credentials.backup.*
BACKUP_COUNT=$((BACKUP_COUNT + CRED_BACKUPS))
BACKUP_REMOVED=1
fi
# Remove config backups (already removed with /etc/patchmon directory, but check anyway)
if ls /etc/patchmon/*.backup.* >/dev/null 2>&1; then
CONFIG_BACKUPS=$(ls /etc/patchmon/*.backup.* 2>/dev/null | wc -l | tr -d ' ')
warning "Removing $CONFIG_BACKUPS config backup file(s)..."
rm -f /etc/patchmon/*.backup.*
BACKUP_COUNT=$((BACKUP_COUNT + CONFIG_BACKUPS))
BACKUP_REMOVED=1
fi
# Remove Go agent backups
if ls /usr/local/bin/patchmon-agent.backup.* >/dev/null 2>&1; then
GO_AGENT_BACKUPS=$(ls /usr/local/bin/patchmon-agent.backup.* 2>/dev/null | wc -l | tr -d ' ')
warning "Removing $GO_AGENT_BACKUPS Go agent backup file(s)..."
rm -f /usr/local/bin/patchmon-agent.backup.*
BACKUP_COUNT=$((BACKUP_COUNT + GO_AGENT_BACKUPS))
BACKUP_REMOVED=1
fi
# Remove legacy shell agent backups
if ls /usr/local/bin/patchmon-agent.sh.backup.* >/dev/null 2>&1; then
SHELL_AGENT_BACKUPS=$(ls /usr/local/bin/patchmon-agent.sh.backup.* 2>/dev/null | wc -l | tr -d ' ')
warning "Removing $SHELL_AGENT_BACKUPS legacy agent backup file(s)..."
rm -f /usr/local/bin/patchmon-agent.sh.backup.*
BACKUP_COUNT=$((BACKUP_COUNT + SHELL_AGENT_BACKUPS))
BACKUP_REMOVED=1
fi
# Remove log backups
if ls /var/log/patchmon-agent.log.old.* >/dev/null 2>&1; then
LOG_BACKUPS=$(ls /var/log/patchmon-agent.log.old.* 2>/dev/null | wc -l | tr -d ' ')
warning "Removing $LOG_BACKUPS log backup file(s)..."
rm -f /var/log/patchmon-agent.log.old.*
BACKUP_COUNT=$((BACKUP_COUNT + LOG_BACKUPS))
BACKUP_REMOVED=1
fi
if [ "$BACKUP_REMOVED" -eq 1 ]; then
success "Removed $BACKUP_COUNT backup file(s)"
else
info "No backup files found to remove"
fi
else
# Just count backup files without removing
CRED_BACKUPS=0
CONFIG_BACKUPS=0
GO_AGENT_BACKUPS=0
SHELL_AGENT_BACKUPS=0
LOG_BACKUPS=0
if ls /etc/patchmon/credentials.backup.* >/dev/null 2>&1; then
CRED_BACKUPS=$(ls /etc/patchmon/credentials.backup.* 2>/dev/null | wc -l | tr -d ' ')
fi
if ls /etc/patchmon/*.backup.* >/dev/null 2>&1; then
CONFIG_BACKUPS=$(ls /etc/patchmon/*.backup.* 2>/dev/null | wc -l | tr -d ' ')
fi
if ls /usr/local/bin/patchmon-agent.backup.* >/dev/null 2>&1; then
GO_AGENT_BACKUPS=$(ls /usr/local/bin/patchmon-agent.backup.* 2>/dev/null | wc -l | tr -d ' ')
fi
if ls /usr/local/bin/patchmon-agent.sh.backup.* >/dev/null 2>&1; then
SHELL_AGENT_BACKUPS=$(ls /usr/local/bin/patchmon-agent.sh.backup.* 2>/dev/null | wc -l | tr -d ' ')
fi
if ls /var/log/patchmon-agent.log.old.* >/dev/null 2>&1; then
LOG_BACKUPS=$(ls /var/log/patchmon-agent.log.old.* 2>/dev/null | wc -l | tr -d ' ')
fi
BACKUP_COUNT=$((CRED_BACKUPS + CONFIG_BACKUPS + GO_AGENT_BACKUPS + SHELL_AGENT_BACKUPS + LOG_BACKUPS))
if [ "$BACKUP_COUNT" -gt 0 ]; then
info "Found $BACKUP_COUNT backup file(s) - preserved for safety"
if [ "$SILENT_MODE" -eq 0 ]; then
printf "%b\n" "${BLUE}💡 To remove backups, run with: REMOVE_BACKUPS=1${NC}"
fi
else
info "No backup files found"
fi
fi
# Step 7: Final verification
info "🔍 Verifying removal..."
REMAINING_FILES=0
if [ -f "/usr/local/bin/patchmon-agent" ]; then
REMAINING_FILES=$((REMAINING_FILES + 1))
fi
if [ -f "/usr/local/bin/patchmon-agent.sh" ]; then
REMAINING_FILES=$((REMAINING_FILES + 1))
fi
if [ -f "/etc/systemd/system/patchmon-agent.service" ]; then
REMAINING_FILES=$((REMAINING_FILES + 1))
fi
if [ -f "/etc/init.d/patchmon-agent" ]; then
REMAINING_FILES=$((REMAINING_FILES + 1))
fi
if [ -d "/etc/patchmon" ]; then
REMAINING_FILES=$((REMAINING_FILES + 1))
fi
if [ -f "/var/log/patchmon-agent.log" ]; then
REMAINING_FILES=$((REMAINING_FILES + 1))
fi
if crontab -l 2>/dev/null | grep -q "patchmon-agent"; then
REMAINING_FILES=$((REMAINING_FILES + 1))
fi
if [ "$REMAINING_FILES" -eq 0 ]; then
success "✅ PatchMon has been completely removed from the system!"
else
warning "⚠️ Some PatchMon files may still remain ($REMAINING_FILES items)"
if [ "$SILENT_MODE" -eq 0 ]; then
printf "%b\n" "${BLUE}💡 You may need to remove them manually${NC}"
fi
fi
if [ "$SILENT_MODE" -eq 0 ]; then
echo ""
printf "%b\n" "${GREEN}📋 Removal Summary:${NC}"
echo " • Agent binaries: Removed"
echo " • System services: Removed (systemd/OpenRC)"
echo " • Configuration files: Removed"
echo " • Log files: Removed"
echo " • Crontab entries: Removed"
echo " • Running processes: Stopped"
if [ "$REMOVE_BACKUPS" -eq 1 ]; then
echo " • Backup files: Removed"
else
echo " • Backup files: Preserved (${BACKUP_COUNT} files)"
fi
echo ""
if [ "$REMOVE_BACKUPS" -eq 0 ] && [ "$BACKUP_COUNT" -gt 0 ]; then
printf "%b\n" "${BLUE}🔧 Manual cleanup (if needed):${NC}"
echo " • Remove backup files: curl -s \${PATCHMON_URL}/api/v1/hosts/remove | sudo REMOVE_BACKUPS=1 sh"
echo ""
fi
fi
success "🎉 PatchMon removal completed!"
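The silent-mode plumbing in the script above can be reduced to a small POSIX sketch. Variable and function names mirror the script; the point is that colors are only defined when output is wanted, and `printf "%b\n"` interprets the escape sequences portably under dash, ash, and bash alike (unlike `echo -e`):

```shell
#!/bin/sh
# Minimal sketch of the removal script's silent-mode output pattern.
SILENT_MODE=0
[ -n "${SILENT:-}" ] && SILENT_MODE=1

# Colors only when output is wanted; empty strings otherwise.
if [ "$SILENT_MODE" -eq 0 ]; then
    GREEN='\033[0;32m'; NC='\033[0m'
else
    GREEN=''; NC=''
fi

success() {
    # printf "%b" expands the escapes portably; suppressed entirely when silent
    [ "$SILENT_MODE" -eq 0 ] && printf "%b\n" "${GREEN}$1${NC}"
    return 0
}

success "PatchMon removal completed"
```

Invoked as `SILENT=1 sh remove.sh`, every helper becomes a no-op, which is what makes the script safe to run from automation.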

View File

@@ -230,6 +230,40 @@ while IFS= read -r line; do
info " ✓ Host enrolled successfully: $api_id"
# Check if agent is already installed and working
info " Checking if agent is already configured..."
config_check=$(timeout 10 pct exec "$vmid" -- bash -c "
if [[ -f /etc/patchmon/config.yml ]] && [[ -f /etc/patchmon/credentials.yml ]]; then
if [[ -f /usr/local/bin/patchmon-agent ]]; then
# Try to ping using existing configuration
if /usr/local/bin/patchmon-agent ping >/dev/null 2>&1; then
echo 'ping_success'
else
echo 'ping_failed'
fi
else
echo 'binary_missing'
fi
else
echo 'not_configured'
fi
" 2>/dev/null </dev/null || echo "error")
if [[ "$config_check" == "ping_success" ]]; then
info " ✓ Host already enrolled and agent ping successful - skipping"
((skipped_count++)) || true
echo ""
continue
elif [[ "$config_check" == "ping_failed" ]]; then
warn " ⚠ Agent configuration exists but ping failed - will reinstall"
elif [[ "$config_check" == "binary_missing" ]]; then
warn " ⚠ Config exists but agent binary missing - will reinstall"
elif [[ "$config_check" == "not_configured" ]]; then
info " Agent not yet configured - proceeding with installation"
else
warn " ⚠ Could not check agent status - proceeding with installation"
fi
# Ensure curl is installed in the container
info " Checking for curl in container..."
curl_check=$(timeout 10 pct exec "$vmid" -- bash -c "command -v curl >/dev/null 2>&1 && echo 'installed' || echo 'missing'" 2>/dev/null </dev/null || echo "error")
@@ -283,9 +317,11 @@ while IFS= read -r line; do
install_exit_code=0
# Download and execute in separate steps to avoid stdin issues with piping
# Pass CURL_FLAGS as environment variable to container
install_output=$(timeout 180 pct exec "$vmid" -- bash -c "
export CURL_FLAGS='$CURL_FLAGS'
cd /tmp
curl \$CURL_FLAGS \
-H \"X-API-ID: $api_id\" \
-H \"X-API-KEY: $api_key\" \
-o patchmon-install.sh \
@@ -422,9 +458,11 @@ if [[ ${#dpkg_error_containers[@]} -gt 0 ]]; then
info " Retrying agent installation..."
install_exit_code=0
# Pass CURL_FLAGS as environment variable to container
install_output=$(timeout 180 pct exec "$vmid" -- bash -c "
export CURL_FLAGS='$CURL_FLAGS'
cd /tmp
curl \$CURL_FLAGS \
-H \"X-API-ID: $api_id\" \
-H \"X-API-KEY: $api_key\" \
-o patchmon-install.sh \

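The `curl $CURL_FLAGS` → `curl \$CURL_FLAGS` change in the hunks above is a quoting fix: inside the double-quoted `pct exec … bash -c "…"` payload, an unescaped variable is expanded by the outer shell while the command string is being built, whereas the escaped form survives as literal text and is expanded by the inner shell in the container, which picks the value up from the `export CURL_FLAGS=…` line. A sketch of the difference using plain strings (hypothetical flag value, no `pct exec`):

```shell
#!/bin/sh
CURL_FLAGS="--insecure"   # hypothetical value, for illustration only

# Unescaped: expanded NOW, by the shell building the command string.
unescaped="curl $CURL_FLAGS -o patchmon-install.sh"

# Escaped: the literal text $CURL_FLAGS survives into the string and is
# expanded LATER, by whichever shell eventually runs it (e.g. inside the
# container, after `export CURL_FLAGS=...`).
escaped="curl \$CURL_FLAGS -o patchmon-install.sh"

echo "$unescaped"
echo "$escaped"
```

Deferring expansion matters here because the inner shell runs in a different environment than the host building the string.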
View File

@@ -1,23 +1,36 @@
# Database Configuration
DATABASE_URL="postgresql://patchmon_user:your-password-here@localhost:5432/patchmon_db"
PM_DB_CONN_MAX_ATTEMPTS=30
PM_DB_CONN_WAIT_INTERVAL=2
# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_USER=your-redis-username-here
REDIS_PASSWORD=your-redis-password-here
REDIS_DB=0
# Database Connection Pool Configuration (Prisma)
DB_CONNECTION_LIMIT=30 # Maximum connections per instance (default: 30)
DB_POOL_TIMEOUT=20 # Seconds to wait for available connection (default: 20)
DB_CONNECT_TIMEOUT=10 # Seconds to wait for initial connection (default: 10)
DB_IDLE_TIMEOUT=300 # Seconds before closing idle connections (default: 300)
DB_MAX_LIFETIME=1800 # Maximum lifetime of a connection in seconds (default: 1800)
# JWT Configuration
JWT_SECRET=your-secure-random-secret-key-change-this-in-production
JWT_EXPIRES_IN=1h
JWT_REFRESH_EXPIRES_IN=7d
# Server Configuration
PORT=3001
NODE_ENV=production
# API Configuration
API_VERSION=v1
# CORS Configuration
CORS_ORIGIN=http://localhost:3000
# Session Configuration
SESSION_INACTIVITY_TIMEOUT_MINUTES=30
# User Configuration
DEFAULT_USER_ROLE=user
# Rate Limiting (times in milliseconds)
RATE_LIMIT_WINDOW_MS=900000
RATE_LIMIT_MAX=5000
@@ -26,20 +39,23 @@ AUTH_RATE_LIMIT_MAX=500
AGENT_RATE_LIMIT_WINDOW_MS=60000
AGENT_RATE_LIMIT_MAX=1000
# Logging
LOG_LEVEL=info
ENABLE_LOGGING=true
# TFA Configuration (optional - used if TFA is enabled)
TFA_REMEMBER_ME_EXPIRES_IN=30d
TFA_MAX_REMEMBER_SESSIONS=5
TFA_SUSPICIOUS_ACTIVITY_THRESHOLD=3
# Timezone Configuration
# Set the timezone for timestamps and logs (e.g., 'UTC', 'America/New_York', 'Europe/London')
# Defaults to UTC if not set. This ensures consistent timezone handling across the application.
TZ=UTC
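The placeholder values above (`JWT_SECRET`, the database and Redis passwords) must be replaced before deployment. One way to generate a strong secret, assuming `openssl` is available on the host:

```shell
#!/bin/sh
# Generate a 256-bit random secret, hex-encoded (64 characters),
# suitable for JWT_SECRET in the .env file above.
JWT_SECRET=$(openssl rand -hex 32)
echo "JWT_SECRET=$JWT_SECRET"
```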

View File

@@ -1,6 +1,6 @@
{
"name": "patchmon-backend",
"version": "1.3.4",
"description": "Backend API for Linux Patch Monitoring System",
"license": "AGPL-3.0",
"main": "src/server.js",

View File

@@ -0,0 +1,16 @@
-- CreateTable
CREATE TABLE "system_statistics" (
"id" TEXT NOT NULL,
"unique_packages_count" INTEGER NOT NULL,
"unique_security_count" INTEGER NOT NULL,
"total_packages" INTEGER NOT NULL,
"total_hosts" INTEGER NOT NULL,
"hosts_needing_updates" INTEGER NOT NULL,
"timestamp" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "system_statistics_pkey" PRIMARY KEY ("id")
);
-- CreateIndex
CREATE INDEX "system_statistics_timestamp_idx" ON "system_statistics"("timestamp");

View File

@@ -0,0 +1,4 @@
-- AlterTable
-- Add color_theme field to settings table for customizable app theming
ALTER TABLE "settings" ADD COLUMN "color_theme" TEXT NOT NULL DEFAULT 'default';

View File

@@ -0,0 +1,14 @@
-- AddMetricsTelemetry
-- Add anonymous metrics and telemetry fields to settings table
-- Add metrics fields to settings table
ALTER TABLE "settings" ADD COLUMN "metrics_enabled" BOOLEAN NOT NULL DEFAULT true;
ALTER TABLE "settings" ADD COLUMN "metrics_anonymous_id" TEXT;
ALTER TABLE "settings" ADD COLUMN "metrics_last_sent" TIMESTAMP(3);
-- Generate UUID for existing records (if any exist)
-- This will use PostgreSQL's gen_random_uuid() function
UPDATE "settings"
SET "metrics_anonymous_id" = gen_random_uuid()::text
WHERE "metrics_anonymous_id" IS NULL;

View File

@@ -0,0 +1,74 @@
-- CreateTable
CREATE TABLE "docker_volumes" (
"id" TEXT NOT NULL,
"host_id" TEXT NOT NULL,
"volume_id" TEXT NOT NULL,
"name" TEXT NOT NULL,
"driver" TEXT NOT NULL,
"mountpoint" TEXT,
"renderer" TEXT,
"scope" TEXT NOT NULL DEFAULT 'local',
"labels" JSONB,
"options" JSONB,
"size_bytes" BIGINT,
"ref_count" INTEGER NOT NULL DEFAULT 0,
"created_at" TIMESTAMP(3) NOT NULL,
"updated_at" TIMESTAMP(3) NOT NULL,
"last_checked" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "docker_volumes_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "docker_networks" (
"id" TEXT NOT NULL,
"host_id" TEXT NOT NULL,
"network_id" TEXT NOT NULL,
"name" TEXT NOT NULL,
"driver" TEXT NOT NULL,
"scope" TEXT NOT NULL DEFAULT 'local',
"ipv6_enabled" BOOLEAN NOT NULL DEFAULT false,
"internal" BOOLEAN NOT NULL DEFAULT false,
"attachable" BOOLEAN NOT NULL DEFAULT true,
"ingress" BOOLEAN NOT NULL DEFAULT false,
"config_only" BOOLEAN NOT NULL DEFAULT false,
"labels" JSONB,
"ipam" JSONB,
"container_count" INTEGER NOT NULL DEFAULT 0,
"created_at" TIMESTAMP(3),
"updated_at" TIMESTAMP(3) NOT NULL,
"last_checked" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "docker_networks_pkey" PRIMARY KEY ("id")
);
-- CreateIndex
CREATE INDEX "docker_volumes_host_id_idx" ON "docker_volumes"("host_id");
-- CreateIndex
CREATE INDEX "docker_volumes_name_idx" ON "docker_volumes"("name");
-- CreateIndex
CREATE INDEX "docker_volumes_driver_idx" ON "docker_volumes"("driver");
-- CreateIndex
CREATE UNIQUE INDEX "docker_volumes_host_id_volume_id_key" ON "docker_volumes"("host_id", "volume_id");
-- CreateIndex
CREATE INDEX "docker_networks_host_id_idx" ON "docker_networks"("host_id");
-- CreateIndex
CREATE INDEX "docker_networks_name_idx" ON "docker_networks"("name");
-- CreateIndex
CREATE INDEX "docker_networks_driver_idx" ON "docker_networks"("driver");
-- CreateIndex
CREATE UNIQUE INDEX "docker_networks_host_id_network_id_key" ON "docker_networks"("host_id", "network_id");
-- AddForeignKey
ALTER TABLE "docker_volumes" ADD CONSTRAINT "docker_volumes_host_id_fkey" FOREIGN KEY ("host_id") REFERENCES "hosts"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "docker_networks" ADD CONSTRAINT "docker_networks_host_id_fkey" FOREIGN KEY ("host_id") REFERENCES "hosts"("id") ON DELETE CASCADE ON UPDATE CASCADE;

View File

@@ -0,0 +1,6 @@
-- AlterTable
ALTER TABLE "users" ADD COLUMN IF NOT EXISTS "theme_preference" VARCHAR(10) DEFAULT 'dark';
-- AlterTable
ALTER TABLE "users" ADD COLUMN IF NOT EXISTS "color_theme" VARCHAR(50) DEFAULT 'cyber_blue';

View File

@@ -0,0 +1,3 @@
-- AlterTable
ALTER TABLE "auto_enrollment_tokens" ADD COLUMN "scopes" JSONB;

View File

@@ -0,0 +1,13 @@
-- Remove machine_id unique constraint and make it nullable
-- This allows multiple hosts with the same machine_id
-- Duplicate detection now relies on config.yml/credentials.yml checking instead
-- Drop the unique constraint
ALTER TABLE "hosts" DROP CONSTRAINT IF EXISTS "hosts_machine_id_key";
-- Make machine_id nullable
ALTER TABLE "hosts" ALTER COLUMN "machine_id" DROP NOT NULL;
-- Keep the index for query performance (but not unique)
CREATE INDEX IF NOT EXISTS "hosts_machine_id_idx" ON "hosts"("machine_id");

View File

@@ -0,0 +1,7 @@
-- AlterTable
ALTER TABLE "hosts" ADD COLUMN "needs_reboot" BOOLEAN DEFAULT false;
ALTER TABLE "hosts" ADD COLUMN "installed_kernel_version" TEXT;
-- CreateIndex
CREATE INDEX "hosts_needs_reboot_idx" ON "hosts"("needs_reboot");
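The `needs_reboot` column is populated by the agent; the detection logic is not part of this migration, but a typical distro-specific probe looks like the following sketch (assumptions: Debian's reboot-required flag file, and dnf-utils' `needs-restarting` on RHEL-family systems):

```shell
#!/bin/sh
# Hypothetical reboot-required probe feeding hosts.needs_reboot.
needs_reboot=false
if [ -f /var/run/reboot-required ]; then
    # Debian/Ubuntu: flag file written by package post-install hooks
    needs_reboot=true
elif command -v needs-restarting >/dev/null 2>&1; then
    # RHEL-family (dnf-utils): exit status 1 means a reboot is required
    needs-restarting -r >/dev/null 2>&1 || needs_reboot=true
fi
echo "needs_reboot=$needs_reboot"
```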

View File

@@ -1,5 +1,6 @@
generator client {
provider = "prisma-client-js"
binaryTargets = ["native", "linux-musl-openssl-3.0.x"]
}
datasource db {
@@ -81,7 +82,7 @@ model host_repositories {
model hosts {
id String @id
machine_id String?
friendly_name String
ip String?
os_type String
@@ -109,15 +110,19 @@ model hosts {
swap_size Int?
system_uptime String?
notes String?
needs_reboot Boolean? @default(false)
host_packages host_packages[]
host_repositories host_repositories[]
host_group_memberships host_group_memberships[]
update_history update_history[]
job_history job_history[]
docker_volumes docker_volumes[]
docker_networks docker_networks[]
@@index([machine_id])
@@index([friendly_name])
@@index([hostname])
@@index([needs_reboot])
}
model packages {
@@ -170,34 +175,37 @@ model role_permissions {
}
model settings {
id String @id
server_url String @default("http://localhost:3001")
server_protocol String @default("http")
server_host String @default("localhost")
server_port Int @default(3001)
created_at DateTime @default(now())
updated_at DateTime
update_interval Int @default(60)
auto_update Boolean @default(false)
github_repo_url String @default("https://github.com/PatchMon/PatchMon.git")
ssh_key_path String?
repository_type String @default("public")
last_update_check DateTime?
latest_version String?
update_available Boolean @default(false)
signup_enabled Boolean @default(false)
default_user_role String @default("user")
ignore_ssl_self_signed Boolean @default(false)
logo_dark String? @default("/assets/logo_dark.png")
logo_light String? @default("/assets/logo_light.png")
favicon String? @default("/assets/logo_square.svg")
metrics_enabled Boolean @default(true)
metrics_anonymous_id String?
metrics_last_sent DateTime?
}
model update_history {
id String @id
host_id String
packages_count Int
security_count Int
total_packages Int?
payload_size_kb Float?
execution_time Float?
@@ -207,6 +215,18 @@ model update_history {
hosts hosts @relation(fields: [host_id], references: [id], onDelete: Cascade)
}
model system_statistics {
id String @id
unique_packages_count Int
unique_security_count Int
total_packages Int
total_hosts Int
hosts_needing_updates Int
timestamp DateTime @default(now())
@@index([timestamp])
}
model users {
id String @id
username String @unique
@@ -222,6 +242,8 @@ model users {
tfa_secret String?
first_name String?
last_name String?
theme_preference String? @default("dark")
color_theme String? @default("cyber_blue")
dashboard_preferences dashboard_preferences[]
user_sessions user_sessions[]
auto_enrollment_tokens auto_enrollment_tokens[]
@@ -269,6 +291,7 @@ model auto_enrollment_tokens {
last_used_at DateTime?
expires_at DateTime?
metadata Json?
scopes Json?
users users? @relation(fields: [created_by_user_id], references: [id], onDelete: SetNull)
host_groups host_groups? @relation(fields: [default_host_group_id], references: [id], onDelete: SetNull)
@@ -338,6 +361,56 @@ model docker_image_updates {
@@index([is_security_update])
}
model docker_volumes {
id String @id
host_id String
volume_id String
name String
driver String
mountpoint String?
renderer String?
scope String @default("local")
labels Json?
options Json?
size_bytes BigInt?
ref_count Int @default(0)
created_at DateTime
updated_at DateTime
last_checked DateTime @default(now())
hosts hosts @relation(fields: [host_id], references: [id], onDelete: Cascade)
@@unique([host_id, volume_id])
@@index([host_id])
@@index([name])
@@index([driver])
}
model docker_networks {
id String @id
host_id String
network_id String
name String
driver String
scope String @default("local")
ipv6_enabled Boolean @default(false)
internal Boolean @default(false)
attachable Boolean @default(true)
ingress Boolean @default(false)
config_only Boolean @default(false)
labels Json?
ipam Json? // IPAM configuration (driver, config, options)
container_count Int @default(0)
created_at DateTime?
updated_at DateTime
last_checked DateTime @default(now())
hosts hosts @relation(fields: [host_id], references: [id], onDelete: Cascade)
@@unique([host_id, network_id])
@@index([host_id])
@@index([name])
@@index([driver])
}
model job_history {
id String @id
job_id String


@@ -16,12 +16,28 @@ function getOptimizedDatabaseUrl() {
// Parse the URL
const url = new URL(originalUrl);
// Add connection pooling parameters for multiple instances
url.searchParams.set("connection_limit", "5"); // Reduced from default 10
url.searchParams.set("pool_timeout", "10"); // 10 seconds
url.searchParams.set("connect_timeout", "10"); // 10 seconds
url.searchParams.set("idle_timeout", "300"); // 5 minutes
url.searchParams.set("max_lifetime", "1800"); // 30 minutes
// Add connection pooling parameters - configurable via environment variables
const connectionLimit = process.env.DB_CONNECTION_LIMIT || "30";
const poolTimeout = process.env.DB_POOL_TIMEOUT || "20";
const connectTimeout = process.env.DB_CONNECT_TIMEOUT || "10";
const idleTimeout = process.env.DB_IDLE_TIMEOUT || "300";
const maxLifetime = process.env.DB_MAX_LIFETIME || "1800";
url.searchParams.set("connection_limit", connectionLimit);
url.searchParams.set("pool_timeout", poolTimeout);
url.searchParams.set("connect_timeout", connectTimeout);
url.searchParams.set("idle_timeout", idleTimeout);
url.searchParams.set("max_lifetime", maxLifetime);
// Log connection pool settings in development/debug mode
if (
process.env.ENABLE_LOGGING === "true" ||
process.env.LOG_LEVEL === "debug"
) {
console.log(
`[Database Pool] connection_limit=${connectionLimit}, pool_timeout=${poolTimeout}s, connect_timeout=${connectTimeout}s`,
);
}
return url.toString();
}
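The pooling change above can be sketched as a standalone helper. This is a minimal reproduction of the env-var-driven URL rewriting, not the full `getOptimizedDatabaseUrl()`; the example connection string and `buildPooledUrl` name are illustrative.

```javascript
// Sketch: append configurable pool parameters to a DATABASE_URL,
// mirroring the env vars and defaults used in the diff above.
function buildPooledUrl(originalUrl, env = process.env) {
  const url = new URL(originalUrl);
  url.searchParams.set("connection_limit", env.DB_CONNECTION_LIMIT || "30");
  url.searchParams.set("pool_timeout", env.DB_POOL_TIMEOUT || "20");
  url.searchParams.set("connect_timeout", env.DB_CONNECT_TIMEOUT || "10");
  url.searchParams.set("idle_timeout", env.DB_IDLE_TIMEOUT || "300");
  url.searchParams.set("max_lifetime", env.DB_MAX_LIFETIME || "1800");
  return url.toString();
}

// Example (placeholder credentials, not a real database):
const pooled = buildPooledUrl(
  "postgresql://user:pass@localhost:5432/patchmon",
  {},
);
console.log(pooled);
```

Because the values come from `searchParams.set`, re-running the function on an already-rewritten URL is idempotent rather than appending duplicates.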


@@ -0,0 +1,113 @@
const { getPrismaClient } = require("../config/prisma");
const bcrypt = require("bcryptjs");
const prisma = getPrismaClient();
/**
* Middleware factory to authenticate API tokens using Basic Auth
* @param {string} integrationType - The expected integration type (e.g., "api", "gethomepage")
* @returns {Function} Express middleware function
*/
const authenticateApiToken = (integrationType) => {
return async (req, res, next) => {
try {
const authHeader = req.headers.authorization;
if (!authHeader || !authHeader.startsWith("Basic ")) {
return res
.status(401)
.json({ error: "Missing or invalid authorization header" });
}
// Decode base64 credentials
const base64Credentials = authHeader.split(" ")[1];
const credentials = Buffer.from(base64Credentials, "base64").toString(
"ascii",
);
const [apiKey, apiSecret] = credentials.split(":");
if (!apiKey || !apiSecret) {
return res.status(401).json({ error: "Invalid credentials format" });
}
// Find the token in database
const token = await prisma.auto_enrollment_tokens.findUnique({
where: { token_key: apiKey },
include: {
users: {
select: {
id: true,
username: true,
role: true,
},
},
},
});
if (!token) {
console.log(`API key not found: ${apiKey}`);
return res.status(401).json({ error: "Invalid API key" });
}
// Check if token is active
if (!token.is_active) {
return res.status(401).json({ error: "API key is disabled" });
}
// Check if token has expired
if (token.expires_at && new Date(token.expires_at) < new Date()) {
return res.status(401).json({ error: "API key has expired" });
}
// Check if token is for the expected integration type
if (token.metadata?.integration_type !== integrationType) {
return res.status(401).json({ error: "Invalid API key type" });
}
// Verify the secret
const isValidSecret = await bcrypt.compare(apiSecret, token.token_secret);
if (!isValidSecret) {
return res.status(401).json({ error: "Invalid API secret" });
}
// Check IP restrictions if any
if (token.allowed_ip_ranges && token.allowed_ip_ranges.length > 0) {
const clientIp = req.ip || req.connection.remoteAddress;
const forwardedFor = req.headers["x-forwarded-for"];
const realIp = req.headers["x-real-ip"];
// Get the actual client IP (considering proxies)
const actualClientIp = forwardedFor
? forwardedFor.split(",")[0].trim()
: realIp || clientIp;
const isAllowedIp = token.allowed_ip_ranges.some((range) => {
// Simple IP range check (can be enhanced for CIDR support)
return actualClientIp.startsWith(range) || actualClientIp === range;
});
if (!isAllowedIp) {
console.log(
`IP validation failed. Client IP: ${actualClientIp}, Allowed ranges: ${token.allowed_ip_ranges.join(", ")}`,
);
return res.status(403).json({ error: "IP address not allowed" });
}
}
// Update last used timestamp
await prisma.auto_enrollment_tokens.update({
where: { id: token.id },
data: { last_used_at: new Date() },
});
// Attach token info to request
req.apiToken = token;
next();
} catch (error) {
console.error("API key authentication error:", error);
res.status(500).json({ error: "Authentication failed" });
}
};
};
module.exports = { authenticateApiToken };
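A client of this middleware sends the key/secret pair as HTTP Basic Auth. The sketch below shows the header construction and the decode the middleware performs; the key and secret values are placeholders.

```javascript
// Client side: build the Authorization header from an API key pair.
const apiKey = "patchmon_1234abcd"; // placeholder token_key
const apiSecret = "s3cr3t"; // placeholder token_secret
const header =
  "Basic " + Buffer.from(`${apiKey}:${apiSecret}`).toString("base64");

// Server side (as in the middleware above): decode and split on ":".
const decoded = Buffer.from(header.split(" ")[1], "base64").toString("ascii");
const [key, secret] = decoded.split(":");
```

Note one consequence of splitting on `":"`: a secret containing a colon would be truncated, which is fine here because the generated secrets are hex strings.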


@@ -0,0 +1,76 @@
/**
* Middleware factory to validate API token scopes
* Only applies to tokens with metadata.integration_type === "api"
* @param {string} resource - The resource being accessed (e.g., "host")
* @param {string} action - The action being performed (e.g., "get", "put", "patch", "update", "delete")
* @returns {Function} Express middleware function
*/
const requireApiScope = (resource, action) => {
return async (req, res, next) => {
try {
const token = req.apiToken;
// If no token attached, this should have been caught by auth middleware
if (!token) {
return res.status(401).json({ error: "Unauthorized" });
}
// Only validate scopes for API type tokens
if (token.metadata?.integration_type !== "api") {
// For non-API tokens, skip scope validation
return next();
}
// Check if token has scopes field
if (!token.scopes || typeof token.scopes !== "object") {
console.warn(
`API token ${token.token_key} missing scopes field for ${resource}:${action}`,
);
return res.status(403).json({
error: "Access denied",
message: "This API key does not have the required permissions",
});
}
// Check if resource exists in scopes
if (!token.scopes[resource]) {
console.warn(
`API token ${token.token_key} missing resource ${resource} for ${action}`,
);
return res.status(403).json({
error: "Access denied",
message: `This API key does not have access to ${resource}`,
});
}
// Check if action exists in resource scopes
if (!Array.isArray(token.scopes[resource])) {
console.warn(
`API token ${token.token_key} has invalid scopes structure for ${resource}`,
);
return res.status(403).json({
error: "Access denied",
message: "Invalid API key permissions configuration",
});
}
if (!token.scopes[resource].includes(action)) {
console.warn(
`API token ${token.token_key} missing action ${action} for resource ${resource}`,
);
return res.status(403).json({
error: "Access denied",
message: `This API key does not have permission to ${action} ${resource}`,
});
}
// Scope validation passed
next();
} catch (error) {
console.error("Scope validation error:", error);
res.status(500).json({ error: "Scope validation failed" });
}
};
};
module.exports = { requireApiScope };
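The `scopes` JSON the middleware inspects maps resource names to arrays of allowed actions. A minimal sketch of that shape and the check it implies (the resource/action values are examples):

```javascript
// Hypothetical scopes payload stored on an API token.
const scopes = { host: ["get"], package: ["get", "patch"] };

// Condensed form of the checks performed by requireApiScope.
function hasScope(scopes, resource, action) {
  return (
    Array.isArray(scopes?.[resource]) && scopes[resource].includes(action)
  );
}
```

In the routes this is wired per-endpoint, e.g. `requireApiScope("host", "get")` on the hosts listing, so a token can be scoped to read hosts without being able to modify them.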


@@ -0,0 +1,143 @@
const express = require("express");
const { getPrismaClient } = require("../config/prisma");
const { authenticateApiToken } = require("../middleware/apiAuth");
const { requireApiScope } = require("../middleware/apiScope");
const router = express.Router();
const prisma = getPrismaClient();
// Helper function to check if a string is a valid UUID
const isUUID = (str) => {
const uuidRegex =
/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
return uuidRegex.test(str);
};
// GET /api/v1/api/hosts - List hosts with IP and groups
router.get(
"/hosts",
authenticateApiToken("api"),
requireApiScope("host", "get"),
async (req, res) => {
try {
const { hostgroup } = req.query;
let whereClause = {};
let filterValues = [];
// Parse hostgroup filter (comma-separated names or UUIDs)
if (hostgroup) {
filterValues = hostgroup.split(",").map((g) => g.trim());
// Separate UUIDs from names
const uuidFilters = [];
const nameFilters = [];
for (const value of filterValues) {
if (isUUID(value)) {
uuidFilters.push(value);
} else {
nameFilters.push(value);
}
}
// Find host group IDs from names
const groupIds = [...uuidFilters];
if (nameFilters.length > 0) {
const groups = await prisma.host_groups.findMany({
where: {
name: {
in: nameFilters,
},
},
select: {
id: true,
name: true,
},
});
// Add found group IDs
groupIds.push(...groups.map((g) => g.id));
// Check if any name filters didn't match
const foundNames = groups.map((g) => g.name);
const notFoundNames = nameFilters.filter(
(name) => !foundNames.includes(name),
);
if (notFoundNames.length > 0) {
console.warn(`Host groups not found: ${notFoundNames.join(", ")}`);
}
}
// Filter hosts by group memberships
if (groupIds.length > 0) {
whereClause = {
host_group_memberships: {
some: {
host_group_id: {
in: groupIds,
},
},
},
};
} else {
// No valid groups found, return empty result
return res.json({
hosts: [],
total: 0,
filtered_by_groups: filterValues,
});
}
}
// Query hosts with groups
const hosts = await prisma.hosts.findMany({
where: whereClause,
select: {
id: true,
friendly_name: true,
hostname: true,
ip: true,
host_group_memberships: {
include: {
host_groups: {
select: {
id: true,
name: true,
},
},
},
},
},
orderBy: {
friendly_name: "asc",
},
});
// Format response
const formattedHosts = hosts.map((host) => ({
id: host.id,
friendly_name: host.friendly_name,
hostname: host.hostname,
ip: host.ip,
host_groups: host.host_group_memberships.map((membership) => ({
id: membership.host_groups.id,
name: membership.host_groups.name,
})),
}));
res.json({
hosts: formattedHosts,
total: formattedHosts.length,
filtered_by_groups: filterValues.length > 0 ? filterValues : undefined,
});
} catch (error) {
console.error("Error fetching hosts:", error);
res.status(500).json({ error: "Failed to fetch hosts" });
}
},
);
module.exports = router;
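The `hostgroup` query parameter accepts a comma-separated mix of group names and UUIDs. A sketch of the parsing step, with `splitGroupFilters` as an illustrative helper name and example filter values:

```javascript
// Same UUID check as the route's helper.
const isUUID = (str) =>
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i.test(str);

// Split ?hostgroup=... into UUID filters and name filters,
// as the route does before resolving names to group IDs.
function splitGroupFilters(hostgroup) {
  const values = hostgroup.split(",").map((g) => g.trim());
  return {
    uuids: values.filter(isUUID),
    names: values.filter((v) => !isUUID(v)),
  };
}

const parsed = splitGroupFilters(
  "web-servers, 123e4567-e89b-12d3-a456-426614174000",
);
```

Names are then resolved to IDs via `host_groups.findMany`, and both lists are merged into a single `in` filter on `host_group_memberships`.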


@@ -17,6 +17,7 @@ const {
refresh_access_token,
revoke_session,
revoke_all_user_sessions,
generate_device_fingerprint,
} = require("../utils/session_manager");
const router = express.Router();
@@ -788,11 +789,39 @@ router.post(
// Check if TFA is enabled
if (user.tfa_enabled) {
return res.status(200).json({
message: "TFA verification required",
requiresTfa: true,
username: user.username,
});
// Get device fingerprint from X-Device-ID header
const device_fingerprint = generate_device_fingerprint(req);
// Check if this device has a valid TFA bypass
if (device_fingerprint) {
const remembered_session = await prisma.user_sessions.findFirst({
where: {
user_id: user.id,
device_fingerprint: device_fingerprint,
tfa_remember_me: true,
tfa_bypass_until: { gt: new Date() }, // Bypass still valid
},
});
if (remembered_session) {
// Device is remembered and bypass is still valid - skip TFA
// Continue with login below
} else {
// No valid bypass for this device - require TFA
return res.status(200).json({
message: "TFA verification required",
requiresTfa: true,
username: user.username,
});
}
} else {
// No device ID provided - require TFA
return res.status(200).json({
message: "TFA verification required",
requiresTfa: true,
username: user.username,
});
}
}
// Update last login
@@ -807,7 +836,13 @@ router.post(
// Create session with access and refresh tokens
const ip_address = req.ip || req.connection.remoteAddress;
const user_agent = req.get("user-agent");
const session = await create_session(user.id, ip_address, user_agent);
const session = await create_session(
user.id,
ip_address,
user_agent,
false,
req,
);
res.json({
message: "Login successful",
@@ -825,6 +860,9 @@ router.post(
last_login: user.last_login,
created_at: user.created_at,
updated_at: user.updated_at,
// Include user preferences so they're available immediately after login
theme_preference: user.theme_preference,
color_theme: user.color_theme,
},
});
} catch (error) {
@@ -841,8 +879,10 @@ router.post(
body("username").notEmpty().withMessage("Username is required"),
body("token")
.isLength({ min: 6, max: 6 })
.withMessage("Token must be 6 digits"),
body("token").isNumeric().withMessage("Token must contain only numbers"),
.withMessage("Token must be 6 characters"),
body("token")
.matches(/^[A-Z0-9]{6}$/)
.withMessage("Token must be 6 alphanumeric characters"),
body("remember_me")
.optional()
.isBoolean()
@@ -915,10 +955,24 @@ router.post(
return res.status(401).json({ error: "Invalid verification code" });
}
// Update last login
await prisma.users.update({
// Update last login and fetch complete user data
const updatedUser = await prisma.users.update({
where: { id: user.id },
data: { last_login: new Date() },
select: {
id: true,
username: true,
email: true,
first_name: true,
last_name: true,
role: true,
is_active: true,
last_login: true,
created_at: true,
updated_at: true,
theme_preference: true,
color_theme: true,
},
});
// Create session with access and refresh tokens
@@ -938,14 +992,7 @@ router.post(
refresh_token: session.refresh_token,
expires_at: session.expires_at,
tfa_bypass_until: session.tfa_bypass_until,
user: {
id: user.id,
username: user.username,
email: user.email,
first_name: user.first_name,
last_name: user.last_name,
role: user.role,
},
user: updatedUser,
});
} catch (error) {
console.error("TFA verification error:", error);
@@ -977,13 +1024,27 @@ router.put(
.withMessage("Username must be at least 3 characters"),
body("email").optional().isEmail().withMessage("Valid email is required"),
body("first_name")
.optional()
.isLength({ min: 1 })
.withMessage("First name must be at least 1 character"),
.optional({ nullable: true, checkFalsy: true })
.custom((value) => {
// Allow null, undefined, or empty string to clear the field
if (value === null || value === undefined || value === "") {
return true;
}
// If provided, must be at least 1 character after trimming
return typeof value === "string" && value.trim().length >= 1;
})
.withMessage("First name must be at least 1 character if provided"),
body("last_name")
.optional()
.isLength({ min: 1 })
.withMessage("Last name must be at least 1 character"),
.optional({ nullable: true, checkFalsy: true })
.custom((value) => {
// Allow null, undefined, or empty string to clear the field
if (value === null || value === undefined || value === "") {
return true;
}
// If provided, must be at least 1 character after trimming
return typeof value === "string" && value.trim().length >= 1;
})
.withMessage("Last name must be at least 1 character if provided"),
],
async (req, res) => {
try {
@@ -993,12 +1054,27 @@ router.put(
}
const { username, email, first_name, last_name } = req.body;
const updateData = {};
const updateData = {
updated_at: new Date(),
};
if (username) updateData.username = username;
if (email) updateData.email = email;
if (first_name !== undefined) updateData.first_name = first_name || null;
if (last_name !== undefined) updateData.last_name = last_name || null;
// Handle all fields consistently - trim and update if provided
if (username) updateData.username = username.trim();
if (email) updateData.email = email.trim();
if (first_name !== undefined) {
// Allow null or empty string to clear the field, otherwise trim
updateData.first_name =
first_name === "" || first_name === null
? null
: first_name.trim() || null;
}
if (last_name !== undefined) {
// Allow null or empty string to clear the field, otherwise trim
updateData.last_name =
last_name === "" || last_name === null
? null
: last_name.trim() || null;
}
// Check if username/email already exists (excluding current user)
if (username || email) {
@@ -1023,6 +1099,7 @@ router.put(
}
}
// Update user with explicit commit
const updatedUser = await prisma.users.update({
where: { id: req.user.id },
data: updateData,
@@ -1039,9 +1116,29 @@ router.put(
},
});
// Explicitly refresh user data from database to ensure we return latest data
// This ensures consistency especially in high-concurrency scenarios
const freshUser = await prisma.users.findUnique({
where: { id: req.user.id },
select: {
id: true,
username: true,
email: true,
first_name: true,
last_name: true,
role: true,
is_active: true,
last_login: true,
updated_at: true,
},
});
// Use fresh data if available, otherwise fallback to updatedUser
const responseUser = freshUser || updatedUser;
res.json({
message: "Profile updated successfully",
user: updatedUser,
user: responseUser,
});
} catch (error) {
console.error("Update profile error:", error);
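The TFA bypass added to the login route boils down to one predicate over the remembered session: the device fingerprint must match a session where "remember me" was set and `tfa_bypass_until` is still in the future. A sketch, with the session shape reduced to the fields the query filters on:

```javascript
// Condensed form of the bypass condition in the login route above.
// The session object here is illustrative, not the full Prisma record.
function canSkipTfa(session, now = new Date()) {
  return Boolean(
    session &&
      session.tfa_remember_me &&
      session.tfa_bypass_until &&
      new Date(session.tfa_bypass_until) > now,
  );
}
```

When the predicate is false, or no `X-Device-ID` header was sent at all, the route falls back to the original `requiresTfa: true` response.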


@@ -125,6 +125,10 @@ router.post(
.optional({ nullable: true, checkFalsy: true })
.isISO8601()
.withMessage("Invalid date format"),
body("scopes")
.optional()
.isObject()
.withMessage("Scopes must be an object"),
],
async (req, res) => {
try {
@@ -140,6 +144,7 @@ router.post(
default_host_group_id,
expires_at,
metadata = {},
scopes,
} = req.body;
// Validate host group if provided
@@ -153,6 +158,32 @@ router.post(
}
}
// Validate scopes for API tokens
if (metadata.integration_type === "api" && scopes) {
// Validate scopes structure
if (typeof scopes !== "object" || scopes === null) {
return res.status(400).json({ error: "Scopes must be an object" });
}
// Validate each resource in scopes
for (const [resource, actions] of Object.entries(scopes)) {
if (!Array.isArray(actions)) {
return res.status(400).json({
error: `Scopes for resource "${resource}" must be an array of actions`,
});
}
// Validate action names
for (const action of actions) {
if (typeof action !== "string") {
return res.status(400).json({
error: `All actions in scopes must be strings`,
});
}
}
}
}
const { token_key, token_secret } = generate_auto_enrollment_token();
const hashed_secret = await bcrypt.hash(token_secret, 10);
@@ -168,6 +199,7 @@ router.post(
default_host_group_id: default_host_group_id || null,
expires_at: expires_at ? new Date(expires_at) : null,
metadata: { integration_type: "proxmox-lxc", ...metadata },
scopes: metadata.integration_type === "api" ? scopes || null : null,
updated_at: new Date(),
},
include: {
@@ -201,6 +233,7 @@ router.post(
default_host_group: token.host_groups,
created_by: token.users,
expires_at: token.expires_at,
scopes: token.scopes,
},
warning: "⚠️ Save the token_secret now - it cannot be retrieved later!",
});
@@ -232,6 +265,7 @@ router.get(
created_at: true,
default_host_group_id: true,
metadata: true,
scopes: true,
host_groups: {
select: {
id: true,
@@ -314,6 +348,10 @@ router.patch(
body("max_hosts_per_day").optional().isInt({ min: 1, max: 1000 }),
body("allowed_ip_ranges").optional().isArray(),
body("expires_at").optional().isISO8601(),
body("scopes")
.optional()
.isObject()
.withMessage("Scopes must be an object"),
],
async (req, res) => {
try {
@@ -323,6 +361,16 @@ router.patch(
}
const { tokenId } = req.params;
// First, get the existing token to check its integration type
const existing_token = await prisma.auto_enrollment_tokens.findUnique({
where: { id: tokenId },
});
if (!existing_token) {
return res.status(404).json({ error: "Token not found" });
}
const update_data = { updated_at: new Date() };
if (req.body.is_active !== undefined)
@@ -334,6 +382,41 @@ router.patch(
if (req.body.expires_at !== undefined)
update_data.expires_at = new Date(req.body.expires_at);
// Handle scopes updates for API tokens only
if (req.body.scopes !== undefined) {
if (existing_token.metadata?.integration_type === "api") {
// Validate scopes structure
const scopes = req.body.scopes;
if (typeof scopes !== "object" || scopes === null) {
return res.status(400).json({ error: "Scopes must be an object" });
}
// Validate each resource in scopes
for (const [resource, actions] of Object.entries(scopes)) {
if (!Array.isArray(actions)) {
return res.status(400).json({
error: `Scopes for resource "${resource}" must be an array of actions`,
});
}
// Validate action names
for (const action of actions) {
if (typeof action !== "string") {
return res.status(400).json({
error: `All actions in scopes must be strings`,
});
}
}
}
update_data.scopes = scopes;
} else {
return res.status(400).json({
error: "Scopes can only be updated for API integration tokens",
});
}
}
const token = await prisma.auto_enrollment_tokens.update({
where: { id: tokenId },
data: update_data,
@@ -398,19 +481,22 @@ router.delete(
);
// ========== AUTO-ENROLLMENT ENDPOINTS (Used by Scripts) ==========
// Future integrations can follow this pattern:
// - /proxmox-lxc - Proxmox LXC containers
// - /vmware-esxi - VMware ESXi VMs
// - /docker - Docker containers
// - /kubernetes - Kubernetes pods
// - /aws-ec2 - AWS EC2 instances
// Universal script-serving endpoint with type parameter
// Supported types:
// - proxmox-lxc - Proxmox LXC containers
// - direct-host - Direct host enrollment
// Future types:
// - vmware-esxi - VMware ESXi VMs
// - docker - Docker containers
// - kubernetes - Kubernetes pods
// Serve the Proxmox LXC enrollment script with credentials injected
router.get("/proxmox-lxc", async (req, res) => {
// Serve auto-enrollment scripts with credentials injected
router.get("/script", async (req, res) => {
try {
// Get token from query params
// Get parameters from query params
const token_key = req.query.token_key;
const token_secret = req.query.token_secret;
const script_type = req.query.type;
if (!token_key || !token_secret) {
return res
@@ -418,6 +504,25 @@ router.get("/proxmox-lxc", async (req, res) => {
.json({ error: "Token key and secret required as query parameters" });
}
if (!script_type) {
return res.status(400).json({
error:
"Script type required as query parameter (e.g., ?type=proxmox-lxc or ?type=direct-host)",
});
}
// Map script types to script file paths
const scriptMap = {
"proxmox-lxc": "proxmox_auto_enroll.sh",
"direct-host": "direct_host_auto_enroll.sh",
};
if (!scriptMap[script_type]) {
return res.status(400).json({
error: `Invalid script type: ${script_type}. Supported types: ${Object.keys(scriptMap).join(", ")}`,
});
}
// Validate token
const token = await prisma.auto_enrollment_tokens.findUnique({
where: { token_key: token_key },
@@ -443,13 +548,13 @@ router.get("/proxmox-lxc", async (req, res) => {
const script_path = path.join(
__dirname,
"../../../agents/proxmox_auto_enroll.sh",
`../../../agents/${scriptMap[script_type]}`,
);
if (!fs.existsSync(script_path)) {
return res
.status(404)
.json({ error: "Proxmox enrollment script not found" });
return res.status(404).json({
error: `Enrollment script not found: ${scriptMap[script_type]}`,
});
}
let script = fs.readFileSync(script_path, "utf8");
@@ -484,7 +589,7 @@ router.get("/proxmox-lxc", async (req, res) => {
const force_install = req.query.force === "true" || req.query.force === "1";
// Inject the token credentials, server URL, curl flags, and force flag into the script
const env_vars = `#!/bin/bash
const env_vars = `#!/bin/sh
# PatchMon Auto-Enrollment Configuration (Auto-generated)
export PATCHMON_URL="${server_url}"
export AUTO_ENROLLMENT_KEY="${token.token_key}"
@@ -508,11 +613,11 @@ export FORCE_INSTALL="${force_install ? "true" : "false"}"
res.setHeader("Content-Type", "text/plain");
res.setHeader(
"Content-Disposition",
'inline; filename="proxmox_auto_enroll.sh"',
`inline; filename="${scriptMap[script_type]}"`,
);
res.send(script);
} catch (error) {
console.error("Proxmox script serve error:", error);
console.error("Script serve error:", error);
res.status(500).json({ error: "Failed to serve enrollment script" });
}
});
@@ -526,8 +631,11 @@ router.post(
.isLength({ min: 1, max: 255 })
.withMessage("Friendly name is required"),
body("machine_id")
.optional()
.isLength({ min: 1, max: 255 })
.withMessage("Machine ID is required"),
.withMessage(
"Machine ID must be between 1 and 255 characters if provided",
),
body("metadata").optional().isObject(),
],
async (req, res) => {
@@ -543,24 +651,7 @@ router.post(
const api_id = `patchmon_${crypto.randomBytes(8).toString("hex")}`;
const api_key = crypto.randomBytes(32).toString("hex");
// Check if host already exists by machine_id (not hostname)
const existing_host = await prisma.hosts.findUnique({
where: { machine_id },
});
if (existing_host) {
return res.status(409).json({
error: "Host already exists",
host_id: existing_host.id,
api_id: existing_host.api_id,
machine_id: existing_host.machine_id,
friendly_name: existing_host.friendly_name,
message:
"This machine is already enrolled in PatchMon (matched by machine ID)",
});
}
// Create host
// Create host (no duplicate check - using config.yml checking instead)
const host = await prisma.hosts.create({
data: {
id: uuidv4(),
@@ -677,30 +768,7 @@ router.post(
try {
const { friendly_name, machine_id } = host_data;
if (!machine_id) {
results.failed.push({
friendly_name,
error: "Machine ID is required",
});
continue;
}
// Check if host already exists by machine_id
const existing_host = await prisma.hosts.findUnique({
where: { machine_id },
});
if (existing_host) {
results.skipped.push({
friendly_name,
machine_id,
reason: "Machine already enrolled",
api_id: existing_host.api_id,
});
continue;
}
// Generate credentials
// Generate credentials (no duplicate check - using config.yml checking instead)
const api_id = `patchmon_${crypto.randomBytes(8).toString("hex")}`;
const api_key = crypto.randomBytes(32).toString("hex");
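The generalized `/script` endpoint dispatches on a `type` query parameter through a static map. A sketch of that dispatch, with `resolveScript` as an illustrative helper name; the file names mirror the route:

```javascript
// Map of supported enrollment script types, as in the route above.
const scriptMap = {
  "proxmox-lxc": "proxmox_auto_enroll.sh",
  "direct-host": "direct_host_auto_enroll.sh",
};

// Resolve ?type=... to a script file name, rejecting unknown types.
function resolveScript(type) {
  if (!type) {
    throw new Error("Script type required as query parameter");
  }
  const file = scriptMap[type];
  if (!file) {
    throw new Error(
      `Invalid script type: ${type}. Supported types: ${Object.keys(scriptMap).join(", ")}`,
    );
  }
  return file;
}
```

Adding a future type (e.g. `docker` or `kubernetes`, listed as planned in the route comments) then only requires a new entry in the map plus the script file itself.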


@@ -218,6 +218,54 @@ router.post(
},
);
// Trigger manual Docker inventory cleanup
router.post(
"/trigger/docker-inventory-cleanup",
authenticateToken,
async (_req, res) => {
try {
const job = await queueManager.triggerDockerInventoryCleanup();
res.json({
success: true,
data: {
jobId: job.id,
message: "Docker inventory cleanup triggered successfully",
},
});
} catch (error) {
console.error("Error triggering Docker inventory cleanup:", error);
res.status(500).json({
success: false,
error: "Failed to trigger Docker inventory cleanup",
});
}
},
);
// Trigger manual system statistics collection
router.post(
"/trigger/system-statistics",
authenticateToken,
async (_req, res) => {
try {
const job = await queueManager.triggerSystemStatistics();
res.json({
success: true,
data: {
jobId: job.id,
message: "System statistics collection triggered successfully",
},
});
} catch (error) {
console.error("Error triggering system statistics collection:", error);
res.status(500).json({
success: false,
error: "Failed to trigger system statistics collection",
});
}
},
);
// Get queue health status
router.get("/health", authenticateToken, async (_req, res) => {
try {
@@ -274,7 +322,9 @@ router.get("/overview", authenticateToken, async (_req, res) => {
queueManager.getRecentJobs(QUEUE_NAMES.SESSION_CLEANUP, 1),
queueManager.getRecentJobs(QUEUE_NAMES.ORPHANED_REPO_CLEANUP, 1),
queueManager.getRecentJobs(QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP, 1),
queueManager.getRecentJobs(QUEUE_NAMES.DOCKER_INVENTORY_CLEANUP, 1),
queueManager.getRecentJobs(QUEUE_NAMES.AGENT_COMMANDS, 1),
queueManager.getRecentJobs(QUEUE_NAMES.SYSTEM_STATISTICS, 1),
]);
// Calculate overview metrics
@@ -283,19 +333,25 @@ router.get("/overview", authenticateToken, async (_req, res) => {
stats[QUEUE_NAMES.GITHUB_UPDATE_CHECK].delayed +
stats[QUEUE_NAMES.SESSION_CLEANUP].delayed +
stats[QUEUE_NAMES.ORPHANED_REPO_CLEANUP].delayed +
stats[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP].delayed,
stats[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP].delayed +
stats[QUEUE_NAMES.DOCKER_INVENTORY_CLEANUP].delayed +
stats[QUEUE_NAMES.SYSTEM_STATISTICS].delayed,
runningTasks:
stats[QUEUE_NAMES.GITHUB_UPDATE_CHECK].active +
stats[QUEUE_NAMES.SESSION_CLEANUP].active +
stats[QUEUE_NAMES.ORPHANED_REPO_CLEANUP].active +
stats[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP].active,
stats[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP].active +
stats[QUEUE_NAMES.DOCKER_INVENTORY_CLEANUP].active +
stats[QUEUE_NAMES.SYSTEM_STATISTICS].active,
failedTasks:
stats[QUEUE_NAMES.GITHUB_UPDATE_CHECK].failed +
stats[QUEUE_NAMES.SESSION_CLEANUP].failed +
stats[QUEUE_NAMES.ORPHANED_REPO_CLEANUP].failed +
stats[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP].failed,
stats[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP].failed +
stats[QUEUE_NAMES.DOCKER_INVENTORY_CLEANUP].failed +
stats[QUEUE_NAMES.SYSTEM_STATISTICS].failed,
totalAutomations: Object.values(stats).reduce((sum, queueStats) => {
return (
@@ -375,10 +431,11 @@ router.get("/overview", authenticateToken, async (_req, res) => {
stats: stats[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP],
},
{
name: "Collect Host Statistics",
queue: QUEUE_NAMES.AGENT_COMMANDS,
description: "Collects package statistics from connected agents only",
schedule: `Every ${settings.update_interval} minutes (Agent-driven)`,
name: "Docker Inventory Cleanup",
queue: QUEUE_NAMES.DOCKER_INVENTORY_CLEANUP,
description:
"Removes Docker containers and images for non-existent hosts",
schedule: "Daily at 4 AM",
lastRun: recentJobs[4][0]?.finishedOn
? new Date(recentJobs[4][0].finishedOn).toLocaleString()
: "Never",
@@ -388,8 +445,40 @@ router.get("/overview", authenticateToken, async (_req, res) => {
: recentJobs[4][0]
? "Success"
: "Never run",
stats: stats[QUEUE_NAMES.DOCKER_INVENTORY_CLEANUP],
},
{
name: "Collect Host Statistics",
queue: QUEUE_NAMES.AGENT_COMMANDS,
description: "Collects package statistics from connected agents only",
schedule: `Every ${settings.update_interval} minutes (Agent-driven)`,
lastRun: recentJobs[5][0]?.finishedOn
? new Date(recentJobs[5][0].finishedOn).toLocaleString()
: "Never",
lastRunTimestamp: recentJobs[5][0]?.finishedOn || 0,
status: recentJobs[5][0]?.failedReason
? "Failed"
: recentJobs[5][0]
? "Success"
: "Never run",
stats: stats[QUEUE_NAMES.AGENT_COMMANDS],
},
{
name: "System Statistics Collection",
queue: QUEUE_NAMES.SYSTEM_STATISTICS,
description: "Collects aggregated system-wide package statistics",
schedule: "Every 30 minutes",
lastRun: recentJobs[6][0]?.finishedOn
? new Date(recentJobs[6][0].finishedOn).toLocaleString()
: "Never",
lastRunTimestamp: recentJobs[6][0]?.finishedOn || 0,
status: recentJobs[6][0]?.failedReason
? "Failed"
: recentJobs[6][0]
? "Success"
: "Never run",
stats: stats[QUEUE_NAMES.SYSTEM_STATISTICS],
},
].sort((a, b) => {
// Sort by last run timestamp (most recent first)
// If both have never run (timestamp 0), maintain original order
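The overview totals above are repeated sums over the same per-queue counters; each new queue requires touching every sum. A reduce over the stats object covers all queues in one pass, a refactor sketch rather than the code as committed (queue names and counts below are illustrative):

```javascript
// Illustrative per-queue stats, shaped like queueManager.getQueueStats output.
const stats = {
  "github-update-check": { active: 1, delayed: 0, failed: 2 },
  "docker-inventory-cleanup": { active: 0, delayed: 3, failed: 0 },
  "system-statistics": { active: 0, delayed: 1, failed: 1 },
};

// Sum one counter across every queue instead of listing queues by hand.
const sumField = (field) =>
  Object.values(stats).reduce((sum, q) => sum + q[field], 0);

const overview = {
  scheduledTasks: sumField("delayed"),
  runningTasks: sumField("active"),
  failedTasks: sumField("failed"),
};
```

With this shape, registering a new queue automatically flows into the totals, avoiding the three-way edit seen in this hunk.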


@@ -92,58 +92,63 @@ async function createDefaultDashboardPreferences(userId, userRole = "user") {
requiredPermission: "can_view_hosts",
order: 5,
},
{
cardId: "hostsNeedingReboot",
requiredPermission: "can_view_hosts",
order: 6,
},
// Repository-related cards
{ cardId: "totalRepos", requiredPermission: "can_view_hosts", order: 6 },
{ cardId: "totalRepos", requiredPermission: "can_view_hosts", order: 7 },
// User management cards (admin only)
{ cardId: "totalUsers", requiredPermission: "can_view_users", order: 7 },
{ cardId: "totalUsers", requiredPermission: "can_view_users", order: 8 },
// System/Report cards
{
cardId: "osDistribution",
requiredPermission: "can_view_reports",
order: 8,
order: 9,
},
{
cardId: "osDistributionBar",
requiredPermission: "can_view_reports",
order: 9,
order: 10,
},
{
cardId: "osDistributionDoughnut",
requiredPermission: "can_view_reports",
order: 10,
order: 11,
},
{
cardId: "recentCollection",
requiredPermission: "can_view_hosts",
order: 11,
order: 12,
},
{
cardId: "updateStatus",
requiredPermission: "can_view_reports",
order: 12,
order: 13,
},
{
cardId: "packagePriority",
requiredPermission: "can_view_packages",
order: 13,
order: 14,
},
{
cardId: "packageTrends",
requiredPermission: "can_view_packages",
order: 14,
order: 15,
},
{
cardId: "recentUsers",
requiredPermission: "can_view_users",
order: 15,
order: 16,
},
{
cardId: "quickStats",
requiredPermission: "can_view_dashboard",
order: 16,
order: 17,
},
];
@@ -290,26 +295,33 @@ router.get("/defaults", authenticateToken, async (_req, res) => {
enabled: true,
order: 5,
},
{
cardId: "hostsNeedingReboot",
title: "Needs Reboots",
icon: "RotateCcw",
enabled: true,
order: 6,
},
{
cardId: "totalRepos",
title: "Repositories",
icon: "GitBranch",
enabled: true,
order: 6,
order: 7,
},
{
cardId: "totalUsers",
title: "Users",
icon: "Users",
enabled: true,
order: 7,
order: 8,
},
{
cardId: "osDistribution",
title: "OS Distribution",
icon: "BarChart3",
enabled: true,
order: 8,
order: 9,
},
{
cardId: "osDistributionBar",


@@ -41,6 +41,7 @@ router.get(
erroredHosts,
securityUpdates,
offlineHosts,
hostsNeedingReboot,
totalHostGroups,
totalUsers,
totalRepos,
@@ -106,6 +107,13 @@ router.get(
},
}),
// Hosts needing reboot
prisma.hosts.count({
where: {
needs_reboot: true,
},
}),
// Total host groups count
prisma.host_groups.count(),
@@ -174,6 +182,7 @@ router.get(
erroredHosts,
securityUpdates,
offlineHosts,
hostsNeedingReboot,
totalHostGroups,
totalUsers,
totalRepos,
@@ -193,11 +202,16 @@ router.get(
},
);
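The new `hostsNeedingReboot` count slots into the route's existing `Promise.all` of parallel Prisma counts. That pattern can be sketched without Prisma; `fakeCount` is an illustrative stand-in for `prisma.hosts.count({ where })`:

```javascript
// Illustrative in-memory hosts table (stand-in for the Prisma model).
const hosts = [
  { status: "active", needs_reboot: true },
  { status: "active", needs_reboot: false },
  { status: "error", needs_reboot: true },
];

// Stand-in for prisma.hosts.count({ where }): counts rows matching a predicate.
const fakeCount = async (predicate) => hosts.filter(predicate).length;

async function dashboardStats() {
  // Issue all counts concurrently instead of awaiting each one in sequence.
  const [totalHosts, erroredHosts, hostsNeedingReboot] = await Promise.all([
    fakeCount(() => true),
    fakeCount((h) => h.status === "error"),
    fakeCount((h) => h.needs_reboot === true),
  ]);
  return { totalHosts, erroredHosts, hostsNeedingReboot };
}
```

Adding a card is then a matter of appending one more count to the array and one more name to the destructuring, which is exactly the shape of this hunk.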
// Get hosts with their update status
// Get hosts with their update status - OPTIMIZED
router.get("/hosts", authenticateToken, requireViewHosts, async (_req, res) => {
try {
// Get settings once (outside the loop)
const settings = await prisma.settings.findFirst();
const updateIntervalMinutes = settings?.update_interval || 60;
const thresholdMinutes = updateIntervalMinutes * 2;
// Fetch hosts with groups
const hosts = await prisma.hosts.findMany({
// Show all hosts regardless of status
select: {
id: true,
machine_id: true,
@@ -212,6 +226,7 @@ router.get("/hosts", authenticateToken, requireViewHosts, async (_req, res) => {
auto_update: true,
notes: true,
api_id: true,
needs_reboot: true,
host_group_memberships: {
include: {
host_groups: {
@@ -223,61 +238,65 @@ router.get("/hosts", authenticateToken, requireViewHosts, async (_req, res) => {
},
},
},
_count: {
select: {
host_packages: {
where: {
needs_update: true,
},
},
},
},
},
orderBy: { last_update: "desc" },
});
// Get update counts for each host separately
const hostsWithUpdateInfo = await Promise.all(
hosts.map(async (host) => {
const updatesCount = await prisma.host_packages.count({
where: {
host_id: host.id,
needs_update: true,
},
});
// OPTIMIZATION: Get all package counts in 2 batch queries instead of N*2 queries
const hostIds = hosts.map((h) => h.id);
// Get total packages count for this host
const totalPackagesCount = await prisma.host_packages.count({
where: {
host_id: host.id,
},
});
// Get the agent update interval setting for stale calculation
const settings = await prisma.settings.findFirst();
const updateIntervalMinutes = settings?.update_interval || 60;
const thresholdMinutes = updateIntervalMinutes * 2;
// Calculate effective status based on reporting interval
const isStale = moment(host.last_update).isBefore(
moment().subtract(thresholdMinutes, "minutes"),
);
let effectiveStatus = host.status;
// Override status if host hasn't reported within threshold
if (isStale && host.status === "active") {
effectiveStatus = "inactive";
}
return {
...host,
updatesCount,
totalPackagesCount,
isStale,
effectiveStatus,
};
const [updateCounts, totalCounts] = await Promise.all([
// Get update counts for all hosts at once
prisma.host_packages.groupBy({
by: ["host_id"],
where: {
host_id: { in: hostIds },
needs_update: true,
},
_count: { id: true },
}),
// Get total counts for all hosts at once
prisma.host_packages.groupBy({
by: ["host_id"],
where: {
host_id: { in: hostIds },
},
_count: { id: true },
}),
]);
// Create lookup maps for O(1) access
const updateCountMap = new Map(
updateCounts.map((item) => [item.host_id, item._count.id]),
);
const totalCountMap = new Map(
totalCounts.map((item) => [item.host_id, item._count.id]),
);
// Process hosts with counts from maps (no more DB queries!)
const hostsWithUpdateInfo = hosts.map((host) => {
const updatesCount = updateCountMap.get(host.id) || 0;
const totalPackagesCount = totalCountMap.get(host.id) || 0;
// Calculate effective status based on reporting interval
const isStale = moment(host.last_update).isBefore(
moment().subtract(thresholdMinutes, "minutes"),
);
let effectiveStatus = host.status;
// Override status if host hasn't reported within threshold
if (isStale && host.status === "active") {
effectiveStatus = "inactive";
}
return {
...host,
updatesCount,
totalPackagesCount,
isStale,
effectiveStatus,
};
});
res.json(hostsWithUpdateInfo);
} catch (error) {
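The optimization in this hunk replaces two queries per host (N*2 total) with two grouped queries plus O(1) Map lookups. The pattern can be sketched in plain JavaScript, with the Prisma `groupBy` call simulated over an in-memory `host_packages` array:

```javascript
// Illustrative in-memory tables (stand-ins for the Prisma models).
const hosts = [{ id: "a" }, { id: "b" }];
const hostPackages = [
  { host_id: "a", needs_update: true },
  { host_id: "a", needs_update: false },
  { host_id: "b", needs_update: true },
  { host_id: "b", needs_update: true },
];

// Simulates prisma.host_packages.groupBy({ by: ["host_id"], where, _count }).
function groupCounts(rows, predicate = () => true) {
  const counts = new Map();
  for (const row of rows.filter(predicate)) {
    counts.set(row.host_id, (counts.get(row.host_id) || 0) + 1);
  }
  return counts;
}

// Two batch passes build both maps; per-host resolution is then O(1),
// with no further queries inside the map over hosts.
const updateCountMap = groupCounts(hostPackages, (p) => p.needs_update);
const totalCountMap = groupCounts(hostPackages);

const hostsWithUpdateInfo = hosts.map((host) => ({
  ...host,
  updatesCount: updateCountMap.get(host.id) || 0,
  totalPackagesCount: totalCountMap.get(host.id) || 0,
}));
```

The `|| 0` fallback matters: a host with no matching packages simply has no group row, so the map miss must default to zero rather than `undefined`.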
@@ -555,174 +574,216 @@ router.get(
const startDate = new Date();
startDate.setDate(endDate.getDate() - daysInt);
// Build where clause
const whereClause = {
timestamp: {
gte: startDate,
lte: endDate,
},
};
// Add host filter if specified
if (hostId && hostId !== "all" && hostId !== "undefined") {
whereClause.host_id = hostId;
}
// Get all update history records in the date range
const trendsData = await prisma.update_history.findMany({
where: whereClause,
select: {
timestamp: true,
packages_count: true,
security_count: true,
total_packages: true,
host_id: true,
status: true,
},
orderBy: {
timestamp: "asc",
},
});
// Enhanced data validation and processing
const processedData = trendsData
.filter((record) => {
// Enhanced validation
return (
record.total_packages !== null &&
record.total_packages >= 0 &&
record.packages_count >= 0 &&
record.security_count >= 0 &&
record.security_count <= record.packages_count && // Security can't exceed outdated
record.status === "success"
); // Only include successful reports
})
.map((record) => {
const date = new Date(record.timestamp);
let timeKey;
if (daysInt <= 1) {
// For hourly view, group by hour only (not minutes)
timeKey = date.toISOString().substring(0, 13); // YYYY-MM-DDTHH
} else {
// For daily view, group by day
timeKey = date.toISOString().split("T")[0]; // YYYY-MM-DD
}
return {
timeKey,
total_packages: record.total_packages,
packages_count: record.packages_count || 0,
security_count: record.security_count || 0,
host_id: record.host_id,
timestamp: record.timestamp,
};
});
// Determine if we need aggregation based on host filter
const needsAggregation =
!hostId || hostId === "all" || hostId === "undefined";
let trendsData;
if (needsAggregation) {
// For "All Hosts" mode, use system_statistics table
trendsData = await prisma.system_statistics.findMany({
where: {
timestamp: {
gte: startDate,
lte: endDate,
},
},
select: {
timestamp: true,
unique_packages_count: true,
unique_security_count: true,
total_packages: true,
total_hosts: true,
hosts_needing_updates: true,
},
orderBy: {
timestamp: "asc",
},
});
} else {
// For individual host, use update_history table
trendsData = await prisma.update_history.findMany({
where: {
host_id: hostId,
timestamp: {
gte: startDate,
lte: endDate,
},
},
select: {
timestamp: true,
packages_count: true,
security_count: true,
total_packages: true,
host_id: true,
status: true,
},
orderBy: {
timestamp: "asc",
},
});
}
// Process data based on source
let processedData;
let aggregatedArray;
if (needsAggregation) {
// For "All Hosts" mode, we need to calculate the actual total packages differently
// Instead of aggregating historical data (which is per-host), we'll use the current total
// and show that as a flat line, since total packages don't change much over time
// For "All Hosts" mode, data comes from system_statistics table
// Already aggregated, just need to format it
processedData = trendsData
.filter((record) => {
// Enhanced validation
return (
record.total_packages !== null &&
record.total_packages >= 0 &&
record.unique_packages_count >= 0 &&
record.unique_security_count >= 0 &&
record.unique_security_count <= record.unique_packages_count
);
})
.map((record) => {
const date = new Date(record.timestamp);
let timeKey;
// Get the current total packages count (unique packages across all hosts)
const currentTotalPackages = await prisma.packages.count({
where: {
host_packages: {
some: {}, // At least one host has this package
},
},
});
if (daysInt <= 1) {
// For "Last 24 hours", use full timestamp for each data point
// This allows plotting all individual data points
timeKey = date.toISOString(); // Full ISO timestamp
} else {
// For daily view, group by day
timeKey = date.toISOString().split("T")[0]; // YYYY-MM-DD
}
// Aggregate data by timeKey when looking at "All Hosts" or no specific host
const aggregatedData = processedData.reduce((acc, item) => {
if (!acc[item.timeKey]) {
acc[item.timeKey] = {
timeKey: item.timeKey,
total_packages: currentTotalPackages, // Use current total packages
packages_count: 0,
security_count: 0,
record_count: 0,
host_ids: new Set(),
min_timestamp: item.timestamp,
max_timestamp: item.timestamp,
return {
timeKey,
total_packages: record.total_packages,
packages_count: record.unique_packages_count,
security_count: record.unique_security_count,
timestamp: record.timestamp,
};
}
});
// For outdated and security packages: SUM (these represent counts across hosts)
acc[item.timeKey].packages_count += item.packages_count;
acc[item.timeKey].security_count += item.security_count;
if (daysInt <= 1) {
// For "Last 24 hours", use all individual data points without grouping
// Sort by timestamp
aggregatedArray = processedData.sort(
(a, b) => a.timestamp.getTime() - b.timestamp.getTime(),
);
} else {
// For longer periods, group by timeKey and take the latest value for each period
const aggregatedData = processedData.reduce((acc, item) => {
if (
!acc[item.timeKey] ||
item.timestamp > acc[item.timeKey].timestamp
) {
acc[item.timeKey] = item;
}
return acc;
}, {});
acc[item.timeKey].record_count += 1;
acc[item.timeKey].host_ids.add(item.host_id);
// Track timestamp range
if (item.timestamp < acc[item.timeKey].min_timestamp) {
acc[item.timeKey].min_timestamp = item.timestamp;
}
if (item.timestamp > acc[item.timeKey].max_timestamp) {
acc[item.timeKey].max_timestamp = item.timestamp;
}
return acc;
}, {});
// Convert to array and add metadata
aggregatedArray = Object.values(aggregatedData)
.map((item) => ({
...item,
host_count: item.host_ids.size,
host_ids: Array.from(item.host_ids),
}))
.sort((a, b) => a.timeKey.localeCompare(b.timeKey));
// Convert to array and sort
aggregatedArray = Object.values(aggregatedData).sort((a, b) =>
a.timeKey.localeCompare(b.timeKey),
);
}
} else {
// For specific host, show individual data points without aggregation
// But still group by timeKey to handle multiple reports from same host in same time period
const hostAggregatedData = processedData.reduce((acc, item) => {
if (!acc[item.timeKey]) {
acc[item.timeKey] = {
timeKey: item.timeKey,
total_packages: 0,
packages_count: 0,
security_count: 0,
record_count: 0,
host_ids: new Set([item.host_id]),
min_timestamp: item.timestamp,
max_timestamp: item.timestamp,
// For individual host, data comes from update_history table
processedData = trendsData
.filter((record) => {
// Enhanced validation
return (
record.total_packages !== null &&
record.total_packages >= 0 &&
record.packages_count >= 0 &&
record.security_count >= 0 &&
record.security_count <= record.packages_count &&
record.status === "success"
);
})
.map((record) => {
const date = new Date(record.timestamp);
let timeKey;
if (daysInt <= 1) {
// For "Last 24 hours", use full timestamp for each data point
// This allows plotting all individual data points
timeKey = date.toISOString(); // Full ISO timestamp
} else {
// For daily view, group by day
timeKey = date.toISOString().split("T")[0]; // YYYY-MM-DD
}
return {
timeKey,
total_packages: record.total_packages,
packages_count: record.packages_count || 0,
security_count: record.security_count || 0,
host_id: record.host_id,
timestamp: record.timestamp,
};
}
});
// For same host, take the latest values (not sum)
// This handles cases where a host reports multiple times in the same time period
if (item.timestamp > acc[item.timeKey].max_timestamp) {
acc[item.timeKey].total_packages = item.total_packages;
acc[item.timeKey].packages_count = item.packages_count;
acc[item.timeKey].security_count = item.security_count;
acc[item.timeKey].max_timestamp = item.timestamp;
}
if (daysInt <= 1) {
// For "Last 24 hours", use all individual data points without grouping
// Sort by timestamp
aggregatedArray = processedData.sort(
(a, b) => a.timestamp.getTime() - b.timestamp.getTime(),
);
} else {
// For longer periods, group by timeKey to handle multiple reports from same host in same time period
const hostAggregatedData = processedData.reduce((acc, item) => {
if (!acc[item.timeKey]) {
acc[item.timeKey] = {
timeKey: item.timeKey,
total_packages: 0,
packages_count: 0,
security_count: 0,
record_count: 0,
host_ids: new Set([item.host_id]),
min_timestamp: item.timestamp,
max_timestamp: item.timestamp,
};
}
acc[item.timeKey].record_count += 1;
// For same host, take the latest values (not sum)
// This handles cases where a host reports multiple times in the same time period
if (item.timestamp > acc[item.timeKey].max_timestamp) {
acc[item.timeKey].total_packages = item.total_packages;
acc[item.timeKey].packages_count = item.packages_count;
acc[item.timeKey].security_count = item.security_count;
acc[item.timeKey].max_timestamp = item.timestamp;
}
return acc;
}, {});
acc[item.timeKey].record_count += 1;
// Convert to array
aggregatedArray = Object.values(hostAggregatedData)
.map((item) => ({
...item,
host_count: item.host_ids.size,
host_ids: Array.from(item.host_ids),
}))
.sort((a, b) => a.timeKey.localeCompare(b.timeKey));
return acc;
}, {});
// Convert to array
aggregatedArray = Object.values(hostAggregatedData)
.map((item) => ({
...item,
host_count: item.host_ids.size,
host_ids: Array.from(item.host_ids),
}))
.sort((a, b) => a.timeKey.localeCompare(b.timeKey));
}
}
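Both branches above share the same latest-value-per-period grouping: records with the same `timeKey` collapse to the one with the newest timestamp, rather than being summed. Isolated as a sketch over in-memory data:

```javascript
// Collapse multiple reports in the same period to the most recent one
// (mirrors the reduce over timeKey above; the data is illustrative).
function latestPerPeriod(records) {
  const byKey = records.reduce((acc, item) => {
    if (!acc[item.timeKey] || item.timestamp > acc[item.timeKey].timestamp) {
      acc[item.timeKey] = item;
    }
    return acc;
  }, {});
  // Convert the lookup object to a chronologically sorted array.
  return Object.values(byKey).sort((a, b) => a.timeKey.localeCompare(b.timeKey));
}

const records = [
  { timeKey: "2025-11-16", timestamp: new Date("2025-11-16T08:00:00Z"), packages_count: 5 },
  { timeKey: "2025-11-16", timestamp: new Date("2025-11-16T20:00:00Z"), packages_count: 3 },
  { timeKey: "2025-11-15", timestamp: new Date("2025-11-15T12:00:00Z"), packages_count: 7 },
];
const series = latestPerPeriod(records);
```

Taking the latest rather than the sum is what makes repeated reports from one host in one day idempotent in the chart.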
// Handle sparse data by filling missing time periods
const fillMissingPeriods = (data, daysInt) => {
if (data.length === 0) {
return [];
}
// For "Last 24 hours", return data as-is without filling gaps
// This allows plotting all individual data points
if (daysInt <= 1) {
return data;
}
const filledData = [];
const startDate = new Date();
startDate.setDate(startDate.getDate() - daysInt);
@@ -732,50 +793,58 @@ router.get(
const endDate = new Date();
const currentDate = new Date(startDate);
// Find the last known values for interpolation
// Sort data by timeKey to get chronological order
const sortedData = [...data].sort((a, b) =>
a.timeKey.localeCompare(b.timeKey),
);
// Find the first actual data point (don't fill before this)
const firstDataPoint = sortedData[0];
const firstDataTimeKey = firstDataPoint?.timeKey;
// Track last known values as we iterate forward
let lastKnownValues = null;
if (data.length > 0) {
lastKnownValues = {
total_packages: data[0].total_packages,
packages_count: data[0].packages_count,
security_count: data[0].security_count,
};
}
let hasSeenFirstDataPoint = false;
while (currentDate <= endDate) {
let timeKey;
if (daysInt <= 1) {
timeKey = currentDate.toISOString().substring(0, 13); // Hourly
currentDate.setHours(currentDate.getHours() + 1);
} else {
timeKey = currentDate.toISOString().split("T")[0]; // Daily
currentDate.setDate(currentDate.getDate() + 1);
// For daily view, group by day
timeKey = currentDate.toISOString().split("T")[0]; // YYYY-MM-DD
currentDate.setDate(currentDate.getDate() + 1);
// Skip periods before the first actual data point
if (firstDataTimeKey && timeKey < firstDataTimeKey) {
continue;
}
if (dataMap.has(timeKey)) {
const item = dataMap.get(timeKey);
filledData.push(item);
// Update last known values
// Update last known values with actual data
lastKnownValues = {
total_packages: item.total_packages,
packages_count: item.packages_count,
security_count: item.security_count,
total_packages: item.total_packages || 0,
packages_count: item.packages_count || 0,
security_count: item.security_count || 0,
};
hasSeenFirstDataPoint = true;
} else {
// For missing periods, use the last known values (interpolation)
// This creates a continuous line instead of gaps
filledData.push({
timeKey,
total_packages: lastKnownValues?.total_packages || 0,
packages_count: lastKnownValues?.packages_count || 0,
security_count: lastKnownValues?.security_count || 0,
record_count: 0,
host_count: 0,
host_ids: [],
min_timestamp: null,
max_timestamp: null,
isInterpolated: true, // Mark as interpolated for debugging
});
// For missing periods AFTER the first data point, use forward-fill
// Only fill if we have a last known value and we've seen the first data point
if (lastKnownValues !== null && hasSeenFirstDataPoint) {
filledData.push({
timeKey,
total_packages: lastKnownValues.total_packages,
packages_count: lastKnownValues.packages_count,
security_count: lastKnownValues.security_count,
record_count: 0,
host_count: 0,
host_ids: [],
min_timestamp: null,
max_timestamp: null,
isInterpolated: true, // Mark as interpolated for debugging
});
}
// If we haven't seen the first data point yet, skip this period
}
}
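The rewritten fill logic has two rules: skip periods before the first real data point, and forward-fill gaps after it with the last known values, flagged `isInterpolated`. A simplified daily-only sketch:

```javascript
// Forward-fill daily gaps (simplified sketch of fillMissingPeriods above).
function forwardFillDaily(data, startDate, endDate) {
  const dataMap = new Map(data.map((d) => [d.timeKey, d]));
  const firstKey = [...data]
    .sort((a, b) => a.timeKey.localeCompare(b.timeKey))[0]?.timeKey;
  const filled = [];
  let last = null;
  const cursor = new Date(startDate);
  while (cursor <= endDate) {
    const timeKey = cursor.toISOString().split("T")[0]; // YYYY-MM-DD
    cursor.setDate(cursor.getDate() + 1);
    if (firstKey && timeKey < firstKey) continue; // nothing to fill before the first report
    if (dataMap.has(timeKey)) {
      const item = dataMap.get(timeKey);
      filled.push(item);
      last = item; // update last known values with actual data
    } else if (last) {
      // Gap after the first data point: repeat last known values, flagged.
      filled.push({ timeKey, packages_count: last.packages_count, isInterpolated: true });
    }
  }
  return filled;
}

const filled = forwardFillDaily(
  [
    { timeKey: "2025-11-14", packages_count: 4 },
    { timeKey: "2025-11-16", packages_count: 2 },
  ],
  new Date("2025-11-13T00:00:00Z"),
  new Date("2025-11-16T00:00:00Z"),
);
```

Here 2025-11-13 is skipped (before any data), 2025-11-15 is forward-filled from the 11-14 value, and the two real points pass through untouched — the continuous line the chart needs, without inventing a leading zero.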
@@ -801,7 +870,7 @@ router.get(
// Get current package state for offline fallback
let currentPackageState = null;
if (hostId && hostId !== "all" && hostId !== "undefined") {
// Get current package counts for specific host
// For individual host, get current package counts from host_packages
const currentState = await prisma.host_packages.aggregate({
where: {
host_id: hostId,
@@ -832,34 +901,64 @@ router.get(
security_count: securityCount,
};
} else {
// Get current package counts for all hosts
// Total packages = count of unique packages installed on at least one host
const totalPackagesCount = await prisma.packages.count({
where: {
host_packages: {
some: {}, // At least one host has this package
// For "All Hosts" mode, use the latest system_statistics record if available
// Otherwise calculate from database
const latestStats = await prisma.system_statistics.findFirst({
orderBy: {
timestamp: "desc",
},
select: {
total_packages: true,
unique_packages_count: true,
unique_security_count: true,
timestamp: true,
},
});
if (latestStats) {
// Use latest system statistics (collected by scheduled job)
currentPackageState = {
total_packages: latestStats.total_packages,
packages_count: latestStats.unique_packages_count,
security_count: latestStats.unique_security_count,
};
} else {
// Fallback: calculate from database if no statistics collected yet
const totalPackagesCount = await prisma.packages.count({
where: {
host_packages: {
some: {}, // At least one host has this package
},
},
},
});
});
// Get counts for boolean fields separately
const outdatedCount = await prisma.host_packages.count({
where: {
needs_update: true,
},
});
const uniqueOutdatedCount = await prisma.packages.count({
where: {
host_packages: {
some: {
needs_update: true,
},
},
},
});
const securityCount = await prisma.host_packages.count({
where: {
is_security_update: true,
},
});
const uniqueSecurityCount = await prisma.packages.count({
where: {
host_packages: {
some: {
needs_update: true,
is_security_update: true,
},
},
},
});
currentPackageState = {
total_packages: totalPackagesCount,
packages_count: outdatedCount,
security_count: securityCount,
};
currentPackageState = {
total_packages: totalPackagesCount,
packages_count: uniqueOutdatedCount,
security_count: uniqueSecurityCount,
};
}
}
// Format data for chart
@@ -914,6 +1013,11 @@ router.get(
chartData.datasets[2].data.push(item.security_count);
});
// Replace the last label with "Now" to indicate current state
if (chartData.labels.length > 0) {
chartData.labels[chartData.labels.length - 1] = "Now";
}
// Calculate data quality metrics
const dataQuality = {
totalRecords: trendsData.length,


@@ -2,6 +2,7 @@ const express = require("express");
const { authenticateToken } = require("../middleware/auth");
const { getPrismaClient } = require("../config/prisma");
const { v4: uuidv4 } = require("uuid");
const { get_current_time, parse_date } = require("../utils/timezone");
const prisma = getPrismaClient();
const router = express.Router();
@@ -522,7 +523,8 @@ router.get("/updates", authenticateToken, async (req, res) => {
}
});
// POST /api/v1/docker/collect - Collect Docker data from agent
// POST /api/v1/docker/collect - Collect Docker data from agent (DEPRECATED - kept for backward compatibility)
// New agents should use POST /api/v1/integrations/docker
router.post("/collect", async (req, res) => {
try {
const { apiId, apiKey, containers, images, updates } = req.body;
@@ -536,14 +538,7 @@ router.post("/collect", async (req, res) => {
return res.status(401).json({ error: "Invalid API credentials" });
}
const now = new Date();
// Helper function to validate and parse dates
const parseDate = (dateString) => {
if (!dateString) return now;
const date = new Date(dateString);
return Number.isNaN(date.getTime()) ? now : date;
};
const now = get_current_time();
// Process containers
if (containers && Array.isArray(containers)) {
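This hunk swaps the inline `parseDate` helper for the shared `parse_date` from `utils/timezone`. The removed helper's validate-or-fallback shape is visible above; the shared utility presumably generalizes it by taking the fallback as a parameter (its actual implementation is not shown in this diff). A sketch of that shape:

```javascript
// Validate-or-fallback date parsing, mirroring the removed inline parseDate;
// the real parse_date in utils/timezone may differ (timezone handling not shown).
function parseDateOrFallback(dateString, fallback) {
  if (!dateString) return fallback;
  const date = new Date(dateString);
  return Number.isNaN(date.getTime()) ? fallback : date;
}

const now = new Date("2025-11-15T00:00:00Z");
const good = parseDateOrFallback("2025-11-14T12:00:00Z", now);
const bad = parseDateOrFallback("not-a-date", now); // invalid string → fallback
const missing = parseDateOrFallback(undefined, null); // absent → fallback (null here)
```

A parameterized fallback explains the two call styles in the diff: `parse_date(x, now)` for required timestamps and `parse_date(x, null)` for optional ones like `started_at`.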
@@ -571,7 +566,8 @@ router.post("/collect", async (req, res) => {
tag: containerData.image_tag,
image_id: containerData.image_id || "unknown",
source: containerData.image_source || "docker-hub",
created_at: parseDate(containerData.created_at),
created_at: parse_date(containerData.created_at, now),
last_checked: now,
updated_at: now,
},
});
@@ -595,7 +591,7 @@ router.post("/collect", async (req, res) => {
state: containerData.state,
ports: containerData.ports || null,
started_at: containerData.started_at
? parseDate(containerData.started_at)
? parse_date(containerData.started_at, null)
: null,
updated_at: now,
last_checked: now,
@@ -611,9 +607,9 @@ router.post("/collect", async (req, res) => {
status: containerData.status,
state: containerData.state,
ports: containerData.ports || null,
created_at: parseDate(containerData.created_at),
created_at: parse_date(containerData.created_at, now),
started_at: containerData.started_at
? parseDate(containerData.started_at)
? parse_date(containerData.started_at, null)
: null,
updated_at: now,
},
@@ -649,7 +645,7 @@ router.post("/collect", async (req, res) => {
? BigInt(imageData.size_bytes)
: null,
source: imageData.source || "docker-hub",
created_at: parseDate(imageData.created_at),
created_at: parse_date(imageData.created_at, now),
updated_at: now,
},
});
@@ -745,6 +741,490 @@ router.post("/collect", async (req, res) => {
}
});
// POST /api/v1/integrations/docker - New integration endpoint for Docker data collection
router.post("/../integrations/docker", async (req, res) => {
try {
const apiId = req.headers["x-api-id"];
const apiKey = req.headers["x-api-key"];
const {
containers,
images,
updates,
daemon_info: _daemon_info,
hostname,
machine_id,
agent_version: _agent_version,
} = req.body;
console.log(
`[Docker Integration] Received data from ${hostname || machine_id}`,
);
// Validate API credentials
const host = await prisma.hosts.findFirst({
where: { api_id: apiId, api_key: apiKey },
});
if (!host) {
console.warn("[Docker Integration] Invalid API credentials");
return res.status(401).json({ error: "Invalid API credentials" });
}
console.log(
`[Docker Integration] Processing for host: ${host.friendly_name}`,
);
const now = get_current_time();
let containersProcessed = 0;
let imagesProcessed = 0;
let updatesProcessed = 0;
// Process containers
if (containers && Array.isArray(containers)) {
console.log(
`[Docker Integration] Processing ${containers.length} containers`,
);
for (const containerData of containers) {
const containerId = uuidv4();
// Find or create image
let imageId = null;
if (containerData.image_repository && containerData.image_tag) {
const image = await prisma.docker_images.upsert({
where: {
repository_tag_image_id: {
repository: containerData.image_repository,
tag: containerData.image_tag,
image_id: containerData.image_id || "unknown",
},
},
update: {
last_checked: now,
updated_at: now,
},
create: {
id: uuidv4(),
repository: containerData.image_repository,
tag: containerData.image_tag,
image_id: containerData.image_id || "unknown",
source: containerData.image_source || "docker-hub",
created_at: parse_date(containerData.created_at, now),
last_checked: now,
updated_at: now,
},
});
imageId = image.id;
}
// Upsert container
await prisma.docker_containers.upsert({
where: {
host_id_container_id: {
host_id: host.id,
container_id: containerData.container_id,
},
},
update: {
name: containerData.name,
image_id: imageId,
image_name: containerData.image_name,
image_tag: containerData.image_tag || "latest",
status: containerData.status,
state: containerData.state || containerData.status,
ports: containerData.ports || null,
started_at: containerData.started_at
? parse_date(containerData.started_at, null)
: null,
updated_at: now,
last_checked: now,
},
create: {
id: containerId,
host_id: host.id,
container_id: containerData.container_id,
name: containerData.name,
image_id: imageId,
image_name: containerData.image_name,
image_tag: containerData.image_tag || "latest",
status: containerData.status,
state: containerData.state || containerData.status,
ports: containerData.ports || null,
created_at: parse_date(containerData.created_at, now),
started_at: containerData.started_at
? parse_date(containerData.started_at, null)
: null,
updated_at: now,
},
});
containersProcessed++;
}
}
// Process standalone images
if (images && Array.isArray(images)) {
console.log(`[Docker Integration] Processing ${images.length} images`);
for (const imageData of images) {
// If image has no digest, it's likely locally built - override source to "local"
const imageSource =
!imageData.digest || imageData.digest.trim() === ""
? "local"
: imageData.source || "docker-hub";
await prisma.docker_images.upsert({
where: {
repository_tag_image_id: {
repository: imageData.repository,
tag: imageData.tag,
image_id: imageData.image_id,
},
},
update: {
size_bytes: imageData.size_bytes
? BigInt(imageData.size_bytes)
: null,
digest: imageData.digest || null,
source: imageSource, // Update source in case it changed
last_checked: now,
updated_at: now,
},
create: {
id: uuidv4(),
repository: imageData.repository,
tag: imageData.tag,
image_id: imageData.image_id,
digest: imageData.digest,
size_bytes: imageData.size_bytes
? BigInt(imageData.size_bytes)
: null,
source: imageSource,
created_at: parse_date(imageData.created_at, now),
last_checked: now,
updated_at: now,
},
});
imagesProcessed++;
}
}
// Process updates
if (updates && Array.isArray(updates)) {
console.log(`[Docker Integration] Processing ${updates.length} updates`);
for (const updateData of updates) {
// Find the image by repository and image_id
const image = await prisma.docker_images.findFirst({
where: {
repository: updateData.repository,
tag: updateData.current_tag,
image_id: updateData.image_id,
},
});
if (image) {
// Store digest info in changelog_url field as JSON
const digestInfo = JSON.stringify({
method: "digest_comparison",
current_digest: updateData.current_digest,
available_digest: updateData.available_digest,
});
// Upsert the update record
await prisma.docker_image_updates.upsert({
where: {
image_id_available_tag: {
image_id: image.id,
available_tag: updateData.available_tag,
},
},
update: {
updated_at: now,
changelog_url: digestInfo,
severity: "digest_changed",
},
create: {
id: uuidv4(),
image_id: image.id,
current_tag: updateData.current_tag,
available_tag: updateData.available_tag,
severity: "digest_changed",
changelog_url: digestInfo,
updated_at: now,
},
});
updatesProcessed++;
}
}
}
console.log(
`[Docker Integration] Successfully processed: ${containersProcessed} containers, ${imagesProcessed} images, ${updatesProcessed} updates`,
);
res.json({
message: "Docker data collected successfully",
containers_received: containersProcessed,
images_received: imagesProcessed,
updates_found: updatesProcessed,
});
} catch (error) {
console.error("[Docker Integration] Error collecting Docker data:", error);
console.error("[Docker Integration] Error stack:", error.stack);
res.status(500).json({
error: "Failed to collect Docker data",
message: error.message,
details: process.env.NODE_ENV === "development" ? error.stack : undefined,
});
}
});
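The `digest_comparison` method recorded above reduces to a string inequality between the locally pinned digest and the registry's, with the pair serialized into `changelog_url` as JSON. A minimal sketch of that decision:

```javascript
// An image update is flagged when the registry digest differs from the local
// one (sketch of the digest_comparison method; digest values are illustrative).
function digestUpdate(updateData) {
  if (!updateData.current_digest || !updateData.available_digest) return null;
  if (updateData.current_digest === updateData.available_digest) return null;
  return {
    severity: "digest_changed",
    // Digest pair stored as JSON in the changelog_url field, as in the route above.
    changelog_url: JSON.stringify({
      method: "digest_comparison",
      current_digest: updateData.current_digest,
      available_digest: updateData.available_digest,
    }),
  };
}

const changed = digestUpdate({
  current_digest: "sha256:aaa",
  available_digest: "sha256:bbb",
});
const same = digestUpdate({
  current_digest: "sha256:aaa",
  available_digest: "sha256:aaa",
});
```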
// DELETE /api/v1/docker/containers/:id - Delete a container
router.delete("/containers/:id", authenticateToken, async (req, res) => {
try {
const { id } = req.params;
// Check if container exists
const container = await prisma.docker_containers.findUnique({
where: { id },
});
if (!container) {
return res.status(404).json({ error: "Container not found" });
}
// Delete the container
await prisma.docker_containers.delete({
where: { id },
});
console.log(`🗑️ Deleted container: ${container.name} (${id})`);
res.json({
success: true,
message: `Container ${container.name} deleted successfully`,
});
} catch (error) {
console.error("Error deleting container:", error);
res.status(500).json({ error: "Failed to delete container" });
}
});
// DELETE /api/v1/docker/images/:id - Delete an image
router.delete("/images/:id", authenticateToken, async (req, res) => {
try {
const { id } = req.params;
// Check if image exists
const image = await prisma.docker_images.findUnique({
where: { id },
include: {
_count: {
select: {
docker_containers: true,
},
},
},
});
if (!image) {
return res.status(404).json({ error: "Image not found" });
}
// Check if image is in use by containers
if (image._count.docker_containers > 0) {
return res.status(400).json({
error: `Cannot delete image: ${image._count.docker_containers} container(s) are using this image`,
containersCount: image._count.docker_containers,
});
}
// Delete image updates first
await prisma.docker_image_updates.deleteMany({
where: { image_id: id },
});
// Delete the image
await prisma.docker_images.delete({
where: { id },
});
console.log(`🗑️ Deleted image: ${image.repository}:${image.tag} (${id})`);
res.json({
success: true,
message: `Image ${image.repository}:${image.tag} deleted successfully`,
});
} catch (error) {
console.error("Error deleting image:", error);
res.status(500).json({ error: "Failed to delete image" });
}
});
// GET /api/v1/docker/volumes - Get all volumes with filters
router.get("/volumes", authenticateToken, async (req, res) => {
try {
const { driver, search, page = 1, limit = 50 } = req.query;
const where = {};
if (driver) where.driver = driver;
if (search) {
where.OR = [{ name: { contains: search, mode: "insensitive" } }];
}
const skip = (parseInt(page, 10) - 1) * parseInt(limit, 10);
const take = parseInt(limit, 10);
const [volumes, total] = await Promise.all([
prisma.docker_volumes.findMany({
where,
include: {
hosts: {
select: {
id: true,
friendly_name: true,
hostname: true,
ip: true,
},
},
},
orderBy: { updated_at: "desc" },
skip,
take,
}),
prisma.docker_volumes.count({ where }),
]);
res.json(
convertBigIntToString({
volumes,
pagination: {
page: parseInt(page, 10),
limit: parseInt(limit, 10),
total,
totalPages: Math.ceil(total / parseInt(limit, 10)),
},
}),
);
} catch (error) {
console.error("Error fetching volumes:", error);
res.status(500).json({ error: "Failed to fetch volumes" });
}
});
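`convertBigIntToString` is called here but not defined in this diff. Since `JSON.stringify` throws a `TypeError` on `BigInt` values (such as the `size_bytes` field), it presumably walks the payload and stringifies them. A plausible, hypothetical sketch:

```javascript
// Hypothetical sketch of convertBigIntToString (its real definition is not
// shown in this diff): recursively replace BigInt values so the payload
// survives JSON.stringify in res.json.
function convertBigIntToString(value) {
  if (typeof value === "bigint") return value.toString();
  if (Array.isArray(value)) return value.map(convertBigIntToString);
  if (value && typeof value === "object" && !(value instanceof Date)) {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, convertBigIntToString(v)]),
    );
  }
  return value;
}

const out = convertBigIntToString({ volumes: [{ size_bytes: 123n }], total: 2 });
```

The `Date` guard matters in a sketch like this: Prisma timestamps are `Date` objects, and flattening them into plain objects would break their JSON serialization.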
// GET /api/v1/docker/volumes/:id - Get volume detail
router.get("/volumes/:id", authenticateToken, async (req, res) => {
try {
const { id } = req.params;
const volume = await prisma.docker_volumes.findUnique({
where: { id },
include: {
hosts: {
select: {
id: true,
friendly_name: true,
hostname: true,
ip: true,
os_type: true,
os_version: true,
},
},
},
});
if (!volume) {
return res.status(404).json({ error: "Volume not found" });
}
res.json(convertBigIntToString({ volume }));
} catch (error) {
console.error("Error fetching volume detail:", error);
res.status(500).json({ error: "Failed to fetch volume detail" });
}
});
// GET /api/v1/docker/networks - Get all networks with filters
router.get("/networks", authenticateToken, async (req, res) => {
try {
const { driver, search, page = 1, limit = 50 } = req.query;
const where = {};
if (driver) where.driver = driver;
if (search) {
where.OR = [{ name: { contains: search, mode: "insensitive" } }];
}
const skip = (parseInt(page, 10) - 1) * parseInt(limit, 10);
const take = parseInt(limit, 10);
const [networks, total] = await Promise.all([
prisma.docker_networks.findMany({
where,
include: {
hosts: {
select: {
id: true,
friendly_name: true,
hostname: true,
ip: true,
},
},
},
orderBy: { updated_at: "desc" },
skip,
take,
}),
prisma.docker_networks.count({ where }),
]);
res.json(
convertBigIntToString({
networks,
pagination: {
page: parseInt(page, 10),
limit: parseInt(limit, 10),
total,
totalPages: Math.ceil(total / parseInt(limit, 10)),
},
}),
);
} catch (error) {
console.error("Error fetching networks:", error);
res.status(500).json({ error: "Failed to fetch networks" });
}
});
// GET /api/v1/docker/networks/:id - Get network detail
router.get("/networks/:id", authenticateToken, async (req, res) => {
try {
const { id } = req.params;
const network = await prisma.docker_networks.findUnique({
where: { id },
include: {
hosts: {
select: {
id: true,
friendly_name: true,
hostname: true,
ip: true,
os_type: true,
os_version: true,
},
},
},
});
if (!network) {
return res.status(404).json({ error: "Network not found" });
}
res.json(convertBigIntToString({ network }));
} catch (error) {
console.error("Error fetching network detail:", error);
res.status(500).json({ error: "Failed to fetch network detail" });
}
});
// GET /api/v1/docker/agent - Serve the Docker agent installation script
router.get("/agent", async (_req, res) => {
try {
@@ -776,4 +1256,66 @@ router.get("/agent", async (_req, res) => {
}
});
// DELETE /api/v1/docker/volumes/:id - Delete a volume
router.delete("/volumes/:id", authenticateToken, async (req, res) => {
try {
const { id } = req.params;
// Check if volume exists
const volume = await prisma.docker_volumes.findUnique({
where: { id },
});
if (!volume) {
return res.status(404).json({ error: "Volume not found" });
}
// Delete the volume
await prisma.docker_volumes.delete({
where: { id },
});
console.log(`🗑️ Deleted volume: ${volume.name} (${id})`);
res.json({
success: true,
message: `Volume ${volume.name} deleted successfully`,
});
} catch (error) {
console.error("Error deleting volume:", error);
res.status(500).json({ error: "Failed to delete volume" });
}
});
// DELETE /api/v1/docker/networks/:id - Delete a network
router.delete("/networks/:id", authenticateToken, async (req, res) => {
try {
const { id } = req.params;
// Check if network exists
const network = await prisma.docker_networks.findUnique({
where: { id },
});
if (!network) {
return res.status(404).json({ error: "Network not found" });
}
// Delete the network
await prisma.docker_networks.delete({
where: { id },
});
console.log(`🗑️ Deleted network: ${network.name} (${id})`);
res.json({
success: true,
message: `Network ${network.name} deleted successfully`,
});
} catch (error) {
console.error("Error deleting network:", error);
res.status(500).json({ error: "Failed to delete network" });
}
});
module.exports = router;


@@ -1,113 +1,12 @@
const express = require("express");
const { getPrismaClient } = require("../config/prisma");
const bcrypt = require("bcryptjs");
const { authenticateApiToken } = require("../middleware/apiAuth");
const router = express.Router();
const prisma = getPrismaClient();
// Middleware to authenticate API key
const authenticateApiKey = async (req, res, next) => {
try {
const authHeader = req.headers.authorization;
if (!authHeader || !authHeader.startsWith("Basic ")) {
return res
.status(401)
.json({ error: "Missing or invalid authorization header" });
}
// Decode base64 credentials
const base64Credentials = authHeader.split(" ")[1];
const credentials = Buffer.from(base64Credentials, "base64").toString(
"ascii",
);
const [apiKey, apiSecret] = credentials.split(":");
if (!apiKey || !apiSecret) {
return res.status(401).json({ error: "Invalid credentials format" });
}
// Find the token in database
const token = await prisma.auto_enrollment_tokens.findUnique({
where: { token_key: apiKey },
include: {
users: {
select: {
id: true,
username: true,
role: true,
},
},
},
});
if (!token) {
console.log(`API key not found: ${apiKey}`);
return res.status(401).json({ error: "Invalid API key" });
}
// Check if token is active
if (!token.is_active) {
return res.status(401).json({ error: "API key is disabled" });
}
// Check if token has expired
if (token.expires_at && new Date(token.expires_at) < new Date()) {
return res.status(401).json({ error: "API key has expired" });
}
// Check if token is for gethomepage integration
if (token.metadata?.integration_type !== "gethomepage") {
return res.status(401).json({ error: "Invalid API key type" });
}
// Verify the secret
const isValidSecret = await bcrypt.compare(apiSecret, token.token_secret);
if (!isValidSecret) {
return res.status(401).json({ error: "Invalid API secret" });
}
// Check IP restrictions if any
if (token.allowed_ip_ranges && token.allowed_ip_ranges.length > 0) {
const clientIp = req.ip || req.connection.remoteAddress;
const forwardedFor = req.headers["x-forwarded-for"];
const realIp = req.headers["x-real-ip"];
// Get the actual client IP (considering proxies)
const actualClientIp = forwardedFor
? forwardedFor.split(",")[0].trim()
: realIp || clientIp;
const isAllowedIp = token.allowed_ip_ranges.some((range) => {
// Simple IP range check (can be enhanced for CIDR support)
return actualClientIp.startsWith(range) || actualClientIp === range;
});
if (!isAllowedIp) {
console.log(
`IP validation failed. Client IP: ${actualClientIp}, Allowed ranges: ${token.allowed_ip_ranges.join(", ")}`,
);
return res.status(403).json({ error: "IP address not allowed" });
}
}
// Update last used timestamp
await prisma.auto_enrollment_tokens.update({
where: { id: token.id },
data: { last_used_at: new Date() },
});
// Attach token info to request
req.apiToken = token;
next();
} catch (error) {
console.error("API key authentication error:", error);
res.status(500).json({ error: "Authentication failed" });
}
};
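The header parsing at the top of the middleware can be exercised on its own. Note that the sketch below splits on the first colon only — a hypothetical refinement, since the middleware's plain `split(":")` would truncate a secret that itself contains a colon:

```javascript
// Parse an HTTP Basic Authorization header into [apiKey, apiSecret].
// Splits on the first ":" only, so secrets containing colons survive.
function parseBasicAuth(authHeader) {
  if (!authHeader || !authHeader.startsWith("Basic ")) return null;
  const decoded = Buffer.from(authHeader.split(" ")[1], "base64").toString("utf8");
  const sep = decoded.indexOf(":");
  if (sep === -1) return null;
  return [decoded.slice(0, sep), decoded.slice(sep + 1)];
}
```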
// Get homepage widget statistics
- router.get("/stats", authenticateApiKey, async (_req, res) => {
+ router.get("/stats", authenticateApiToken("gethomepage"), async (_req, res) => {
try {
// Get total hosts count
const totalHosts = await prisma.hosts.count({
@@ -235,7 +134,7 @@ router.get("/stats", authenticateApiKey, async (_req, res) => {
});
// Health check endpoint for the API
- router.get("/health", authenticateApiKey, async (req, res) => {
+ router.get("/health", authenticateApiToken("gethomepage"), async (req, res) => {
res.json({
status: "ok",
timestamp: new Date().toISOString(),

@@ -24,7 +24,15 @@ router.get("/", authenticateToken, async (_req, res) => {
},
});
- res.json(hostGroups);
+ // Transform the count field to match frontend expectations
+ const transformedGroups = hostGroups.map((group) => ({
+ ...group,
+ _count: {
+ hosts: group._count.host_group_memberships,
+ },
+ }));
+ res.json(transformedGroups);
} catch (error) {
console.error("Error fetching host groups:", error);
res.status(500).json({ error: "Failed to fetch host groups" });
@@ -51,6 +59,7 @@ router.get("/:id", authenticateToken, async (req, res) => {
os_version: true,
status: true,
last_update: true,
needs_reboot: true,
},
},
},
@@ -251,6 +260,7 @@ router.get("/:id/hosts", authenticateToken, async (req, res) => {
status: true,
last_update: true,
created_at: true,
needs_reboot: true,
},
orderBy: {
friendly_name: "asc",

@@ -10,10 +10,18 @@ const {
requireManageHosts,
requireManageSettings,
} = require("../middleware/permissions");
const { queueManager, QUEUE_NAMES } = require("../services/automation");
const { pushIntegrationToggle, isConnected } = require("../services/agentWs");
const agentVersionService = require("../services/agentVersionService");
const { compareVersions } = require("../services/automation/shared/utils");
const router = express.Router();
const prisma = getPrismaClient();
// In-memory cache for integration states (api_id -> { integration_name -> enabled })
// This stores the last known state from successful toggles
const integrationStateCache = new Map();
// Secure endpoint to download the agent script/binary (requires API authentication)
router.get("/agent/download", async (req, res) => {
try {
@@ -127,9 +135,6 @@ router.get("/agent/version", async (req, res) => {
try {
const fs = require("node:fs");
const path = require("node:path");
const { exec } = require("node:child_process");
const { promisify } = require("node:util");
const execAsync = promisify(exec);
// Get architecture parameter (default to amd64 for Go agents)
const architecture = req.query.arch || "amd64";
@@ -164,53 +169,110 @@ router.get("/agent/version", async (req, res) => {
minServerVersion: null,
});
} else {
// Go agent version check (binary)
const binaryName = `patchmon-agent-linux-${architecture}`;
const binaryPath = path.join(__dirname, "../../../agents", binaryName);
// Go agent version check
// Detect server architecture and map to Go architecture names
const os = require("node:os");
const { exec } = require("node:child_process");
const { promisify } = require("node:util");
const execAsync = promisify(exec);
if (!fs.existsSync(binaryPath)) {
return res.status(404).json({
error: `Go agent binary not found for architecture: ${architecture}`,
});
const serverArch = os.arch();
// Map Node.js architecture to Go architecture names
const archMap = {
x64: "amd64",
ia32: "386",
arm64: "arm64",
arm: "arm",
};
const serverGoArch = archMap[serverArch] || serverArch;
// If requested architecture matches server architecture, execute the binary
if (architecture === serverGoArch) {
const binaryName = `patchmon-agent-linux-${architecture}`;
const binaryPath = path.join(__dirname, "../../../agents", binaryName);
if (!fs.existsSync(binaryPath)) {
// Binary doesn't exist, fall back to GitHub
console.log(`Binary ${binaryName} not found, falling back to GitHub`);
} else {
// Execute the binary to get its version
try {
const { stdout } = await execAsync(`${binaryPath} --help`, {
timeout: 10000,
});
// Parse version from help output (e.g., "PatchMon Agent v1.3.1")
const versionMatch = stdout.match(
/PatchMon Agent v([0-9]+\.[0-9]+\.[0-9]+)/i,
);
if (versionMatch) {
const serverVersion = versionMatch[1];
const agentVersion = req.query.currentVersion || serverVersion;
// Proper semantic version comparison: only update if server version is NEWER
const hasUpdate =
compareVersions(serverVersion, agentVersion) > 0;
return res.json({
currentVersion: agentVersion,
latestVersion: serverVersion,
hasUpdate: hasUpdate,
downloadUrl: `/api/v1/hosts/agent/download?arch=${architecture}`,
releaseNotes: `PatchMon Agent v${serverVersion}`,
minServerVersion: null,
architecture: architecture,
agentType: "go",
});
}
} catch (execError) {
// Execution failed, fall back to GitHub
console.log(
`Failed to execute binary ${binaryName}: ${execError.message}, falling back to GitHub`,
);
}
}
}
// Execute the binary to get its version
// Fall back to GitHub if architecture doesn't match or binary execution failed
try {
const { stdout } = await execAsync(`${binaryPath} --help`, {
timeout: 10000,
});
const versionInfo = await agentVersionService.getVersionInfo();
const latestVersion = versionInfo.latestVersion;
const agentVersion =
req.query.currentVersion || latestVersion || "unknown";
// Parse version from help output (e.g., "PatchMon Agent v1.3.1")
const versionMatch = stdout.match(
/PatchMon Agent v([0-9]+\.[0-9]+\.[0-9]+)/i,
);
if (!versionMatch) {
return res.status(500).json({
error: "Could not extract version from agent binary",
if (!latestVersion) {
return res.status(503).json({
error: "Unable to determine latest version from GitHub releases",
currentVersion: agentVersion,
latestVersion: null,
hasUpdate: false,
});
}
const serverVersion = versionMatch[1];
const agentVersion = req.query.currentVersion || serverVersion;
// Simple version comparison (assuming semantic versioning)
const hasUpdate = agentVersion !== serverVersion;
// Proper semantic version comparison: only update if latest version is NEWER
const hasUpdate =
latestVersion !== null &&
compareVersions(latestVersion, agentVersion) > 0;
res.json({
currentVersion: agentVersion,
latestVersion: serverVersion,
latestVersion: latestVersion,
hasUpdate: hasUpdate,
downloadUrl: `/api/v1/hosts/agent/download?arch=${architecture}`,
releaseNotes: `PatchMon Agent v${serverVersion}`,
releaseNotes: `PatchMon Agent v${latestVersion}`,
minServerVersion: null,
architecture: architecture,
agentType: "go",
});
} catch (execError) {
console.error("Failed to execute agent binary:", execError.message);
} catch (serviceError) {
console.error(
"Failed to get version from agentVersionService:",
serviceError.message,
);
return res.status(500).json({
error: "Failed to get version from agent binary",
error: "Failed to get agent version from service",
details: serviceError.message,
});
}
}
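`compareVersions` is imported from `services/automation/shared/utils`. A plausible sketch of the contract the hunk above relies on — positive when the first argument is newer — though not necessarily the project's exact implementation:

```javascript
// Sketch of a semantic version comparator: >0 if a is newer than b,
// <0 if older, 0 if equal. Assumes plain dotted numeric versions.
function compareVersions(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] || 0) - (pb[i] || 0);
    if (diff !== 0) return diff;
  }
  return 0;
}
```

This is why the hunk replaces the old `agentVersion !== serverVersion` check: plain inequality would also flag a downgrade (agent newer than server) as an available update.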
@@ -356,6 +418,26 @@ router.post(
});
} catch (error) {
console.error("Host creation error:", error);
// Check if error is related to connection pool exhaustion
if (
error.message &&
(error.message.includes("connection pool") ||
error.message.includes("Timed out fetching") ||
error.message.includes("pool timeout"))
) {
console.error("⚠️ DATABASE CONNECTION POOL EXHAUSTED!");
console.error(
`⚠️ Current limit: DB_CONNECTION_LIMIT=${process.env.DB_CONNECTION_LIMIT || "30"}`,
);
console.error(
`⚠️ Pool timeout: DB_POOL_TIMEOUT=${process.env.DB_POOL_TIMEOUT || "20"}s`,
);
console.error(
"⚠️ Suggestion: Increase DB_CONNECTION_LIMIT in your .env file",
);
}
res.status(500).json({ error: "Failed to create host" });
}
},
@@ -451,6 +533,14 @@ router.post(
.optional()
.isString()
.withMessage("Machine ID must be a string"),
body("needsReboot")
.optional()
.isBoolean()
.withMessage("Needs reboot must be a boolean"),
body("rebootReason")
.optional()
.isString()
.withMessage("Reboot reason must be a string"),
],
async (req, res) => {
try {
@@ -472,8 +562,11 @@ router.post(
updated_at: new Date(),
};
// Update machine_id if provided and current one is a placeholder
if (req.body.machineId && host.machine_id.startsWith("pending-")) {
// Update machine_id if provided and current one is a placeholder or null
if (
req.body.machineId &&
(host.machine_id === null || host.machine_id.startsWith("pending-"))
) {
updateData.machine_id = req.body.machineId;
}
@@ -511,6 +604,10 @@ router.post(
updateData.system_uptime = req.body.systemUptime;
if (req.body.loadAverage) updateData.load_average = req.body.loadAverage;
// Reboot Status
if (req.body.needsReboot !== undefined)
updateData.needs_reboot = req.body.needsReboot;
// If this is the first update (status is 'pending'), change to 'active'
if (host.status === "pending") {
updateData.status = "active";
@@ -786,19 +883,41 @@ router.get("/info", validateApiCredentials, async (req, res) => {
// Ping endpoint for health checks (now uses API credentials)
router.post("/ping", validateApiCredentials, async (req, res) => {
try {
// Update last update timestamp
const now = new Date();
const lastUpdate = req.hostRecord.last_update;
// Detect if this is an agent startup (first ping or after long absence)
const timeSinceLastUpdate = lastUpdate ? now - lastUpdate : null;
const isStartup =
!timeSinceLastUpdate || timeSinceLastUpdate > 5 * 60 * 1000; // 5 minutes
// Log agent startup
if (isStartup) {
console.log(
`🚀 Agent startup detected: ${req.hostRecord.friendly_name} (${req.hostRecord.hostname || req.hostRecord.api_id})`,
);
// Check if status was previously offline
if (req.hostRecord.status === "offline") {
console.log(`✅ Agent back online: ${req.hostRecord.friendly_name}`);
}
}
// Update last update timestamp and set status to active
await prisma.hosts.update({
where: { id: req.hostRecord.id },
data: {
last_update: new Date(),
updated_at: new Date(),
last_update: now,
updated_at: now,
status: "active",
},
});
const response = {
message: "Ping successful",
timestamp: new Date().toISOString(),
timestamp: now.toISOString(),
friendlyName: req.hostRecord.friendly_name,
agentStartup: isStartup,
};
// Check if this is a crontab update trigger
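The startup heuristic in `/ping` reduces to a pure function; five minutes is the threshold the route hard-codes:

```javascript
// Agent startup heuristic from /ping: no previous update, or a gap
// longer than five minutes, counts as a (re)start.
const STARTUP_GAP_MS = 5 * 60 * 1000;
function isAgentStartup(lastUpdate, now) {
  const gap = lastUpdate ? now - lastUpdate : null;
  return !gap || gap > STARTUP_GAP_MS;
}
```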
@@ -1345,6 +1464,66 @@ router.delete(
},
);
// Force immediate report from agent
router.post(
"/:hostId/fetch-report",
authenticateToken,
requireManageHosts,
async (req, res) => {
try {
const { hostId } = req.params;
// Get host to verify it exists
const host = await prisma.hosts.findUnique({
where: { id: hostId },
});
if (!host) {
return res.status(404).json({ error: "Host not found" });
}
// Get the agent-commands queue
const queue = queueManager.queues[QUEUE_NAMES.AGENT_COMMANDS];
if (!queue) {
return res.status(500).json({
error: "Queue not available",
});
}
// Add job to queue
const job = await queue.add(
"report_now",
{
api_id: host.api_id,
type: "report_now",
},
{
attempts: 3,
backoff: {
type: "exponential",
delay: 2000,
},
},
);
res.json({
success: true,
message: "Report fetch queued successfully",
jobId: job.id,
host: {
id: host.id,
friendlyName: host.friendly_name,
apiId: host.api_id,
},
});
} catch (error) {
console.error("Force fetch report error:", error);
res.status(500).json({ error: "Failed to fetch report" });
}
},
);
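Both queued commands retry with `attempts: 3` and an exponential backoff of 2000 ms. Under BullMQ's built-in exponential strategy the retry delays grow as roughly baseDelay · 2^(retry−1); exact rounding may differ between versions:

```javascript
// Retry schedule for attempts: 3, backoff { type: "exponential", delay: 2000 }.
// Approximates BullMQ's built-in exponential strategy.
function retryDelays(attempts, baseDelay) {
  const delays = [];
  for (let retry = 1; retry < attempts; retry++) {
    delays.push(baseDelay * 2 ** (retry - 1)); // delay before retry N
  }
  return delays;
}
```

With the configuration above, a failing `report_now` job would be retried after about 2 s and again after about 4 s before giving up.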
// Toggle agent auto-update setting
router.patch(
"/:hostId/auto-update",
@@ -1388,6 +1567,66 @@ router.patch(
},
);
// Force agent update for specific host
router.post(
"/:hostId/force-agent-update",
authenticateToken,
requireManageHosts,
async (req, res) => {
try {
const { hostId } = req.params;
// Get host to verify it exists
const host = await prisma.hosts.findUnique({
where: { id: hostId },
});
if (!host) {
return res.status(404).json({ error: "Host not found" });
}
// Get the agent-commands queue
const queue = queueManager.queues[QUEUE_NAMES.AGENT_COMMANDS];
if (!queue) {
return res.status(500).json({
error: "Queue not available",
});
}
// Add job to queue
const job = await queue.add(
"update_agent",
{
api_id: host.api_id,
type: "update_agent",
},
{
attempts: 3,
backoff: {
type: "exponential",
delay: 2000,
},
},
);
res.json({
success: true,
message: "Agent update queued successfully",
jobId: job.id,
host: {
id: host.id,
friendlyName: host.friendly_name,
apiId: host.api_id,
},
});
} catch (error) {
console.error("Force agent update error:", error);
res.status(500).json({ error: "Failed to force agent update" });
}
},
);
// Serve the installation script (requires API authentication)
router.get("/install", async (req, res) => {
try {
@@ -1441,28 +1680,34 @@ router.get("/install", async (req, res) => {
// Determine curl flags dynamically from settings (ignore self-signed)
let curlFlags = "-s";
let skipSSLVerify = "false";
try {
const settings = await prisma.settings.findFirst();
if (settings && settings.ignore_ssl_self_signed === true) {
curlFlags = "-sk";
skipSSLVerify = "true";
}
} catch (_) {}
// Check for --force parameter
const forceInstall = req.query.force === "true" || req.query.force === "1";
// Get architecture parameter (default to amd64)
const architecture = req.query.arch || "amd64";
// Get architecture parameter (only set if explicitly provided, otherwise let script auto-detect)
const architecture = req.query.arch;
// Inject the API credentials, server URL, curl flags, force flag, and architecture into the script
const envVars = `#!/bin/bash
// Inject the API credentials, server URL, curl flags, SSL verify flag, force flag, and architecture into the script
// Only set ARCHITECTURE if explicitly provided, otherwise let the script auto-detect
const archExport = architecture
? `export ARCHITECTURE="${architecture}"\n`
: "";
const envVars = `#!/bin/sh
export PATCHMON_URL="${serverUrl}"
export API_ID="${host.api_id}"
export API_KEY="${host.api_key}"
export CURL_FLAGS="${curlFlags}"
export SKIP_SSL_VERIFY="${skipSSLVerify}"
export FORCE_INSTALL="${forceInstall ? "true" : "false"}"
export ARCHITECTURE="${architecture}"
${archExport}
`;
// Remove the shebang from the original script and prepend our env vars
@@ -1481,47 +1726,7 @@ export ARCHITECTURE="${architecture}"
}
});
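The install route's injection step — prepend a `#!/bin/sh` header with exported credentials, then comment out the script's original shebang via `replace(/^#!/, "#")` — can be sketched as follows; `injectEnv` itself is a hypothetical name, the variable shapes mirror the route:

```javascript
// Prepend an env header to a shell script and neutralize its original
// shebang so the injected #!/bin/sh line wins.
function injectEnv(script, vars) {
  const exports = Object.entries(vars)
    .map(([k, v]) => `export ${k}="${v}"`)
    .join("\n");
  return `#!/bin/sh\n${exports}\n\n${script.replace(/^#!/, "#")}`;
}
```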
// Check if machine_id already exists (requires auth)
router.post("/check-machine-id", validateApiCredentials, async (req, res) => {
try {
const { machine_id } = req.body;
if (!machine_id) {
return res.status(400).json({
error: "machine_id is required",
});
}
// Check if a host with this machine_id exists
const existing_host = await prisma.hosts.findUnique({
where: { machine_id },
select: {
id: true,
friendly_name: true,
machine_id: true,
api_id: true,
status: true,
created_at: true,
},
});
if (existing_host) {
return res.status(200).json({
exists: true,
host: existing_host,
message: "This machine is already enrolled",
});
}
return res.status(200).json({
exists: false,
message: "Machine not yet enrolled",
});
} catch (error) {
console.error("Error checking machine_id:", error);
res.status(500).json({ error: "Failed to check machine_id" });
}
});
// Note: /check-machine-id endpoint removed - using config.yml checking method instead
// Serve the removal script (public endpoint - no authentication required)
router.get("/remove", async (_req, res) => {
@@ -1554,7 +1759,7 @@ router.get("/remove", async (_req, res) => {
} catch (_) {}
// Prepend environment for CURL_FLAGS so script can use it if needed
- const envPrefix = `#!/bin/bash\nexport CURL_FLAGS="${curlFlags}"\n\n`;
+ const envPrefix = `#!/bin/sh\nexport CURL_FLAGS="${curlFlags}"\n\n`;
script = script.replace(/^#!/, "#");
script = envPrefix + script;
@@ -1937,4 +2142,137 @@ router.patch(
},
);
// Get integration status for a host
router.get(
"/:hostId/integrations",
authenticateToken,
requireManageHosts,
async (req, res) => {
try {
const { hostId } = req.params;
// Get host to verify it exists
const host = await prisma.hosts.findUnique({
where: { id: hostId },
select: { id: true, api_id: true, friendly_name: true },
});
if (!host) {
return res.status(404).json({ error: "Host not found" });
}
// Check if agent is connected
const connected = isConnected(host.api_id);
// Get integration states from cache (or defaults if not cached)
// Default: all integrations are disabled
const cachedState = integrationStateCache.get(host.api_id) || {};
const integrations = {
docker: cachedState.docker || false, // Default: disabled
// Future integrations can be added here
};
res.json({
success: true,
data: {
integrations,
connected,
host: {
id: host.id,
friendlyName: host.friendly_name,
apiId: host.api_id,
},
},
});
} catch (error) {
console.error("Get integration status error:", error);
res.status(500).json({ error: "Failed to get integration status" });
}
},
);
// Toggle integration status for a host
router.post(
"/:hostId/integrations/:integrationName/toggle",
authenticateToken,
requireManageHosts,
[body("enabled").isBoolean().withMessage("Enabled status must be a boolean")],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const { hostId, integrationName } = req.params;
const { enabled } = req.body;
// Validate integration name
const validIntegrations = ["docker"]; // Add more as they're implemented
if (!validIntegrations.includes(integrationName)) {
return res.status(400).json({
error: "Invalid integration name",
validIntegrations,
});
}
// Get host to verify it exists
const host = await prisma.hosts.findUnique({
where: { id: hostId },
select: { id: true, api_id: true, friendly_name: true },
});
if (!host) {
return res.status(404).json({ error: "Host not found" });
}
// Check if agent is connected
if (!isConnected(host.api_id)) {
return res.status(503).json({
error: "Agent is not connected",
message:
"The agent must be connected via WebSocket to toggle integrations",
});
}
// Send WebSocket message to agent
const success = pushIntegrationToggle(
host.api_id,
integrationName,
enabled,
);
if (!success) {
return res.status(503).json({
error: "Failed to send integration toggle",
message: "Agent connection may have been lost",
});
}
// Update cache with new state
if (!integrationStateCache.has(host.api_id)) {
integrationStateCache.set(host.api_id, {});
}
integrationStateCache.get(host.api_id)[integrationName] = enabled;
res.json({
success: true,
message: `Integration ${integrationName} ${enabled ? "enabled" : "disabled"} successfully`,
data: {
integration: integrationName,
enabled,
host: {
id: host.id,
friendlyName: host.friendly_name,
apiId: host.api_id,
},
},
});
} catch (error) {
console.error("Toggle integration error:", error);
res.status(500).json({ error: "Failed to toggle integration" });
}
},
);
module.exports = router;

@@ -0,0 +1,356 @@
const express = require("express");
const { getPrismaClient } = require("../config/prisma");
const { v4: uuidv4 } = require("uuid");
const prisma = getPrismaClient();
const router = express.Router();
// POST /api/v1/integrations/docker - Docker data collection endpoint
router.post("/docker", async (req, res) => {
try {
const apiId = req.headers["x-api-id"];
const apiKey = req.headers["x-api-key"];
const {
containers,
images,
volumes,
networks,
updates,
daemon_info: _daemon_info,
hostname,
machine_id,
agent_version: _agent_version,
} = req.body;
console.log(
`[Docker Integration] Received data from ${hostname || machine_id}`,
);
// Validate API credentials
const host = await prisma.hosts.findFirst({
where: { api_id: apiId, api_key: apiKey },
});
if (!host) {
console.warn("[Docker Integration] Invalid API credentials");
return res.status(401).json({ error: "Invalid API credentials" });
}
console.log(
`[Docker Integration] Processing for host: ${host.friendly_name}`,
);
const now = new Date();
// Helper function to validate and parse dates
const parseDate = (dateString) => {
if (!dateString) return now;
const date = new Date(dateString);
return Number.isNaN(date.getTime()) ? now : date;
};
let containersProcessed = 0;
let imagesProcessed = 0;
let volumesProcessed = 0;
let networksProcessed = 0;
let updatesProcessed = 0;
// Process containers
if (containers && Array.isArray(containers)) {
console.log(
`[Docker Integration] Processing ${containers.length} containers`,
);
for (const containerData of containers) {
const containerId = uuidv4();
// Find or create image
let imageId = null;
if (containerData.image_repository && containerData.image_tag) {
const image = await prisma.docker_images.upsert({
where: {
repository_tag_image_id: {
repository: containerData.image_repository,
tag: containerData.image_tag,
image_id: containerData.image_id || "unknown",
},
},
update: {
last_checked: now,
updated_at: now,
},
create: {
id: uuidv4(),
repository: containerData.image_repository,
tag: containerData.image_tag,
image_id: containerData.image_id || "unknown",
source: containerData.image_source || "docker-hub",
created_at: parseDate(containerData.created_at),
updated_at: now,
},
});
imageId = image.id;
}
// Upsert container
await prisma.docker_containers.upsert({
where: {
host_id_container_id: {
host_id: host.id,
container_id: containerData.container_id,
},
},
update: {
name: containerData.name,
image_id: imageId,
image_name: containerData.image_name,
image_tag: containerData.image_tag || "latest",
status: containerData.status,
state: containerData.state || containerData.status,
ports: containerData.ports || null,
started_at: containerData.started_at
? parseDate(containerData.started_at)
: null,
updated_at: now,
last_checked: now,
},
create: {
id: containerId,
host_id: host.id,
container_id: containerData.container_id,
name: containerData.name,
image_id: imageId,
image_name: containerData.image_name,
image_tag: containerData.image_tag || "latest",
status: containerData.status,
state: containerData.state || containerData.status,
ports: containerData.ports || null,
created_at: parseDate(containerData.created_at),
started_at: containerData.started_at
? parseDate(containerData.started_at)
: null,
updated_at: now,
},
});
containersProcessed++;
}
}
// Process standalone images
if (images && Array.isArray(images)) {
console.log(`[Docker Integration] Processing ${images.length} images`);
for (const imageData of images) {
await prisma.docker_images.upsert({
where: {
repository_tag_image_id: {
repository: imageData.repository,
tag: imageData.tag,
image_id: imageData.image_id,
},
},
update: {
size_bytes: imageData.size_bytes
? BigInt(imageData.size_bytes)
: null,
digest: imageData.digest || null,
last_checked: now,
updated_at: now,
},
create: {
id: uuidv4(),
repository: imageData.repository,
tag: imageData.tag,
image_id: imageData.image_id,
digest: imageData.digest,
size_bytes: imageData.size_bytes
? BigInt(imageData.size_bytes)
: null,
source: imageData.source || "docker-hub",
created_at: parseDate(imageData.created_at),
updated_at: now,
},
});
imagesProcessed++;
}
}
// Process volumes
if (volumes && Array.isArray(volumes)) {
console.log(`[Docker Integration] Processing ${volumes.length} volumes`);
for (const volumeData of volumes) {
await prisma.docker_volumes.upsert({
where: {
host_id_volume_id: {
host_id: host.id,
volume_id: volumeData.volume_id,
},
},
update: {
name: volumeData.name,
driver: volumeData.driver || "local",
mountpoint: volumeData.mountpoint || null,
renderer: volumeData.renderer || null,
scope: volumeData.scope || "local",
labels: volumeData.labels || null,
options: volumeData.options || null,
size_bytes: volumeData.size_bytes
? BigInt(volumeData.size_bytes)
: null,
ref_count: volumeData.ref_count || 0,
updated_at: now,
last_checked: now,
},
create: {
id: uuidv4(),
host_id: host.id,
volume_id: volumeData.volume_id,
name: volumeData.name,
driver: volumeData.driver || "local",
mountpoint: volumeData.mountpoint || null,
renderer: volumeData.renderer || null,
scope: volumeData.scope || "local",
labels: volumeData.labels || null,
options: volumeData.options || null,
size_bytes: volumeData.size_bytes
? BigInt(volumeData.size_bytes)
: null,
ref_count: volumeData.ref_count || 0,
created_at: parseDate(volumeData.created_at),
updated_at: now,
},
});
volumesProcessed++;
}
}
// Process networks
if (networks && Array.isArray(networks)) {
console.log(
`[Docker Integration] Processing ${networks.length} networks`,
);
for (const networkData of networks) {
await prisma.docker_networks.upsert({
where: {
host_id_network_id: {
host_id: host.id,
network_id: networkData.network_id,
},
},
update: {
name: networkData.name,
driver: networkData.driver,
scope: networkData.scope || "local",
ipv6_enabled: networkData.ipv6_enabled || false,
internal: networkData.internal || false,
attachable:
networkData.attachable !== undefined
? networkData.attachable
: true,
ingress: networkData.ingress || false,
config_only: networkData.config_only || false,
labels: networkData.labels || null,
ipam: networkData.ipam || null,
container_count: networkData.container_count || 0,
updated_at: now,
last_checked: now,
},
create: {
id: uuidv4(),
host_id: host.id,
network_id: networkData.network_id,
name: networkData.name,
driver: networkData.driver,
scope: networkData.scope || "local",
ipv6_enabled: networkData.ipv6_enabled || false,
internal: networkData.internal || false,
attachable:
networkData.attachable !== undefined
? networkData.attachable
: true,
ingress: networkData.ingress || false,
config_only: networkData.config_only || false,
labels: networkData.labels || null,
ipam: networkData.ipam || null,
container_count: networkData.container_count || 0,
created_at: networkData.created_at
? parseDate(networkData.created_at)
: null,
updated_at: now,
},
});
networksProcessed++;
}
}
// Process updates
if (updates && Array.isArray(updates)) {
console.log(`[Docker Integration] Processing ${updates.length} updates`);
for (const updateData of updates) {
// Find the image by repository and image_id
const image = await prisma.docker_images.findFirst({
where: {
repository: updateData.repository,
tag: updateData.current_tag,
image_id: updateData.image_id,
},
});
if (image) {
// Store digest info in changelog_url field as JSON
const digestInfo = JSON.stringify({
method: "digest_comparison",
current_digest: updateData.current_digest,
available_digest: updateData.available_digest,
});
// Upsert the update record
await prisma.docker_image_updates.upsert({
where: {
image_id_available_tag: {
image_id: image.id,
available_tag: updateData.available_tag,
},
},
update: {
updated_at: now,
changelog_url: digestInfo,
severity: "digest_changed",
},
create: {
id: uuidv4(),
image_id: image.id,
current_tag: updateData.current_tag,
available_tag: updateData.available_tag,
severity: "digest_changed",
changelog_url: digestInfo,
updated_at: now,
},
});
updatesProcessed++;
}
}
}
console.log(
`[Docker Integration] Successfully processed: ${containersProcessed} containers, ${imagesProcessed} images, ${volumesProcessed} volumes, ${networksProcessed} networks, ${updatesProcessed} updates`,
);
res.json({
message: "Docker data collected successfully",
containers_received: containersProcessed,
images_received: imagesProcessed,
volumes_received: volumesProcessed,
networks_received: networksProcessed,
updates_found: updatesProcessed,
});
} catch (error) {
console.error("[Docker Integration] Error collecting Docker data:", error);
console.error("[Docker Integration] Error stack:", error.stack);
res.status(500).json({
error: "Failed to collect Docker data",
message: error.message,
details: process.env.NODE_ENV === "development" ? error.stack : undefined,
});
}
});
module.exports = router;
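The `updates` loop above keys each update record on `(image_id, available_tag)` and stashes the digest pair as JSON in `changelog_url`. The digest-comparison idea behind it can be sketched as two small helpers (hypothetical names, not exports of the route — `imageUpdate` mirrors the agent payload the handler consumes):

```javascript
// Sketch: an image has an update when its locally-running digest differs
// from the digest currently published for the tag.
function hasDigestChanged(imageUpdate) {
  const { current_digest, available_digest } = imageUpdate;
  // Treat missing digests as "no update" rather than guessing.
  if (!current_digest || !available_digest) return false;
  return current_digest !== available_digest;
}

// The route persists both digests as JSON in the changelog_url column.
function buildDigestInfo(imageUpdate) {
  return JSON.stringify({
    method: "digest_comparison",
    current_digest: imageUpdate.current_digest,
    available_digest: imageUpdate.available_digest,
  });
}
```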


@@ -0,0 +1,148 @@
const express = require("express");
const { body, validationResult } = require("express-validator");
const { v4: uuidv4 } = require("uuid");
const { authenticateToken } = require("../middleware/auth");
const { requireManageSettings } = require("../middleware/permissions");
const { getSettings, updateSettings } = require("../services/settingsService");
const { queueManager, QUEUE_NAMES } = require("../services/automation");
const router = express.Router();
// Get metrics settings
router.get("/", authenticateToken, requireManageSettings, async (_req, res) => {
try {
const settings = await getSettings();
// Generate anonymous ID if it doesn't exist
if (!settings.metrics_anonymous_id) {
const anonymousId = uuidv4();
await updateSettings(settings.id, {
metrics_anonymous_id: anonymousId,
});
settings.metrics_anonymous_id = anonymousId;
}
res.json({
metrics_enabled: settings.metrics_enabled ?? true,
metrics_anonymous_id: settings.metrics_anonymous_id,
metrics_last_sent: settings.metrics_last_sent,
});
} catch (error) {
console.error("Metrics settings fetch error:", error);
res.status(500).json({ error: "Failed to fetch metrics settings" });
}
});
// Update metrics settings
router.put(
"/",
authenticateToken,
requireManageSettings,
[
body("metrics_enabled")
.isBoolean()
.withMessage("Metrics enabled must be a boolean"),
],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const { metrics_enabled } = req.body;
const settings = await getSettings();
await updateSettings(settings.id, {
metrics_enabled,
});
console.log(
`Metrics ${metrics_enabled ? "enabled" : "disabled"} by user`,
);
res.json({
message: "Metrics settings updated successfully",
metrics_enabled,
});
} catch (error) {
console.error("Metrics settings update error:", error);
res.status(500).json({ error: "Failed to update metrics settings" });
}
},
);
// Regenerate anonymous ID
router.post(
"/regenerate-id",
authenticateToken,
requireManageSettings,
async (_req, res) => {
try {
const settings = await getSettings();
const newAnonymousId = uuidv4();
await updateSettings(settings.id, {
metrics_anonymous_id: newAnonymousId,
});
console.log("Anonymous ID regenerated");
res.json({
message: "Anonymous ID regenerated successfully",
metrics_anonymous_id: newAnonymousId,
});
} catch (error) {
console.error("Anonymous ID regeneration error:", error);
res.status(500).json({ error: "Failed to regenerate anonymous ID" });
}
},
);
// Manually send metrics now
router.post(
"/send-now",
authenticateToken,
requireManageSettings,
async (_req, res) => {
try {
const settings = await getSettings();
if (!settings.metrics_enabled) {
return res.status(400).json({
error: "Metrics are disabled. Please enable metrics first.",
});
}
// Trigger metrics directly (no queue delay for manual trigger)
const metricsReporting =
queueManager.automations[QUEUE_NAMES.METRICS_REPORTING];
const result = await metricsReporting.process(
{ name: "manual-send" },
false,
);
if (result.success) {
console.log("✅ Manual metrics sent successfully");
res.json({
message: "Metrics sent successfully",
data: result,
});
} else {
console.error("❌ Failed to send metrics:", result);
res.status(500).json({
error: "Failed to send metrics",
details: result.reason || result.error,
});
}
} catch (error) {
console.error("Send metrics error:", error);
res.status(500).json({
error: "Failed to send metrics",
details: error.message,
});
}
},
);
module.exports = router;


@@ -101,74 +101,108 @@ router.get("/", async (req, res) => {
prisma.packages.count({ where }),
]);
// Get additional stats for each package
// OPTIMIZATION: Batch query all stats instead of N individual queries
const packageIds = packages.map((pkg) => pkg.id);
// Get all counts and host data in 3 batch queries instead of N*3 queries
const [allUpdatesCounts, allSecurityCounts, allPackageHostsData] =
await Promise.all([
// Batch count all packages that need updates
prisma.host_packages.groupBy({
by: ["package_id"],
where: {
package_id: { in: packageIds },
needs_update: true,
...(host ? { host_id: host } : {}),
},
_count: { id: true },
}),
// Batch count all packages with security updates
prisma.host_packages.groupBy({
by: ["package_id"],
where: {
package_id: { in: packageIds },
needs_update: true,
is_security_update: true,
...(host ? { host_id: host } : {}),
},
_count: { id: true },
}),
// Batch fetch all host data for packages
prisma.host_packages.findMany({
where: {
package_id: { in: packageIds },
...(host ? { host_id: host } : { needs_update: true }),
},
select: {
package_id: true,
hosts: {
select: {
id: true,
friendly_name: true,
hostname: true,
os_type: true,
needs_reboot: true,
},
},
current_version: true,
available_version: true,
needs_update: true,
is_security_update: true,
},
take: 100, // Increased from package-based limit
}),
]);
// Create lookup maps for O(1) access
const updatesCountMap = new Map(
allUpdatesCounts.map((item) => [item.package_id, item._count.id]),
);
const securityCountMap = new Map(
allSecurityCounts.map((item) => [item.package_id, item._count.id]),
);
const packageHostsMap = new Map();
// Group host data by package_id
for (const hp of allPackageHostsData) {
if (!packageHostsMap.has(hp.package_id)) {
packageHostsMap.set(hp.package_id, []);
}
const hosts = packageHostsMap.get(hp.package_id);
hosts.push({
hostId: hp.hosts.id,
friendlyName: hp.hosts.friendly_name,
osType: hp.hosts.os_type,
currentVersion: hp.current_version,
availableVersion: hp.available_version,
needsUpdate: hp.needs_update,
isSecurityUpdate: hp.is_security_update,
});
// Limit to 10 hosts per package
if (hosts.length > 10) {
packageHostsMap.set(hp.package_id, hosts.slice(0, 10));
}
}
// Map packages with stats from lookup maps (no more DB queries!)
const packagesWithStats = packages.map((pkg) => {
const updatesCount = updatesCountMap.get(pkg.id) || 0;
const securityCount = securityCountMap.get(pkg.id) || 0;
const packageHosts = packageHostsMap.get(pkg.id) || [];
return {
...pkg,
packageHostsCount: pkg._count.host_packages,
packageHosts,
stats: {
totalInstalls: pkg._count.host_packages,
updatesNeeded: updatesCount,
securityUpdates: securityCount,
},
};
});
res.json({
packages: packagesWithStats,
@@ -203,6 +237,7 @@ router.get("/:packageId", async (req, res) => {
os_type: true,
os_version: true,
last_update: true,
needs_reboot: true,
},
},
},
@@ -332,6 +367,7 @@ router.get("/:packageId/hosts", async (req, res) => {
os_type: true,
os_version: true,
last_update: true,
needs_reboot: true,
},
},
},
@@ -353,6 +389,7 @@ router.get("/:packageId/hosts", async (req, res) => {
needsUpdate: hp.needs_update,
isSecurityUpdate: hp.is_security_update,
lastChecked: hp.last_checked,
needsReboot: hp.hosts.needs_reboot,
}));
res.json({

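The refactor above replaces N×3 per-package count queries with three batched queries whose results feed Map lookups. The lookup pattern, distilled into a self-contained sketch (sample rows shaped like Prisma `groupBy` output, not the real query results):

```javascript
// Sketch: turn groupBy-style rows into a Map for O(1) per-package lookups.
function buildCountMap(groupedRows) {
  return new Map(groupedRows.map((row) => [row.package_id, row._count.id]));
}

// Per-package stats come from the maps, with 0 as the default when a
// package had no matching rows (and therefore no groupBy entry).
function statsFor(pkgId, updatesCountMap, securityCountMap) {
  return {
    updatesNeeded: updatesCountMap.get(pkgId) || 0,
    securityUpdates: securityCountMap.get(pkgId) || 0,
  };
}
```

The key design point is that packages absent from a map are legitimate (no rows matched), which is why the `|| 0` default is needed rather than an error.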

@@ -119,6 +119,7 @@ router.get(
os_version: true,
status: true,
last_update: true,
needs_reboot: true,
},
},
},


@@ -158,6 +158,7 @@ router.put(
logoDark,
logoLight,
favicon,
colorTheme,
} = req.body;
// Get current settings to check for update interval changes
@@ -189,6 +190,7 @@ router.put(
if (logoDark !== undefined) updateData.logo_dark = logoDark;
if (logoLight !== undefined) updateData.logo_light = logoLight;
if (favicon !== undefined) updateData.favicon = favicon;
if (colorTheme !== undefined) updateData.color_theme = colorTheme;
const updatedSettings = await updateSettings(
currentSettings.id,


@@ -60,9 +60,14 @@ router.post(
authenticateToken,
[
body("token")
.notEmpty()
.withMessage("Token is required")
.isString()
.withMessage("Token must be a string")
.isLength({ min: 6, max: 6 })
.withMessage("Token must be exactly 6 digits")
.matches(/^\d{6}$/)
.withMessage("Token must contain only numbers"),
],
async (req, res) => {
try {
@@ -71,7 +76,11 @@ router.post(
return res.status(400).json({ errors: errors.array() });
}
// Ensure token is a string (convert if needed)
let { token } = req.body;
if (typeof token !== "string") {
token = String(token);
}
const userId = req.user.id;
// Get user's TFA secret
@@ -261,8 +270,10 @@ router.post(
body("username").notEmpty().withMessage("Username is required"),
body("token")
.isLength({ min: 6, max: 6 })
.withMessage("Token must be 6 characters"),
body("token")
.matches(/^[A-Z0-9]{6}$/)
.withMessage("Token must be 6 alphanumeric characters"),
],
async (req, res) => {
try {

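The two hunks above tighten token validation with different patterns: the verify endpoint accepts only six digits (TOTP codes), while the login endpoint accepts six uppercase alphanumerics (so backup codes also pass). The same checks as standalone functions (a sketch of the regexes the validator chains enforce, not code from the routes):

```javascript
// Patterns matching the express-validator chains above.
const TOTP_TOKEN_RE = /^\d{6}$/; // six digits
const BACKUP_CODE_RE = /^[A-Z0-9]{6}$/; // six uppercase alphanumerics

// The explicit string check mirrors the String(token) coercion concern in
// the handler: a numeric JSON body value should not silently pass.
function isValidTotpToken(token) {
  return typeof token === "string" && TOTP_TOKEN_RE.test(token);
}

function isValidBackupCode(code) {
  return typeof code === "string" && BACKUP_CODE_RE.test(code);
}
```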

@@ -0,0 +1,105 @@
const express = require("express");
const { getPrismaClient } = require("../config/prisma");
const { authenticateToken } = require("../middleware/auth");
const router = express.Router();
const prisma = getPrismaClient();
/**
* GET /api/v1/user/preferences
* Get current user's preferences (theme and color theme)
*/
router.get("/", authenticateToken, async (req, res) => {
try {
const userId = req.user.id;
const user = await prisma.users.findUnique({
where: { id: userId },
select: {
theme_preference: true,
color_theme: true,
},
});
if (!user) {
return res.status(404).json({ error: "User not found" });
}
res.json({
theme_preference: user.theme_preference || "dark",
color_theme: user.color_theme || "cyber_blue",
});
} catch (error) {
console.error("Error fetching user preferences:", error);
res.status(500).json({ error: "Failed to fetch user preferences" });
}
});
/**
* PATCH /api/v1/user/preferences
* Update current user's preferences
*/
router.patch("/", authenticateToken, async (req, res) => {
try {
const userId = req.user.id;
const { theme_preference, color_theme } = req.body;
// Validate inputs
const updateData = {};
if (theme_preference !== undefined) {
if (!["light", "dark"].includes(theme_preference)) {
return res.status(400).json({
error: "Invalid theme preference. Must be 'light' or 'dark'",
});
}
updateData.theme_preference = theme_preference;
}
if (color_theme !== undefined) {
const validColorThemes = [
"default",
"cyber_blue",
"neon_purple",
"matrix_green",
"ocean_blue",
"sunset_gradient",
];
if (!validColorThemes.includes(color_theme)) {
return res.status(400).json({
error: `Invalid color theme. Must be one of: ${validColorThemes.join(", ")}`,
});
}
updateData.color_theme = color_theme;
}
if (Object.keys(updateData).length === 0) {
return res
.status(400)
.json({ error: "No preferences provided to update" });
}
updateData.updated_at = new Date();
const updatedUser = await prisma.users.update({
where: { id: userId },
data: updateData,
select: {
theme_preference: true,
color_theme: true,
},
});
res.json({
message: "Preferences updated successfully",
preferences: {
theme_preference: updatedUser.theme_preference,
color_theme: updatedUser.color_theme,
},
});
} catch (error) {
console.error("Error updating user preferences:", error);
res.status(500).json({ error: "Failed to update user preferences" });
}
});
module.exports = router;
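The PATCH handler's whitelist validation can be distilled into a pure function, which makes the accept/reject behavior easy to see at a glance (a sketch mirroring the route above, not an export of the codebase):

```javascript
// Valid values, matching the route's whitelists.
const VALID_THEMES = ["light", "dark"];
const VALID_COLOR_THEMES = [
  "default", "cyber_blue", "neon_purple",
  "matrix_green", "ocean_blue", "sunset_gradient",
];

// Returns { updateData } on success or { error } on any invalid/empty input.
function buildPreferenceUpdate({ theme_preference, color_theme } = {}) {
  const updateData = {};
  if (theme_preference !== undefined) {
    if (!VALID_THEMES.includes(theme_preference)) {
      return { error: "Invalid theme preference. Must be 'light' or 'dark'" };
    }
    updateData.theme_preference = theme_preference;
  }
  if (color_theme !== undefined) {
    if (!VALID_COLOR_THEMES.includes(color_theme)) {
      return {
        error: `Invalid color theme. Must be one of: ${VALID_COLOR_THEMES.join(", ")}`,
      };
    }
    updateData.color_theme = color_theme;
  }
  if (Object.keys(updateData).length === 0) {
    return { error: "No preferences provided to update" };
  }
  return { updateData };
}
```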


@@ -14,13 +14,16 @@ const router = express.Router();
function getCurrentVersion() {
try {
const packageJson = require("../../package.json");
if (!packageJson?.version) {
throw new Error("Version not found in package.json");
}
return packageJson.version;
} catch (packageError) {
console.error(
"Could not read version from package.json:",
packageError.message,
);
return "unknown";
}
}


@@ -11,7 +11,31 @@ const {
const router = express.Router();
// Get WebSocket connection status for multiple hosts at once (bulk endpoint)
// No database access - pure memory lookup
router.get("/status", authenticateToken, async (req, res) => {
try {
const { apiIds } = req.query; // Comma-separated list of api_ids
const idArray = apiIds ? apiIds.split(",").filter((id) => id.trim()) : [];
const statusMap = {};
idArray.forEach((apiId) => {
statusMap[apiId] = getConnectionInfo(apiId);
});
res.json({
success: true,
data: statusMap,
});
} catch (error) {
console.error("Error fetching bulk WebSocket status:", error);
res.status(500).json({
success: false,
error: "Failed to fetch WebSocket status",
});
}
});
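The bulk handler above parses a comma-separated `apiIds` query parameter and resolves each id against an in-memory connection registry. A minimal sketch of that lookup, with a plain `Map` standing in for the real registry and a simplified status shape in place of `getConnectionInfo` (assumed, not the actual helper):

```javascript
// Sketch: resolve a comma-separated id list against an in-memory registry.
// An empty or missing parameter yields an empty status map.
function bulkStatus(apiIdsParam, registry) {
  const idArray = apiIdsParam
    ? apiIdsParam.split(",").filter((id) => id.trim())
    : [];
  const statusMap = {};
  for (const apiId of idArray) {
    statusMap[apiId] = { connected: registry.has(apiId) };
  }
  return statusMap;
}
```

Because everything is a memory lookup, the endpoint stays cheap even when the UI polls it for a whole host list at once.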
// Get WebSocket connection status by api_id (single endpoint)
router.get("/status/:apiId", authenticateToken, async (req, res) => {
try {
const { apiId } = req.params;


@@ -66,8 +66,12 @@ const autoEnrollmentRoutes = require("./routes/autoEnrollmentRoutes");
const gethomepageRoutes = require("./routes/gethomepageRoutes");
const automationRoutes = require("./routes/automationRoutes");
const dockerRoutes = require("./routes/dockerRoutes");
const integrationRoutes = require("./routes/integrationRoutes");
const wsRoutes = require("./routes/wsRoutes");
const agentVersionRoutes = require("./routes/agentVersionRoutes");
const metricsRoutes = require("./routes/metricsRoutes");
const userPreferencesRoutes = require("./routes/userPreferencesRoutes");
const apiHostsRoutes = require("./routes/apiHostsRoutes");
const { initSettings } = require("./services/settingsService");
const { queueManager } = require("./services/automation");
const { authenticateToken, requireAdmin } = require("./middleware/auth");
@@ -295,7 +299,7 @@ app.disable("x-powered-by");
// Rate limiting with monitoring
const limiter = rateLimit({
windowMs: parseInt(process.env.RATE_LIMIT_WINDOW_MS, 10) || 15 * 60 * 1000,
max: parseInt(process.env.RATE_LIMIT_MAX, 10) || 100,
max: parseInt(process.env.RATE_LIMIT_MAX, 10) || 5000,
message: {
error: "Too many requests from this IP, please try again later.",
retryAfter: Math.ceil(
@@ -384,6 +388,7 @@ app.use(
"Authorization",
"Cookie",
"X-Requested-With",
"X-Device-ID", // Allow device ID header for TFA remember-me functionality
],
}),
);
@@ -424,7 +429,7 @@ const apiVersion = process.env.API_VERSION || "v1";
const authLimiter = rateLimit({
windowMs:
parseInt(process.env.AUTH_RATE_LIMIT_WINDOW_MS, 10) || 10 * 60 * 1000,
max: parseInt(process.env.AUTH_RATE_LIMIT_MAX, 10) || 20,
max: parseInt(process.env.AUTH_RATE_LIMIT_MAX, 10) || 500,
message: {
error: "Too many authentication requests, please try again later.",
retryAfter: Math.ceil(
@@ -438,7 +443,7 @@ const authLimiter = rateLimit({
});
const agentLimiter = rateLimit({
windowMs: parseInt(process.env.AGENT_RATE_LIMIT_WINDOW_MS, 10) || 60 * 1000,
max: parseInt(process.env.AGENT_RATE_LIMIT_MAX, 10) || 120,
max: parseInt(process.env.AGENT_RATE_LIMIT_MAX, 10) || 1000,
message: {
error: "Too many agent requests, please try again later.",
retryAfter: Math.ceil(
@@ -471,8 +476,12 @@ app.use(
app.use(`/api/${apiVersion}/gethomepage`, gethomepageRoutes);
app.use(`/api/${apiVersion}/automation`, automationRoutes);
app.use(`/api/${apiVersion}/docker`, dockerRoutes);
app.use(`/api/${apiVersion}/integrations`, integrationRoutes);
app.use(`/api/${apiVersion}/ws`, wsRoutes);
app.use(`/api/${apiVersion}/agent`, agentVersionRoutes);
app.use(`/api/${apiVersion}/metrics`, metricsRoutes);
app.use(`/api/${apiVersion}/user/preferences`, userPreferencesRoutes);
app.use(`/api/${apiVersion}/api`, authLimiter, apiHostsRoutes);
// Bull Board - will be populated after queue manager initializes
let bullBoardRouter = null;
@@ -552,299 +561,6 @@ app.use(`/bullboard`, (req, res, next) => {
return res.status(503).json({ error: "Bull Board not initialized yet" });
});
/*
// OLD MIDDLEWARE - REMOVED FOR SIMPLIFICATION - DO NOT USE
if (false) {
const sessionId = req.cookies["bull-board-session"];
console.log("Bull Board API call - Session ID:", sessionId ? "present" : "missing");
console.log("Bull Board API call - Cookies:", req.cookies);
console.log("Bull Board API call - Bull Board token cookie:", req.cookies["bull-board-token"] ? "present" : "missing");
console.log("Bull Board API call - Query token:", req.query.token ? "present" : "missing");
console.log("Bull Board API call - Auth header:", req.headers.authorization ? "present" : "missing");
console.log("Bull Board API call - Origin:", req.headers.origin || "missing");
console.log("Bull Board API call - Referer:", req.headers.referer || "missing");
// Check if we have any authentication method available
const hasSession = !!sessionId;
const hasTokenCookie = !!req.cookies["bull-board-token"];
const hasQueryToken = !!req.query.token;
const hasAuthHeader = !!req.headers.authorization;
const hasReferer = !!req.headers.referer;
console.log("Bull Board API call - Auth methods available:", {
session: hasSession,
tokenCookie: hasTokenCookie,
queryToken: hasQueryToken,
authHeader: hasAuthHeader,
referer: hasReferer
});
// Check for valid session first
if (sessionId) {
const session = bullBoardSessions.get(sessionId);
console.log("Bull Board API call - Session found:", !!session);
if (session && Date.now() - session.timestamp < 3600000) {
// Valid session, extend it
session.timestamp = Date.now();
console.log("Bull Board API call - Using existing session, proceeding");
return next();
} else if (session) {
// Expired session, remove it
console.log("Bull Board API call - Session expired, removing");
bullBoardSessions.delete(sessionId);
}
}
// No valid session, check for token as fallback
let token = req.query.token;
if (!token && req.headers.authorization) {
token = req.headers.authorization.replace("Bearer ", "");
}
if (!token && req.cookies["bull-board-token"]) {
token = req.cookies["bull-board-token"];
}
// For API calls, also check if the token is in the referer URL
// This handles cases where the main page hasn't set the cookie yet
if (!token && req.headers.referer) {
try {
const refererUrl = new URL(req.headers.referer);
const refererToken = refererUrl.searchParams.get('token');
if (refererToken) {
token = refererToken;
console.log("Bull Board API call - Token found in referer URL:", refererToken.substring(0, 20) + "...");
} else {
console.log("Bull Board API call - No token found in referer URL");
// If no token in referer and no session, return 401 with redirect info
if (!sessionId) {
console.log("Bull Board API call - No authentication available, returning 401");
return res.status(401).json({
error: "Authentication required",
message: "Please refresh the page to re-authenticate"
});
}
}
} catch (error) {
console.log("Bull Board API call - Error parsing referer URL:", error.message);
}
}
if (token) {
console.log("Bull Board API call - Token found, authenticating");
// Add token to headers for authentication
req.headers.authorization = `Bearer ${token}`;
// Authenticate the user
return authenticateToken(req, res, (err) => {
if (err) {
console.log("Bull Board API call - Token authentication failed");
return res.status(401).json({ error: "Authentication failed" });
}
return requireAdmin(req, res, (adminErr) => {
if (adminErr) {
console.log("Bull Board API call - Admin access required");
return res.status(403).json({ error: "Admin access required" });
}
console.log("Bull Board API call - Token authentication successful");
return next();
});
});
}
// No valid session or token for API calls, deny access
console.log("Bull Board API call - No valid session or token, denying access");
return res.status(401).json({ error: "Valid Bull Board session or token required" });
}
// Check for bull-board-session cookie first
const sessionId = req.cookies["bull-board-session"];
if (sessionId) {
const session = bullBoardSessions.get(sessionId);
if (session && Date.now() - session.timestamp < 3600000) {
// 1 hour
// Valid session, extend it
session.timestamp = Date.now();
return next();
} else if (session) {
// Expired session, remove it
bullBoardSessions.delete(sessionId);
}
}
// No valid session, check for token
let token = req.query.token;
if (!token && req.headers.authorization) {
token = req.headers.authorization.replace("Bearer ", "");
}
if (!token && req.cookies["bull-board-token"]) {
token = req.cookies["bull-board-token"];
}
// If no token, deny access
if (!token) {
return res.status(401).json({ error: "Access token required" });
}
// Add token to headers for authentication
req.headers.authorization = `Bearer ${token}`;
// Authenticate the user
return authenticateToken(req, res, (err) => {
if (err) {
return res.status(401).json({ error: "Authentication failed" });
}
return requireAdmin(req, res, (adminErr) => {
if (adminErr) {
return res.status(403).json({ error: "Admin access required" });
}
// Authentication successful - create a session
const newSessionId = require("node:crypto")
.randomBytes(32)
.toString("hex");
bullBoardSessions.set(newSessionId, {
timestamp: Date.now(),
userId: req.user.id,
});
// Set session cookie with proper configuration for domain access
const isHttps = process.env.NODE_ENV === "production" || process.env.SERVER_PROTOCOL === "https";
const cookieOptions = {
httpOnly: true,
secure: isHttps,
maxAge: 3600000, // 1 hour
path: "/", // Set path to root so it's available for all Bull Board requests
};
// Configure sameSite based on protocol and environment
if (isHttps) {
cookieOptions.sameSite = "none"; // Required for HTTPS cross-origin
} else {
cookieOptions.sameSite = "lax"; // Better for HTTP same-origin
}
res.cookie("bull-board-session", newSessionId, cookieOptions);
// Clean up old sessions periodically
if (bullBoardSessions.size > 100) {
const now = Date.now();
for (const [sid, session] of bullBoardSessions.entries()) {
if (now - session.timestamp > 3600000) {
bullBoardSessions.delete(sid);
}
}
}
return next();
});
});
});
*/
// Second middleware block - COMMENTED OUT - using simplified version above instead
/*
app.use(`/bullboard`, (req, res, next) => {
if (bullBoardRouter) {
// If this is the main Bull Board page (not an API call), inject the token and create session
if (!req.path.includes("/api/") && !req.path.includes("/static/") && req.path === "/bullboard") {
const token = req.query.token;
console.log("Bull Board main page - Token:", token ? "present" : "missing");
console.log("Bull Board main page - Query params:", req.query);
console.log("Bull Board main page - Origin:", req.headers.origin || "missing");
console.log("Bull Board main page - Referer:", req.headers.referer || "missing");
console.log("Bull Board main page - Cookies:", req.cookies);
if (token) {
// Authenticate the user and create a session immediately on page load
req.headers.authorization = `Bearer ${token}`;
return authenticateToken(req, res, (err) => {
if (err) {
console.log("Bull Board main page - Token authentication failed");
return res.status(401).json({ error: "Authentication failed" });
}
return requireAdmin(req, res, (adminErr) => {
if (adminErr) {
console.log("Bull Board main page - Admin access required");
return res.status(403).json({ error: "Admin access required" });
}
console.log("Bull Board main page - Token authentication successful, creating session");
// Create a Bull Board session immediately
const newSessionId = require("node:crypto")
.randomBytes(32)
.toString("hex");
bullBoardSessions.set(newSessionId, {
timestamp: Date.now(),
userId: req.user.id,
});
// Set session cookie with proper configuration for domain access
const sessionCookieOptions = {
httpOnly: true,
secure: false, // Always false for HTTP
maxAge: 3600000, // 1 hour
path: "/", // Set path to root so it's available for all Bull Board requests
sameSite: "lax", // Always lax for HTTP
};
res.cookie("bull-board-session", newSessionId, sessionCookieOptions);
console.log("Bull Board main page - Session created:", newSessionId);
console.log("Bull Board main page - Cookie options:", sessionCookieOptions);
// Also set a token cookie for API calls as a fallback
const tokenCookieOptions = {
httpOnly: false, // Allow JavaScript to access it
secure: false, // Always false for HTTP
maxAge: 3600000, // 1 hour
path: "/", // Set path to root for broader compatibility
sameSite: "lax", // Always lax for HTTP
};
res.cookie("bull-board-token", token, tokenCookieOptions);
console.log("Bull Board main page - Token cookie also set for API fallback");
// Clean up old sessions periodically
if (bullBoardSessions.size > 100) {
const now = Date.now();
for (const [sid, session] of bullBoardSessions.entries()) {
if (now - session.timestamp > 3600000) {
bullBoardSessions.delete(sid);
}
}
}
// Now proceed to serve the Bull Board page
return bullBoardRouter(req, res, next);
});
});
} else {
console.log("Bull Board main page - No token provided, checking for existing session");
// Check if we have an existing session
const sessionId = req.cookies["bull-board-session"];
if (sessionId) {
const session = bullBoardSessions.get(sessionId);
if (session && Date.now() - session.timestamp < 3600000) {
console.log("Bull Board main page - Using existing session");
// Extend session
session.timestamp = Date.now();
return bullBoardRouter(req, res, next);
} else if (session) {
console.log("Bull Board main page - Session expired, removing");
bullBoardSessions.delete(sessionId);
}
}
console.log("Bull Board main page - No valid session, denying access");
return res.status(401).json({ error: "Access token required" });
}
}
return bullBoardRouter(req, res, next);
}
return res.status(503).json({ error: "Bull Board not initialized yet" });
});
*/
// Error handler specifically for Bull Board routes
app.use("/bullboard", (err, req, res, _next) => {
console.error("Bull Board error on", req.method, req.url);
@@ -1198,6 +914,15 @@ async function startServer() {
initAgentWs(server, prisma);
await agentVersionService.initialize();
// Send metrics on startup (silent - no console output)
try {
const metricsReporting =
queueManager.automations[QUEUE_NAMES.METRICS_REPORTING];
await metricsReporting.sendSilent();
} catch (_error) {
// Silent failure - don't block server startup if metrics fail
}
server.listen(PORT, () => {
if (process.env.ENABLE_LOGGING === "true") {
logger.info(`Server running on port ${PORT}`);


@@ -1,6 +1,7 @@
const axios = require("axios");
const fs = require("node:fs").promises;
const path = require("node:path");
const os = require("node:os");
const { exec, spawn } = require("node:child_process");
const { promisify } = require("node:util");
const _execAsync = promisify(exec);
@@ -106,10 +107,26 @@ class AgentVersionService {
try {
console.log("🔍 Getting current agent version...");
// Try to find the agent binary in agents/ folder only (what gets distributed)
// Detect server architecture and map to Go architecture names
const serverArch = os.arch();
// Map Node.js architecture to Go architecture names
const archMap = {
x64: "amd64",
ia32: "386",
arm64: "arm64",
arm: "arm",
};
const serverGoArch = archMap[serverArch] || serverArch;
console.log(
`🔍 Detected server architecture: ${serverArch} -> ${serverGoArch}`,
);
// Try to find the agent binary in agents/ folder based on server architecture
const possiblePaths = [
path.join(this.agentsDir, `patchmon-agent-linux-${serverGoArch}`),
path.join(this.agentsDir, "patchmon-agent-linux-amd64"), // Fallback
path.join(this.agentsDir, "patchmon-agent"), // Legacy fallback
];
let agentPath = null;
@@ -126,7 +143,7 @@ class AgentVersionService {
if (!agentPath) {
console.log(
`⚠️ No agent binary found in agents/ folder for architecture ${serverGoArch}, current version will be unknown`,
);
console.log("💡 Use the Download Updates button to get agent binaries");
this.currentVersion = null;
@@ -428,26 +445,29 @@ class AgentVersionService {
async getVersionInfo() {
let hasUpdate = false;
let updateStatus = "unknown";
// Latest version should ALWAYS come from GitHub, not from local binaries
// currentVersion = what's installed locally
// latestVersion = what's available on GitHub
if (this.latestVersion) {
console.log(`📦 Latest version from GitHub: ${this.latestVersion}`);
} else {
console.log(
`⚠️ No GitHub release version available (API may be unavailable)`,
);
}
if (this.currentVersion) {
console.log(`💾 Current local agent version: ${this.currentVersion}`);
} else {
console.log(`⚠️ No local agent binary found`);
}
// Determine update status by comparing current vs latest (from GitHub)
if (this.currentVersion && this.latestVersion) {
const comparison = compareVersions(
this.currentVersion,
this.latestVersion,
);
if (comparison < 0) {
hasUpdate = true;
@@ -459,25 +479,25 @@ class AgentVersionService {
hasUpdate = false;
updateStatus = "up-to-date";
}
} else if (this.latestVersion && !this.currentVersion) {
hasUpdate = true;
updateStatus = "no-agent";
} else if (this.currentVersion && !this.latestVersion) {
// We have a current version but no latest version (GitHub API unavailable)
hasUpdate = false;
updateStatus = "github-unavailable";
} else if (!this.currentVersion && !this.latestVersion) {
updateStatus = "no-data";
}
return {
currentVersion: this.currentVersion,
latestVersion: this.latestVersion, // Always return GitHub version, not local
hasUpdate: hasUpdate,
updateStatus: updateStatus,
lastChecked: this.lastChecked,
supportedArchitectures: this.supportedArchitectures,
status: this.latestVersion ? "ready" : "no-releases",
};
}
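`compareVersions` is called above but its body is outside this diff; a minimal sketch consistent with that call site (negative when the first argument is older, zero when equal, positive when newer), assuming plain dotted numeric versions with no pre-release suffixes:

```javascript
// Hypothetical sketch of the compareVersions(a, b) helper referenced above.
// Compares dotted numeric versions component by component; an optional
// leading "v" is tolerated, missing components are treated as 0.
function compareVersions(a, b) {
  const pa = a.replace(/^v/, "").split(".").map(Number);
  const pb = b.replace(/^v/, "").split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] || 0) - (pb[i] || 0);
    if (diff !== 0) return diff;
  }
  return 0;
}
```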


@@ -3,6 +3,7 @@
const WebSocket = require("ws");
const url = require("node:url");
const { get_current_time } = require("../utils/timezone");
// Connection registry by api_id
const apiIdToSocket = new Map();
@@ -49,7 +50,29 @@ function init(server, prismaClient) {
wss.handleUpgrade(request, socket, head, (ws) => {
ws.on("message", (message) => {
// Echo back for Bull Board WebSocket
try {
ws.send(message);
} catch (_err) {
// Ignore send errors (connection may be closed)
}
});
ws.on("error", (err) => {
// Handle WebSocket errors gracefully for Bull Board
if (
err.code === "WS_ERR_INVALID_CLOSE_CODE" ||
err.code === "ECONNRESET" ||
err.code === "EPIPE"
) {
// These are expected errors, just log quietly
console.log("[bullboard-ws] connection error:", err.code);
} else {
console.error("[bullboard-ws] error:", err.message || err);
}
});
ws.on("close", () => {
// Connection closed, no action needed
});
});
return;
@@ -99,11 +122,76 @@ function init(server, prismaClient) {
// Notify subscribers of connection
notifyConnectionChange(apiId, true);
ws.on("message", async (data) => {
// Handle incoming messages from agent (e.g., Docker status updates)
try {
const message = JSON.parse(data.toString());
if (message.type === "docker_status") {
// Handle Docker container status events
await handleDockerStatusEvent(apiId, message);
}
// Add more message types here as needed
} catch (err) {
console.error(
`[agent-ws] error parsing message from ${apiId}:`,
err,
);
}
});
ws.on("error", (err) => {
// Handle WebSocket errors gracefully without crashing
// Common errors: invalid close codes (1006), connection resets, etc.
if (
err.code === "WS_ERR_INVALID_CLOSE_CODE" ||
err.message?.includes("invalid status code 1006") ||
err.message?.includes("Invalid WebSocket frame")
) {
// 1006 is a special close code indicating abnormal closure
// It cannot be sent in a close frame, but can occur when connection is lost
console.log(
`[agent-ws] connection error for ${apiId} (abnormal closure):`,
err.message || err.code,
);
} else if (
err.code === "ECONNRESET" ||
err.code === "EPIPE" ||
err.message?.includes("read ECONNRESET")
) {
// Connection reset errors are common and expected
console.log(`[agent-ws] connection reset for ${apiId}`);
} else {
// Log other errors for debugging
console.error(
`[agent-ws] error for ${apiId}:`,
err.message || err.code || err,
);
}
// Clean up connection on error
const existing = apiIdToSocket.get(apiId);
if (existing === ws) {
apiIdToSocket.delete(apiId);
connectionMetadata.delete(apiId);
// Notify subscribers of disconnection
notifyConnectionChange(apiId, false);
}
// Try to close the connection gracefully if still open
if (
ws.readyState === WebSocket.OPEN ||
ws.readyState === WebSocket.CONNECTING
) {
try {
ws.close(1000); // Normal closure
} catch {
// Ignore errors when closing
}
}
});
ws.on("close", (code, reason) => {
const existing = apiIdToSocket.get(apiId);
if (existing === ws) {
apiIdToSocket.delete(apiId);
@@ -112,7 +200,7 @@ function init(server, prismaClient) {
notifyConnectionChange(apiId, false);
}
console.log(
`[agent-ws] disconnected api_id=${apiId} code=${code} reason=${reason || "none"} total=${apiIdToSocket.size}`,
);
});
@@ -162,6 +250,38 @@ function pushSettingsUpdate(apiId, newInterval) {
);
}
function pushUpdateAgent(apiId) {
const ws = apiIdToSocket.get(apiId);
safeSend(ws, JSON.stringify({ type: "update_agent" }));
}
function pushIntegrationToggle(apiId, integrationName, enabled) {
const ws = apiIdToSocket.get(apiId);
if (ws && ws.readyState === WebSocket.OPEN) {
safeSend(
ws,
JSON.stringify({
type: "integration_toggle",
integration: integrationName,
enabled: enabled,
}),
);
console.log(
`📤 Pushed integration toggle to agent ${apiId}: ${integrationName} = ${enabled}`,
);
return true;
} else {
console.log(
`⚠️ Agent ${apiId} not connected, cannot push integration toggle, please edit config.yml manually`,
);
return false;
}
}
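`safeSend` is referenced by several push helpers above but its body is outside this diff; a minimal sketch of what such a guard might look like (the `WS_OPEN` constant is assumed to match `WebSocket.OPEN`):

```javascript
const WS_OPEN = 1; // WebSocket.OPEN

// Hypothetical sketch of the safeSend helper used above: only send on an
// open socket, and never let a failed send crash the server.
function safeSend(ws, payload) {
  if (!ws || ws.readyState !== WS_OPEN) return false;
  try {
    ws.send(payload);
    return true;
  } catch {
    // The socket may have closed between the readyState check and the send.
    return false;
  }
}
```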
function getConnectionByApiId(apiId) {
return apiIdToSocket.get(apiId);
}
function pushUpdateNotification(apiId, updateInfo) {
const ws = apiIdToSocket.get(apiId);
if (ws && ws.readyState === WebSocket.OPEN) {
@@ -255,15 +375,74 @@ function subscribeToConnectionChanges(apiId, callback) {
};
}
// Handle Docker container status events from agent
async function handleDockerStatusEvent(apiId, message) {
try {
const { event: _event, container_id, name, status, timestamp } = message;
console.log(
`[Docker Event] ${apiId}: Container ${name} (${container_id}) - ${status}`,
);
// Find the host
const host = await prisma.hosts.findUnique({
where: { api_id: apiId },
});
if (!host) {
console.error(`[Docker Event] Host not found for api_id: ${apiId}`);
return;
}
// Update container status in database
const container = await prisma.docker_containers.findUnique({
where: {
host_id_container_id: {
host_id: host.id,
container_id: container_id,
},
},
});
if (container) {
await prisma.docker_containers.update({
where: { id: container.id },
data: {
status: status,
state: status,
updated_at: new Date(timestamp || Date.now()),
last_checked: get_current_time(),
},
});
console.log(
`[Docker Event] Updated container ${name} status to ${status}`,
);
} else {
console.log(
`[Docker Event] Container ${name} not found in database (may be new)`,
);
}
// TODO: Broadcast to connected dashboard clients via SSE or WebSocket
// This would notify the frontend UI in real-time
} catch (error) {
console.error(`[Docker Event] Error handling Docker status event:`, error);
}
}
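A hedged example of the `docker_status` payload this handler expects; the field names come from the destructuring in `handleDockerStatusEvent` above, but the concrete values are invented:

```javascript
// Hypothetical agent message, as it would arrive over the WebSocket.
const raw = JSON.stringify({
  type: "docker_status",
  event: "container_update",
  container_id: "abc123def456",
  name: "web-frontend",
  status: "running",
  timestamp: "2025-11-16T22:50:41.000Z",
});

// The message handler parses the frame and routes on message.type before
// touching any other field.
const message = JSON.parse(raw);
const isDockerStatus = message.type === "docker_status";
```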
module.exports = {
init,
broadcastSettingsUpdate,
pushReportNow,
pushSettingsUpdate,
pushUpdateAgent,
pushIntegrationToggle,
pushUpdateNotification,
pushUpdateNotificationToAll,
// Expose read-only view of connected agents
getConnectedApiIds: () => Array.from(apiIdToSocket.keys()),
getConnectionByApiId,
isConnected: (apiId) => {
const ws = apiIdToSocket.get(apiId);
return !!ws && ws.readyState === WebSocket.OPEN;
},
};


@@ -0,0 +1,341 @@
const { prisma } = require("./shared/prisma");
const https = require("node:https");
const http = require("node:http");
const { v4: uuidv4 } = require("uuid");
/**
* Docker Image Update Check Automation
* Checks for Docker image updates by comparing local digests with remote registry digests
*/
class DockerImageUpdateCheck {
constructor(queueManager) {
this.queueManager = queueManager;
this.queueName = "docker-image-update-check";
}
/**
* Get remote digest from Docker registry using HEAD request
* Supports Docker Hub, GHCR, and other OCI-compliant registries
*/
async getRemoteDigest(imageName, tag = "latest") {
return new Promise((resolve, reject) => {
// Parse image name to determine registry
const registryInfo = this.parseImageName(imageName);
// Construct manifest URL
const manifestPath = `/v2/${registryInfo.repository}/manifests/${tag}`;
const options = {
hostname: registryInfo.registry,
path: manifestPath,
method: "HEAD",
headers: {
Accept:
"application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json",
"User-Agent": "PatchMon/1.0",
},
};
// Add authentication token for Docker Hub if needed
if (
registryInfo.registry === "registry-1.docker.io" &&
registryInfo.isPublic
) {
// For anonymous public images, we may need to get an auth token first
// For now, try without auth (works for public images)
}
// Choose HTTP or HTTPS
const client = registryInfo.isSecure ? https : http;
const req = client.request(options, (res) => {
if (res.statusCode === 401 || res.statusCode === 403) {
// Authentication required - skip for now (would need to implement auth)
return reject(
new Error(`Authentication required for ${imageName}:${tag}`),
);
}
if (res.statusCode !== 200) {
return reject(
new Error(
`Registry returned status ${res.statusCode} for ${imageName}:${tag}`,
),
);
}
// Get digest from Docker-Content-Digest header
const digest = res.headers["docker-content-digest"];
if (!digest) {
return reject(
new Error(
`No Docker-Content-Digest header for ${imageName}:${tag}`,
),
);
}
// Clean up digest (remove sha256: prefix if present)
const cleanDigest = digest.startsWith("sha256:")
? digest.substring(7)
: digest;
resolve(cleanDigest);
});
req.on("error", (error) => {
reject(error);
});
req.setTimeout(10000, () => {
req.destroy();
reject(new Error(`Timeout getting digest for ${imageName}:${tag}`));
});
req.end();
});
}
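The digest cleanup step above can be isolated into a small helper; a sketch:

```javascript
// Registries return digests as "sha256:<hex>"; comparisons in this class
// work on the bare hex part, so the prefix is stripped when present.
function stripSha256Prefix(digest) {
  return digest.startsWith("sha256:") ? digest.substring(7) : digest;
}
```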
/**
* Parse image name to extract registry, repository, and determine if secure
*/
parseImageName(imageName) {
let registry = "registry-1.docker.io";
let repository = imageName;
const isSecure = true;
let isPublic = true;
// Handle explicit registries (ghcr.io, quay.io, etc.)
if (imageName.includes("/")) {
const parts = imageName.split("/");
const firstPart = parts[0];
// Check for known registries
if (firstPart.includes(".") || firstPart === "localhost") {
registry = firstPart;
repository = parts.slice(1).join("/");
isPublic = false; // Assume private registries need auth for now
} else {
// Docker Hub - registry-1.docker.io
repository = imageName;
}
}
// Docker Hub official images (no namespace)
if (!repository.includes("/")) {
repository = `library/${repository}`;
}
return {
registry,
repository,
isSecure,
isPublic,
};
}
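The parsing rules above, restated as a standalone sketch (the `isSecure`/`isPublic` flags are omitted): a leading component containing a dot, or `localhost`, is treated as a registry host; anything else maps to Docker Hub, and bare names fall into the `library/` namespace.

```javascript
// Standalone sketch of the image-name parsing implemented above.
function parseImageName(imageName) {
  let registry = "registry-1.docker.io";
  let repository = imageName;
  if (imageName.includes("/")) {
    const [first, ...rest] = imageName.split("/");
    // A dot (or "localhost") in the first component marks an explicit registry.
    if (first.includes(".") || first === "localhost") {
      registry = first;
      repository = rest.join("/");
    }
  }
  // Docker Hub official images live under the implicit library/ namespace.
  if (!repository.includes("/")) repository = `library/${repository}`;
  return { registry, repository };
}
```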
/**
* Process Docker image update check job
*/
async process(_job) {
const startTime = Date.now();
console.log("🐳 Starting Docker image update check...");
try {
// Get all Docker images that have a digest
// Note: repository is required (non-nullable) in schema, so we don't need to check it
const images = await prisma.docker_images.findMany({
where: {
digest: {
not: null,
},
},
include: {
docker_image_updates: true,
},
});
console.log(`📦 Found ${images.length} images to check for updates`);
let checkedCount = 0;
let updateCount = 0;
let errorCount = 0;
const errors = [];
// Process images in batches to avoid overwhelming the API
const batchSize = 10;
for (let i = 0; i < images.length; i += batchSize) {
const batch = images.slice(i, i + batchSize);
// Process batch concurrently with Promise.allSettled for error tolerance
const _results = await Promise.allSettled(
batch.map(async (image) => {
try {
checkedCount++;
// Skip local images (no digest means they're local)
if (!image.digest || image.digest.trim() === "") {
return { image, skipped: true, reason: "No digest" };
}
// Get clean digest (remove sha256: prefix if present)
const localDigest = image.digest.startsWith("sha256:")
? image.digest.substring(7)
: image.digest;
// Get remote digest from registry
const remoteDigest = await this.getRemoteDigest(
image.repository,
image.tag || "latest",
);
// Compare digests
if (localDigest !== remoteDigest) {
console.log(
`🔄 Update found: ${image.repository}:${image.tag} (local: ${localDigest.substring(0, 12)}..., remote: ${remoteDigest.substring(0, 12)}...)`,
);
// Store digest info in changelog_url field as JSON
const digestInfo = JSON.stringify({
method: "digest_comparison",
current_digest: localDigest,
available_digest: remoteDigest,
checked_at: new Date().toISOString(),
});
// Upsert the update record
await prisma.docker_image_updates.upsert({
where: {
image_id_available_tag: {
image_id: image.id,
available_tag: image.tag || "latest",
},
},
update: {
updated_at: new Date(),
changelog_url: digestInfo,
severity: "digest_changed",
},
create: {
id: uuidv4(),
image_id: image.id,
current_tag: image.tag || "latest",
available_tag: image.tag || "latest",
severity: "digest_changed",
changelog_url: digestInfo,
updated_at: new Date(),
},
});
// Update last_checked timestamp on image
await prisma.docker_images.update({
where: { id: image.id },
data: { last_checked: new Date() },
});
updateCount++;
return { image, updated: true };
} else {
// No update - still update last_checked
await prisma.docker_images.update({
where: { id: image.id },
data: { last_checked: new Date() },
});
// Remove existing update record if digest matches now
const existingUpdate = image.docker_image_updates?.find(
(u) => u.available_tag === (image.tag || "latest"),
);
if (existingUpdate) {
await prisma.docker_image_updates.delete({
where: { id: existingUpdate.id },
});
}
return { image, updated: false };
}
} catch (error) {
errorCount++;
const errorMsg = `Error checking ${image.repository}:${image.tag}: ${error.message}`;
errors.push(errorMsg);
console.error(`${errorMsg}`);
// Still update last_checked even on error
try {
await prisma.docker_images.update({
where: { id: image.id },
data: { last_checked: new Date() },
});
} catch (_updateError) {
// Ignore update errors
}
return { image, error: error.message };
}
}),
);
// Log batch progress
if (i + batchSize < images.length) {
console.log(
`⏳ Processed ${Math.min(i + batchSize, images.length)}/${images.length} images...`,
);
}
// Small delay between batches to be respectful to registries
if (i + batchSize < images.length) {
await new Promise((resolve) => setTimeout(resolve, 500));
}
}
const executionTime = Date.now() - startTime;
console.log(
`✅ Docker image update check completed in ${executionTime}ms - Checked: ${checkedCount}, Updates: ${updateCount}, Errors: ${errorCount}`,
);
return {
success: true,
checked: checkedCount,
updates: updateCount,
errors: errorCount,
executionTime,
errorDetails: errors,
};
} catch (error) {
const executionTime = Date.now() - startTime;
console.error(
`❌ Docker image update check failed after ${executionTime}ms:`,
error.message,
);
throw error;
}
}
/**
* Schedule recurring Docker image update check (daily at 2 AM)
*/
async schedule() {
const job = await this.queueManager.queues[this.queueName].add(
"docker-image-update-check",
{},
{
repeat: { cron: "0 2 * * *" }, // Daily at 2 AM
jobId: "docker-image-update-check-recurring",
},
);
console.log("✅ Docker image update check scheduled");
return job;
}
/**
* Trigger manual Docker image update check
*/
async triggerManual() {
const job = await this.queueManager.queues[this.queueName].add(
"docker-image-update-check-manual",
{},
{ priority: 1 },
);
console.log("✅ Manual Docker image update check triggered");
return job;
}
}
module.exports = DockerImageUpdateCheck;
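The batching loop in `process()` boils down to fixed-size chunking, so that at most `batchSize` registry requests are in flight at once; the slicing itself can be sketched in isolation:

```javascript
// Slice a list into fixed-size batches, mirroring the
// `images.slice(i, i + batchSize)` stride used in process() above.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```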


@@ -0,0 +1,164 @@
const { prisma } = require("./shared/prisma");
/**
* Docker Inventory Cleanup Automation
* Removes Docker containers and images for hosts that no longer exist
*/
class DockerInventoryCleanup {
constructor(queueManager) {
this.queueManager = queueManager;
this.queueName = "docker-inventory-cleanup";
}
/**
* Process Docker inventory cleanup job
*/
async process(_job) {
const startTime = Date.now();
console.log("🧹 Starting Docker inventory cleanup...");
try {
// Step 1: Find and delete orphaned containers (containers for non-existent hosts)
const orphanedContainers = await prisma.docker_containers.findMany({
where: {
host_id: {
// Find containers where the host doesn't exist
notIn: await prisma.hosts
.findMany({ select: { id: true } })
.then((hosts) => hosts.map((h) => h.id)),
},
},
});
let deletedContainersCount = 0;
const deletedContainers = [];
for (const container of orphanedContainers) {
try {
await prisma.docker_containers.delete({
where: { id: container.id },
});
deletedContainersCount++;
deletedContainers.push({
id: container.id,
container_id: container.container_id,
name: container.name,
image_name: container.image_name,
host_id: container.host_id,
});
console.log(
`🗑️ Deleted orphaned container: ${container.name} (host_id: ${container.host_id})`,
);
} catch (deleteError) {
console.error(
`❌ Failed to delete container ${container.id}:`,
deleteError.message,
);
}
}
// Step 2: Find and delete orphaned images (images with no containers using them)
const orphanedImages = await prisma.docker_images.findMany({
where: {
docker_containers: {
none: {},
},
},
include: {
_count: {
select: {
docker_containers: true,
docker_image_updates: true,
},
},
},
});
let deletedImagesCount = 0;
const deletedImages = [];
for (const image of orphanedImages) {
try {
// First delete any image updates associated with this image
if (image._count.docker_image_updates > 0) {
await prisma.docker_image_updates.deleteMany({
where: { image_id: image.id },
});
}
// Then delete the image itself
await prisma.docker_images.delete({
where: { id: image.id },
});
deletedImagesCount++;
deletedImages.push({
id: image.id,
repository: image.repository,
tag: image.tag,
image_id: image.image_id,
});
console.log(
`🗑️ Deleted orphaned image: ${image.repository}:${image.tag}`,
);
} catch (deleteError) {
console.error(
`❌ Failed to delete image ${image.id}:`,
deleteError.message,
);
}
}
const executionTime = Date.now() - startTime;
console.log(
`✅ Docker inventory cleanup completed in ${executionTime}ms - Deleted ${deletedContainersCount} containers and ${deletedImagesCount} images`,
);
return {
success: true,
deletedContainersCount,
deletedImagesCount,
deletedContainers,
deletedImages,
executionTime,
};
} catch (error) {
const executionTime = Date.now() - startTime;
console.error(
`❌ Docker inventory cleanup failed after ${executionTime}ms:`,
error.message,
);
throw error;
}
}
/**
* Schedule recurring Docker inventory cleanup (daily at 4 AM)
*/
async schedule() {
const job = await this.queueManager.queues[this.queueName].add(
"docker-inventory-cleanup",
{},
{
repeat: { cron: "0 4 * * *" }, // Daily at 4 AM
jobId: "docker-inventory-cleanup-recurring",
},
);
console.log("✅ Docker inventory cleanup scheduled");
return job;
}
/**
* Trigger manual Docker inventory cleanup
*/
async triggerManual() {
const job = await this.queueManager.queues[this.queueName].add(
"docker-inventory-cleanup-manual",
{},
{ priority: 1 },
);
console.log("✅ Manual Docker inventory cleanup triggered");
return job;
}
}
module.exports = DockerInventoryCleanup;


@@ -52,17 +52,24 @@ class GitHubUpdateCheck {
}
// Read version from package.json
let currentVersion = null;
try {
const packageJson = require("../../../package.json");
if (packageJson?.version) {
currentVersion = packageJson.version;
}
} catch (packageError) {
console.error(
"Could not read version from package.json:",
packageError.message,
);
throw new Error(
"Could not determine current version from package.json",
);
}
if (!currentVersion) {
throw new Error("Version not found in package.json");
}
const isUpdateAvailable =


@@ -2,12 +2,18 @@ const { Queue, Worker } = require("bullmq");
const { redis, redisConnection } = require("./shared/redis");
const { prisma } = require("./shared/prisma");
const agentWs = require("../agentWs");
const { v4: uuidv4 } = require("uuid");
const { get_current_time } = require("../../utils/timezone");
// Import automation classes
const GitHubUpdateCheck = require("./githubUpdateCheck");
const SessionCleanup = require("./sessionCleanup");
const OrphanedRepoCleanup = require("./orphanedRepoCleanup");
const OrphanedPackageCleanup = require("./orphanedPackageCleanup");
const DockerInventoryCleanup = require("./dockerInventoryCleanup");
const DockerImageUpdateCheck = require("./dockerImageUpdateCheck");
const MetricsReporting = require("./metricsReporting");
const SystemStatistics = require("./systemStatistics");
// Queue names
const QUEUE_NAMES = {
@@ -15,6 +21,10 @@ const QUEUE_NAMES = {
SESSION_CLEANUP: "session-cleanup",
ORPHANED_REPO_CLEANUP: "orphaned-repo-cleanup",
ORPHANED_PACKAGE_CLEANUP: "orphaned-package-cleanup",
DOCKER_INVENTORY_CLEANUP: "docker-inventory-cleanup",
DOCKER_IMAGE_UPDATE_CHECK: "docker-image-update-check",
METRICS_REPORTING: "metrics-reporting",
SYSTEM_STATISTICS: "system-statistics",
AGENT_COMMANDS: "agent-commands",
};
@@ -91,6 +101,16 @@ class QueueManager {
new OrphanedRepoCleanup(this);
this.automations[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP] =
new OrphanedPackageCleanup(this);
this.automations[QUEUE_NAMES.DOCKER_INVENTORY_CLEANUP] =
new DockerInventoryCleanup(this);
this.automations[QUEUE_NAMES.DOCKER_IMAGE_UPDATE_CHECK] =
new DockerImageUpdateCheck(this);
this.automations[QUEUE_NAMES.METRICS_REPORTING] = new MetricsReporting(
this,
);
this.automations[QUEUE_NAMES.SYSTEM_STATISTICS] = new SystemStatistics(
this,
);
console.log("✅ All automation classes initialized");
}
@@ -149,6 +169,42 @@ class QueueManager {
workerOptions,
);
// Docker Inventory Cleanup Worker
this.workers[QUEUE_NAMES.DOCKER_INVENTORY_CLEANUP] = new Worker(
QUEUE_NAMES.DOCKER_INVENTORY_CLEANUP,
this.automations[QUEUE_NAMES.DOCKER_INVENTORY_CLEANUP].process.bind(
this.automations[QUEUE_NAMES.DOCKER_INVENTORY_CLEANUP],
),
workerOptions,
);
// Docker Image Update Check Worker
this.workers[QUEUE_NAMES.DOCKER_IMAGE_UPDATE_CHECK] = new Worker(
QUEUE_NAMES.DOCKER_IMAGE_UPDATE_CHECK,
this.automations[QUEUE_NAMES.DOCKER_IMAGE_UPDATE_CHECK].process.bind(
this.automations[QUEUE_NAMES.DOCKER_IMAGE_UPDATE_CHECK],
),
workerOptions,
);
// Metrics Reporting Worker
this.workers[QUEUE_NAMES.METRICS_REPORTING] = new Worker(
QUEUE_NAMES.METRICS_REPORTING,
this.automations[QUEUE_NAMES.METRICS_REPORTING].process.bind(
this.automations[QUEUE_NAMES.METRICS_REPORTING],
),
workerOptions,
);
// System Statistics Worker
this.workers[QUEUE_NAMES.SYSTEM_STATISTICS] = new Worker(
QUEUE_NAMES.SYSTEM_STATISTICS,
this.automations[QUEUE_NAMES.SYSTEM_STATISTICS].process.bind(
this.automations[QUEUE_NAMES.SYSTEM_STATISTICS],
),
workerOptions,
);
// Agent Commands Worker
this.workers[QUEUE_NAMES.AGENT_COMMANDS] = new Worker(
QUEUE_NAMES.AGENT_COMMANDS,
@@ -156,15 +212,87 @@ class QueueManager {
const { api_id, type } = job.data;
console.log(`Processing agent command: ${type} for ${api_id}`);
// Log job to job_history
let historyRecord = null;
try {
const host = await prisma.hosts.findUnique({
where: { api_id },
select: { id: true },
});
if (host) {
historyRecord = await prisma.job_history.create({
data: {
id: uuidv4(),
job_id: job.id,
queue_name: QUEUE_NAMES.AGENT_COMMANDS,
job_name: type,
host_id: host.id,
api_id: api_id,
status: "active",
attempt_number: job.attemptsMade + 1,
created_at: get_current_time(),
updated_at: get_current_time(),
},
});
console.log(`📝 Logged job to job_history: ${job.id} (${type})`);
}
} catch (error) {
console.error("Failed to log job to job_history:", error);
}
try {
// Send command via WebSocket based on type
if (type === "report_now") {
agentWs.pushReportNow(api_id);
} else if (type === "settings_update") {
// For settings update, we need additional data
const { update_interval } = job.data;
agentWs.pushSettingsUpdate(api_id, update_interval);
} else if (type === "update_agent") {
// Force agent to update by sending WebSocket command
const ws = agentWs.getConnectionByApiId(api_id);
if (ws && ws.readyState === 1) {
// WebSocket.OPEN
agentWs.pushUpdateAgent(api_id);
console.log(`✅ Update command sent to agent ${api_id}`);
} else {
console.error(`❌ Agent ${api_id} is not connected`);
throw new Error(
`Agent ${api_id} is not connected. Cannot send update command.`,
);
}
} else {
console.error(`Unknown agent command type: ${type}`);
}
// Update job history to completed
if (historyRecord) {
await prisma.job_history.updateMany({
where: { job_id: job.id },
data: {
status: "completed",
completed_at: get_current_time(),
updated_at: get_current_time(),
},
});
console.log(`✅ Marked job as completed in job_history: ${job.id}`);
}
} catch (error) {
// Update job history to failed
if (historyRecord) {
await prisma.job_history.updateMany({
where: { job_id: job.id },
data: {
status: "failed",
error_message: error.message,
completed_at: get_current_time(),
updated_at: get_current_time(),
},
});
console.log(`❌ Marked job as failed in job_history: ${job.id}`);
}
throw error;
}
},
workerOptions,
@@ -194,6 +322,7 @@ class QueueManager {
console.log(`✅ Job '${job.id}' in queue '${queueName}' completed.`);
});
}
console.log("✅ Queue events initialized");
}
@@ -205,6 +334,10 @@ class QueueManager {
await this.automations[QUEUE_NAMES.SESSION_CLEANUP].schedule();
await this.automations[QUEUE_NAMES.ORPHANED_REPO_CLEANUP].schedule();
await this.automations[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP].schedule();
await this.automations[QUEUE_NAMES.DOCKER_INVENTORY_CLEANUP].schedule();
await this.automations[QUEUE_NAMES.DOCKER_IMAGE_UPDATE_CHECK].schedule();
await this.automations[QUEUE_NAMES.METRICS_REPORTING].schedule();
await this.automations[QUEUE_NAMES.SYSTEM_STATISTICS].schedule();
}
/**
@@ -228,6 +361,26 @@ class QueueManager {
].triggerManual();
}
async triggerDockerInventoryCleanup() {
return this.automations[
QUEUE_NAMES.DOCKER_INVENTORY_CLEANUP
].triggerManual();
}
async triggerDockerImageUpdateCheck() {
return this.automations[
QUEUE_NAMES.DOCKER_IMAGE_UPDATE_CHECK
].triggerManual();
}
async triggerSystemStatistics() {
return this.automations[QUEUE_NAMES.SYSTEM_STATISTICS].triggerManual();
}
async triggerMetricsReporting() {
return this.automations[QUEUE_NAMES.METRICS_REPORTING].triggerManual();
}
/**
* Get queue statistics
*/


@@ -0,0 +1,172 @@
const axios = require("axios");
const { prisma } = require("./shared/prisma");
const { updateSettings } = require("../../services/settingsService");
const METRICS_API_URL =
process.env.METRICS_API_URL || "https://metrics.patchmon.cloud";
/**
* Metrics Reporting Automation
* Sends anonymous usage metrics every 24 hours
*/
class MetricsReporting {
constructor(queueManager) {
this.queueManager = queueManager;
this.queueName = "metrics-reporting";
}
/**
* Process metrics reporting job
*/
async process(_job, silent = false) {
const startTime = Date.now();
if (!silent) console.log("📊 Starting metrics reporting...");
try {
// Fetch fresh settings directly from database (bypass cache)
const settings = await prisma.settings.findFirst({
orderBy: { updated_at: "desc" },
});
// Check if metrics are enabled (the settings row may not exist yet)
if (!settings || settings.metrics_enabled !== true) {
if (!silent) console.log("📊 Metrics reporting is disabled");
return { success: false, reason: "disabled" };
}
// Check if we have an anonymous ID
if (!settings.metrics_anonymous_id) {
if (!silent) console.log("📊 No anonymous ID found, skipping metrics");
return { success: false, reason: "no_id" };
}
// Get host count
const hostCount = await prisma.hosts.count();
// Get version
const packageJson = require("../../../package.json");
const version = packageJson.version;
// Prepare metrics data
const metricsData = {
anonymous_id: settings.metrics_anonymous_id,
host_count: hostCount,
version,
};
if (!silent)
console.log(
`📊 Sending metrics: ${hostCount} hosts, version ${version}`,
);
// Send to metrics API
try {
const response = await axios.post(
`${METRICS_API_URL}/metrics/submit`,
metricsData,
{
timeout: 10000,
headers: {
"Content-Type": "application/json",
},
},
);
// Update last sent timestamp
await updateSettings(settings.id, {
metrics_last_sent: new Date(),
});
const executionTime = Date.now() - startTime;
if (!silent)
console.log(
`✅ Metrics sent successfully in ${executionTime}ms:`,
response.data,
);
return {
success: true,
data: response.data,
hostCount,
version,
executionTime,
};
} catch (apiError) {
const executionTime = Date.now() - startTime;
if (!silent)
console.error(
`❌ Failed to send metrics to API after ${executionTime}ms:`,
apiError.message,
);
return {
success: false,
reason: "api_error",
error: apiError.message,
executionTime,
};
}
} catch (error) {
const executionTime = Date.now() - startTime;
if (!silent)
console.error(
`❌ Error in metrics reporting after ${executionTime}ms:`,
error.message,
);
// Don't throw on silent mode, just return failure
if (silent) {
return {
success: false,
reason: "error",
error: error.message,
executionTime,
};
}
throw error;
}
}
/**
* Schedule recurring metrics reporting (daily at 2 AM)
*/
async schedule() {
const job = await this.queueManager.queues[this.queueName].add(
"metrics-reporting",
{},
{
repeat: { cron: "0 2 * * *" }, // Daily at 2 AM
jobId: "metrics-reporting-recurring",
},
);
console.log("✅ Metrics reporting scheduled (daily at 2 AM)");
return job;
}
/**
* Trigger manual metrics reporting
*/
async triggerManual() {
const job = await this.queueManager.queues[this.queueName].add(
"metrics-reporting-manual",
{},
{ priority: 1 },
);
console.log("✅ Manual metrics reporting triggered");
return job;
}
/**
* Send metrics immediately (silent mode)
* Used for automatic sending on server startup
*/
async sendSilent() {
try {
const result = await this.process({ name: "startup-silent" }, true);
return result;
} catch (error) {
// Silent failure on startup
return { success: false, reason: "error", error: error.message };
}
}
}
module.exports = MetricsReporting;
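The metrics payload assembled in `process()` can be sketched as a pure helper; the field names come from the code above, and deliberately nothing beyond an anonymous id, a host count, and the server version is sent:

```javascript
// Build the anonymous metrics payload submitted to the metrics API.
function buildMetricsPayload(settings, hostCount, version) {
  return {
    anonymous_id: settings.metrics_anonymous_id,
    host_count: hostCount,
    version,
  };
}
```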


@@ -33,7 +33,8 @@ async function checkPublicRepo(owner, repo) {
try {
const httpsRepoUrl = `https://api.github.com/repos/${owner}/${repo}/releases/latest`;
// Get current version for User-Agent (or use generic if unavailable)
let currentVersion = "unknown";
try {
const packageJson = require("../../../package.json");
if (packageJson?.version) {
@@ -41,7 +42,7 @@ async function checkPublicRepo(owner, repo) {
}
} catch (packageError) {
console.warn(
"Could not read version from package.json for User-Agent:",
packageError.message,
);
}


@@ -0,0 +1,140 @@
const { prisma } = require("./shared/prisma");
const { v4: uuidv4 } = require("uuid");
/**
* System Statistics Collection Automation
* Collects aggregated system-wide statistics every 30 minutes
* for use in package trends charts
*/
class SystemStatistics {
constructor(queueManager) {
this.queueManager = queueManager;
this.queueName = "system-statistics";
}
/**
* Process system statistics collection job
*/
async process(_job) {
const startTime = Date.now();
console.log("📊 Starting system statistics collection...");
try {
// Calculate unique package counts across all hosts
const uniquePackagesCount = await prisma.packages.count({
where: {
host_packages: {
some: {
needs_update: true,
},
},
},
});
const uniqueSecurityCount = await prisma.packages.count({
where: {
host_packages: {
some: {
needs_update: true,
is_security_update: true,
},
},
},
});
// Calculate total unique packages installed on at least one host
const totalPackages = await prisma.packages.count({
where: {
host_packages: {
some: {}, // At least one host has this package
},
},
});
// Calculate total hosts
const totalHosts = await prisma.hosts.count({
where: {
status: "active",
},
});
// Calculate hosts needing updates (distinct hosts with packages needing updates)
const hostsNeedingUpdates = await prisma.hosts.count({
where: {
status: "active",
host_packages: {
some: {
needs_update: true,
},
},
},
});
// Store statistics in database
await prisma.system_statistics.create({
data: {
id: uuidv4(),
unique_packages_count: uniquePackagesCount,
unique_security_count: uniqueSecurityCount,
total_packages: totalPackages,
total_hosts: totalHosts,
hosts_needing_updates: hostsNeedingUpdates,
timestamp: new Date(),
},
});
const executionTime = Date.now() - startTime;
console.log(
`✅ System statistics collection completed in ${executionTime}ms - Unique packages: ${uniquePackagesCount}, Security: ${uniqueSecurityCount}, Total hosts: ${totalHosts}`,
);
return {
success: true,
uniquePackagesCount,
uniqueSecurityCount,
totalPackages,
totalHosts,
hostsNeedingUpdates,
executionTime,
};
} catch (error) {
const executionTime = Date.now() - startTime;
console.error(
`❌ System statistics collection failed after ${executionTime}ms:`,
error.message,
);
throw error;
}
}
/**
* Schedule recurring system statistics collection (every 30 minutes)
*/
async schedule() {
const job = await this.queueManager.queues[this.queueName].add(
"system-statistics",
{},
{
repeat: { pattern: "*/30 * * * *" }, // Every 30 minutes
jobId: "system-statistics-recurring",
},
);
console.log("✅ System statistics collection scheduled (every 30 minutes)");
return job;
}
/**
* Trigger manual system statistics collection
*/
async triggerManual() {
const job = await this.queueManager.queues[this.queueName].add(
"system-statistics-manual",
{},
{ priority: 1 },
);
console.log("✅ Manual system statistics collection triggered");
return job;
}
}
module.exports = SystemStatistics;
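The Prisma relation filters above (`some: { needs_update: true }`, `some: {}`) count distinct packages and hosts across the join table. A minimal in-memory sketch of the same semantics, over an illustrative array of `host_packages` rows (the row shape here is hypothetical, not the actual schema):

```javascript
// Sketch: what the Prisma `some` relation filters count, expressed over
// a plain array of host_packages rows. Sets deduplicate, mirroring how
// the relation filter counts each package/host at most once.
function computeStatistics(hostPackages) {
	const needingUpdate = new Set();
	const needingSecurityUpdate = new Set();
	const installed = new Set();
	const hostsNeedingUpdates = new Set();
	for (const hp of hostPackages) {
		installed.add(hp.package_id); // `some: {}` — installed on at least one host
		if (hp.needs_update) {
			needingUpdate.add(hp.package_id);
			hostsNeedingUpdates.add(hp.host_id);
			if (hp.is_security_update) {
				needingSecurityUpdate.add(hp.package_id);
			}
		}
	}
	return {
		uniquePackagesCount: needingUpdate.size,
		uniqueSecurityCount: needingSecurityUpdate.size,
		totalPackages: installed.size,
		hostsNeedingUpdates: hostsNeedingUpdates.size,
	};
}

// Example: two hosts sharing one outdated package.
const stats = computeStatistics([
	{ host_id: "h1", package_id: "openssl", needs_update: true, is_security_update: true },
	{ host_id: "h2", package_id: "openssl", needs_update: true, is_security_update: true },
	{ host_id: "h2", package_id: "curl", needs_update: false, is_security_update: false },
]);
console.log(stats); // openssl counted once; curl appears only in totalPackages
```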

179
backend/src/utils/docker.js Normal file
View File

@@ -0,0 +1,179 @@
/**
* Docker-related utility functions
*/
/**
* Generate a registry link for a Docker image based on its repository and source
* Inspired by diun's registry link generation
* @param {string} repository - The full repository name (e.g., "ghcr.io/owner/repo")
* @param {string} source - The detected source (github, gitlab, docker-hub, etc.)
* @returns {string|null} - The URL to the registry page, or null if unknown
*/
function generateRegistryLink(repository, source) {
if (!repository) {
return null;
}
// Parse the domain and path from the repository
const parts = repository.split("/");
let domain = "";
let path = "";
// Check if repository has a domain (contains a dot)
if (parts[0].includes(".") || parts[0].includes(":")) {
domain = parts[0];
path = parts.slice(1).join("/");
} else {
// No domain means Docker Hub
domain = "docker.io";
path = repository;
}
switch (source) {
case "docker-hub":
case "docker.io": {
// Docker Hub: https://hub.docker.com/r/{path} or https://hub.docker.com/_/{path} for official images
// Official images are those without a namespace (e.g., "postgres" not "user/postgres")
// or explicitly prefixed with "library/"
if (path.startsWith("library/")) {
const cleanPath = path.replace("library/", "");
return `https://hub.docker.com/_/${cleanPath}`;
}
// Check if it's an official image (single part, no slash after removing library/)
if (!path.includes("/")) {
return `https://hub.docker.com/_/${path}`;
}
// Regular user/org image
return `https://hub.docker.com/r/${path}`;
}
case "github":
case "ghcr.io": {
// GitHub Container Registry
// Format: ghcr.io/{owner}/{package} or ghcr.io/{owner}/{repo}/{package}
// URL format: https://github.com/{owner}/{repo}/pkgs/container/{package}
if (domain === "ghcr.io" && path) {
const pathParts = path.split("/");
if (pathParts.length === 2) {
// Two-part path: ghcr.io/{owner}/{package}.
// GitHub's package URL requires a repo segment; when only the owner
// is known, assume the repo name matches the owner.
const owner = pathParts[0];
const packageName = pathParts[1];
return `https://github.com/${owner}/${owner}/pkgs/container/${packageName}`;
} else if (pathParts.length >= 3) {
// Extended case: ghcr.io/owner/repo/package -> github.com/owner/repo/pkgs/container/package
const owner = pathParts[0];
const repo = pathParts[1];
const packageName = pathParts.slice(2).join("/");
return `https://github.com/${owner}/${repo}/pkgs/container/${packageName}`;
}
}
// Legacy GitHub Packages
if (domain === "docker.pkg.github.com" && path) {
const pathParts = path.split("/");
if (pathParts.length >= 1) {
return `https://github.com/${pathParts[0]}/packages`;
}
}
return null;
}
case "gitlab":
case "registry.gitlab.com": {
// GitLab Container Registry: https://gitlab.com/{path}/container_registry
if (path) {
return `https://gitlab.com/${path}/container_registry`;
}
return null;
}
case "google":
case "gcr.io": {
// Google Container Registry / Artifact Registry: link to the Cloud Console image view
if (domain.includes("gcr.io") || domain.includes("pkg.dev")) {
return `https://console.cloud.google.com/gcr/images/${path}`;
}
return null;
}
case "quay":
case "quay.io": {
// Quay.io: https://quay.io/repository/{path}
if (path) {
return `https://quay.io/repository/${path}`;
}
return null;
}
case "redhat":
case "registry.access.redhat.com": {
// Red Hat: https://access.redhat.com/containers/#/registry.access.redhat.com/{path}
if (path) {
return `https://access.redhat.com/containers/#/registry.access.redhat.com/${path}`;
}
return null;
}
case "azure":
case "azurecr.io": {
// Azure Container Registry - link to portal
// Format: {registry}.azurecr.io/{repository}
if (domain.includes("azurecr.io")) {
const registryName = domain.split(".")[0];
return `https://portal.azure.com/#view/Microsoft_Azure_ContainerRegistries/RepositoryBlade/registryName/${registryName}/repositoryName/${path}`;
}
return null;
}
case "aws":
case "amazonaws.com": {
// AWS ECR - link to console
// Format: {account}.dkr.ecr.{region}.amazonaws.com/{repository}
if (domain.includes("amazonaws.com")) {
const domainParts = domain.split(".");
const region = domainParts[3]; // Extract region
return `https://${region}.console.aws.amazon.com/ecr/repositories/private/${path}`;
}
return null;
}
case "private":
// For private registries, try to construct a basic URL
if (domain) {
return `https://${domain}`;
}
return null;
default:
return null;
}
}
/**
* Get a user-friendly display name for a registry source
* @param {string} source - The source identifier
* @returns {string} - Human-readable source name
*/
function getSourceDisplayName(source) {
const sourceNames = {
"docker-hub": "Docker Hub",
github: "GitHub",
gitlab: "GitLab",
google: "Google",
quay: "Quay.io",
redhat: "Red Hat",
azure: "Azure",
aws: "AWS ECR",
private: "Private Registry",
local: "Local",
unknown: "Unknown",
};
return sourceNames[source] || source;
}
module.exports = {
generateRegistryLink,
getSourceDisplayName,
};
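The Docker Hub branch of `generateRegistryLink` can be exercised in isolation. A condensed restatement of just that branch (simplified for illustration — the repository names below are placeholders, and the full function also handles `:` ports in the domain segment):

```javascript
// Condensed restatement of the Docker Hub branch above.
function dockerHubLink(repository) {
	// A dot in the first segment means an explicit registry domain;
	// otherwise the repository is implicitly on Docker Hub.
	const path = repository.split("/")[0].includes(".")
		? repository.split("/").slice(1).join("/")
		: repository;
	if (path.startsWith("library/")) {
		return `https://hub.docker.com/_/${path.replace("library/", "")}`;
	}
	if (!path.includes("/")) {
		return `https://hub.docker.com/_/${path}`; // official image, no namespace
	}
	return `https://hub.docker.com/r/${path}`; // user/org image
}

console.log(dockerHubLink("postgres"));       // official image
console.log(dockerHubLink("library/redis"));  // explicit library/ prefix
console.log(dockerHubLink("someuser/agent")); // user/org image
```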

View File

@@ -84,21 +84,20 @@ function parse_expiration(expiration_string) {
* Generate device fingerprint from request data
*/
function generate_device_fingerprint(req) {
const components = [
req.get("user-agent") || "",
req.get("accept-language") || "",
req.get("accept-encoding") || "",
req.ip || "",
];
// Use the X-Device-ID header from frontend (unique per browser profile/localStorage)
const deviceId = req.get("x-device-id");
// Create a simple hash of device characteristics
const fingerprint = crypto
.createHash("sha256")
.update(components.join("|"))
.digest("hex")
.substring(0, 32); // Use first 32 chars for storage efficiency
if (deviceId) {
// Hash the device ID for consistent storage format
return crypto
.createHash("sha256")
.update(deviceId)
.digest("hex")
.substring(0, 32);
}
return fingerprint;
// No device ID - return null (user needs to provide device ID for remember-me)
return null;
}
/**

View File

@@ -0,0 +1,107 @@
/**
* Timezone utility functions for consistent timestamp handling
*
* This module provides timezone-aware timestamp functions that use
* the TZ environment variable for consistent timezone handling across
* the application. If TZ is not set, defaults to UTC.
*/
/**
* Get the configured timezone from environment variable
* Defaults to UTC if not set
* @returns {string} Timezone string (e.g., 'UTC', 'America/New_York', 'Europe/London')
*/
function get_timezone() {
return process.env.TZ || process.env.TIMEZONE || "UTC";
}
/**
 * Get the current date/time
 * Date objects are timezone-agnostic (internally UTC); the configured
 * timezone is applied only when formatting for display
 * @returns {Date} Current date/time
 */
function get_current_time() {
// For database storage we always use UTC timestamps; the timezone
// returned by get_timezone() is applied at display time only
// (see format_date_for_display), so no conversion is needed here.
return new Date();
}
/**
* Get current timestamp in milliseconds (UTC)
* This is always UTC for database storage consistency
* @returns {number} Current timestamp in milliseconds
*/
function get_current_timestamp() {
return Date.now();
}
/**
* Format a date as an ISO 8601 string (always UTC; use format_date_for_display for timezone-aware output)
* @param {Date} date - Date to format (defaults to now)
* @returns {string} ISO formatted date string
*/
function format_date_iso(date = null) {
const d = date || get_current_time();
return d.toISOString();
}
/**
* Parse a date string and return a Date object
* Handles various date formats and timezone conversions
* @param {string} date_string - Date string to parse
* @param {Date} fallback - Fallback date if parsing fails (defaults to now)
* @returns {Date} Parsed date or fallback
*/
function parse_date(date_string, fallback = null) {
if (!date_string) {
return fallback || get_current_time();
}
try {
const date = new Date(date_string);
if (Number.isNaN(date.getTime())) {
return fallback || get_current_time();
}
return date;
} catch (_error) {
return fallback || get_current_time();
}
}
/**
* Convert a date to the configured timezone for display
* @param {Date} date - Date to convert
* @returns {string} Formatted date string in configured timezone
*/
function format_date_for_display(date) {
const tz = get_timezone();
const formatter = new Intl.DateTimeFormat("en-US", {
timeZone: tz,
year: "numeric",
month: "2-digit",
day: "2-digit",
hour: "2-digit",
minute: "2-digit",
second: "2-digit",
hour12: false,
});
return formatter.format(date);
}
module.exports = {
get_timezone,
get_current_time,
get_current_timestamp,
format_date_iso,
parse_date,
format_date_for_display,
};
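The display path above hinges on `Intl.DateTimeFormat` doing the timezone conversion at format time, while the `Date` value itself stays UTC. A small sketch showing the same instant rendered in two display timezones:

```javascript
// Same UTC instant, two display timezones — mirrors what
// format_date_for_display does with the configured TZ value.
function displayIn(date, timeZone) {
	return new Intl.DateTimeFormat("en-US", {
		timeZone,
		year: "numeric", month: "2-digit", day: "2-digit",
		hour: "2-digit", minute: "2-digit", second: "2-digit",
		hour12: false,
	}).format(date);
}

const instant = new Date("2024-06-01T12:00:00Z");
const utc = displayIn(instant, "UTC");
const london = displayIn(instant, "Europe/London");
console.log(utc);    // 12:00:00 on 06/01/2024
console.log(london); // 13:00:00 — BST is UTC+1 in June
```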

View File

@@ -1,10 +1,13 @@
{
"$schema": "https://biomejs.dev/schemas/2.2.4/schema.json",
"$schema": "https://biomejs.dev/schemas/2.3.4/schema.json",
"vcs": {
"enabled": true,
"clientKind": "git",
"useIgnoreFile": true
},
"files": {
"includes": ["**", "!**/*.css"]
},
"formatter": {
"enabled": true
},

View File

@@ -136,6 +136,24 @@ When you do this, updating to a new version requires manually updating the image
| `PM_DB_CONN_MAX_ATTEMPTS` | Maximum database connection attempts | `30` |
| `PM_DB_CONN_WAIT_INTERVAL` | Wait interval between connection attempts in seconds | `2` |
##### Database Connection Pool Configuration (Prisma)
| Variable | Description | Default |
| --------------------- | ---------------------------------------------------------- | ------- |
| `DB_CONNECTION_LIMIT` | Maximum number of database connections per instance | `30` |
| `DB_POOL_TIMEOUT` | Seconds to wait for an available connection before timeout | `20` |
| `DB_CONNECT_TIMEOUT` | Seconds to wait for initial database connection | `10` |
| `DB_IDLE_TIMEOUT` | Seconds before closing idle connections | `300` |
| `DB_MAX_LIFETIME` | Maximum lifetime of a connection in seconds | `1800` |
> [!TIP]
> The connection pool limit should be adjusted based on your deployment size:
> - **Small deployment (1-10 hosts)**: `DB_CONNECTION_LIMIT=15` is sufficient
> - **Medium deployment (10-50 hosts)**: `DB_CONNECTION_LIMIT=30` (default)
> - **Large deployment (50+ hosts)**: `DB_CONNECTION_LIMIT=50` or higher
>
> Each connection pool serves one backend instance. If you have concurrent operations (multiple users, background jobs, agent checkins), increase the pool size accordingly.
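Prisma reads its pool settings from the connection string, so these variables presumably end up as URL query parameters. An illustrative mapping (`connection_limit`, `pool_timeout`, and `connect_timeout` are real Prisma URL parameters; the host and credentials here are placeholders):

```shell
# Illustrative only: how the pool variables could map onto a Prisma
# PostgreSQL connection string.
DB_CONNECTION_LIMIT=30
DB_POOL_TIMEOUT=20
DB_CONNECT_TIMEOUT=10
DATABASE_URL="postgresql://patchmon:secret@database:5432/patchmon_db?connection_limit=${DB_CONNECTION_LIMIT}&pool_timeout=${DB_POOL_TIMEOUT}&connect_timeout=${DB_CONNECT_TIMEOUT}"
echo "$DATABASE_URL"
```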
##### Redis Configuration
| Variable | Description | Default |

View File

@@ -20,9 +20,7 @@ COPY --chown=node:node agents ./agents_backup
COPY --chown=node:node agents ./agents
COPY --chmod=755 docker/backend.docker-entrypoint.sh ./entrypoint.sh
WORKDIR /app/backend
RUN npm ci --ignore-scripts && npx prisma generate
RUN npm install --workspace=backend --ignore-scripts && cd backend && npx prisma generate
EXPOSE 3001
@@ -35,20 +33,22 @@ ENTRYPOINT ["/sbin/tini", "--"]
CMD ["/app/entrypoint.sh"]
# Builder stage for production
FROM node:lts-alpine AS builder
# Use Debian-based Node for better QEMU ARM64 compatibility
FROM node:lts-slim AS builder
RUN apk add --no-cache openssl
# Install OpenSSL for Prisma
RUN apt-get update -y && apt-get install -y openssl && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --chown=node:node package*.json ./
COPY --chown=node:node backend/ ./backend/
WORKDIR /app/backend
RUN npm ci --ignore-scripts &&\
npx prisma generate &&\
npm prune --omit=dev &&\
RUN npm cache clean --force &&\
rm -rf node_modules ~/.npm /root/.npm &&\
npm install --workspace=backend --ignore-scripts --legacy-peer-deps --no-audit --prefer-online --fetch-retries=3 --fetch-retry-mintimeout=20000 --fetch-retry-maxtimeout=120000 &&\
cd backend && npx prisma generate &&\
cd .. && npm prune --omit=dev --workspace=backend &&\
npm cache clean --force
# Production stage
@@ -70,8 +70,8 @@ USER node
WORKDIR /app
COPY --from=builder /app/backend ./backend
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder --chown=node:node /app/backend ./backend
COPY --from=builder --chown=node:node /app/node_modules ./node_modules
COPY --chown=node:node agents ./agents_backup
COPY --chown=node:node agents ./agents
COPY --chmod=755 docker/backend.docker-entrypoint.sh ./entrypoint.sh

View File

@@ -50,6 +50,19 @@ services:
SERVER_HOST: localhost
SERVER_PORT: 3000
CORS_ORIGIN: http://localhost:3000
# Database Connection Pool Configuration (Prisma)
DB_CONNECTION_LIMIT: 30
DB_POOL_TIMEOUT: 20
DB_CONNECT_TIMEOUT: 10
DB_IDLE_TIMEOUT: 300
DB_MAX_LIFETIME: 1800
# Rate Limiting (times in milliseconds)
RATE_LIMIT_WINDOW_MS: 900000
RATE_LIMIT_MAX: 5000
AUTH_RATE_LIMIT_WINDOW_MS: 600000
AUTH_RATE_LIMIT_MAX: 500
AGENT_RATE_LIMIT_WINDOW_MS: 60000
AGENT_RATE_LIMIT_MAX: 1000
# Redis Configuration
REDIS_HOST: redis
REDIS_PORT: 6379

View File

@@ -56,6 +56,19 @@ services:
SERVER_HOST: localhost
SERVER_PORT: 3000
CORS_ORIGIN: http://localhost:3000
# Database Connection Pool Configuration (Prisma)
DB_CONNECTION_LIMIT: 30
DB_POOL_TIMEOUT: 20
DB_CONNECT_TIMEOUT: 10
DB_IDLE_TIMEOUT: 300
DB_MAX_LIFETIME: 1800
# Rate Limiting (times in milliseconds)
RATE_LIMIT_WINDOW_MS: 900000
RATE_LIMIT_MAX: 5000
AUTH_RATE_LIMIT_WINDOW_MS: 600000
AUTH_RATE_LIMIT_MAX: 500
AGENT_RATE_LIMIT_WINDOW_MS: 60000
AGENT_RATE_LIMIT_MAX: 1000
# Redis Configuration
REDIS_HOST: redis
REDIS_PORT: 6379

View File

@@ -6,7 +6,7 @@ WORKDIR /app
COPY package*.json ./
COPY frontend/ ./frontend/
RUN npm ci --ignore-scripts
RUN npm install --workspace=frontend --ignore-scripts
WORKDIR /app/frontend
@@ -15,18 +15,24 @@ EXPOSE 3000
CMD ["npm", "run", "dev", "--", "--host", "0.0.0.0", "--port", "3000"]
# Builder stage for production
FROM node:lts-alpine AS builder
# Use Debian-based Node for better QEMU ARM64 compatibility
FROM node:lts-slim AS builder
WORKDIR /app
WORKDIR /app/frontend
COPY package*.json ./
COPY frontend/package*.json ./frontend/
COPY frontend/package*.json ./
RUN npm ci --ignore-scripts
RUN echo "=== Starting npm install ===" &&\
npm cache clean --force &&\
rm -rf node_modules ~/.npm /root/.npm &&\
echo "=== npm install ===" &&\
npm install --ignore-scripts --legacy-peer-deps --no-audit --prefer-online --fetch-retries=3 --fetch-retry-mintimeout=20000 --fetch-retry-maxtimeout=120000 &&\
echo "=== npm install completed ===" &&\
npm cache clean --force
COPY frontend/ ./frontend/
COPY frontend/ ./
RUN npm run build:frontend
RUN npm run build
# Production stage
FROM nginxinc/nginx-unprivileged:alpine

View File

@@ -1,5 +1,6 @@
server {
listen 3000;
listen [::]:3000;
server_name localhost;
root /usr/share/nginx/html;
index index.html;
@@ -23,6 +24,9 @@ server {
add_header X-Content-Type-Options nosniff always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Prevent search engine indexing
add_header X-Robots-Tag "noindex, nofollow, noarchive, nosnippet" always;
# Bull Board proxy - must come before the root location to avoid conflicts
location /bullboard {

10
frontend/env.example Normal file
View File

@@ -0,0 +1,10 @@
# Frontend Environment Configuration
# This file is used by Vite during build and runtime
# API URL - Update this to match your backend server
VITE_API_URL=http://localhost:3001/api/v1
# Application Metadata
VITE_APP_NAME=PatchMon
VITE_APP_VERSION=1.3.4

View File

@@ -4,6 +4,18 @@
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/assets/favicon.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<!-- Prevent search engine indexing -->
<meta name="robots" content="noindex, nofollow, noarchive, nosnippet" />
<meta name="googlebot" content="noindex, nofollow, noarchive, nosnippet" />
<meta name="bingbot" content="noindex, nofollow, noarchive, nosnippet" />
<meta name="description" content="" />
<!-- Prevent caching -->
<meta http-equiv="cache-control" content="no-cache, no-store, must-revalidate" />
<meta http-equiv="pragma" content="no-cache" />
<meta http-equiv="expires" content="0" />
<title>PatchMon - Linux Patch Monitoring Dashboard</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

View File

@@ -1,7 +1,7 @@
{
"name": "patchmon-frontend",
"private": true,
"version": "1.3.0",
"version": "1.3.4",
"license": "AGPL-3.0",
"type": "module",
"scripts": {

View File

@@ -0,0 +1,26 @@
# Disallow all search engine crawlers
User-agent: *
Disallow: /
# Specifically target major search engines
User-agent: Googlebot
Disallow: /
User-agent: Bingbot
Disallow: /
User-agent: Slurp
Disallow: /
User-agent: DuckDuckBot
Disallow: /
User-agent: Baiduspider
Disallow: /
User-agent: YandexBot
Disallow: /
# Wildcard form for crawlers that treat "*" specially (redundant with "Disallow: /" above)
Disallow: /*

View File

@@ -11,6 +11,12 @@ const app = express();
const PORT = process.env.PORT || 3000;
const BACKEND_URL = process.env.BACKEND_URL || "http://backend:3001";
// Add security headers to prevent search engine indexing
app.use((_req, res, next) => {
res.setHeader("X-Robots-Tag", "noindex, nofollow, noarchive, nosnippet");
next();
});
// Enable CORS for API calls
app.use(
cors({

View File

@@ -7,6 +7,8 @@ import ProtectedRoute from "./components/ProtectedRoute";
import SettingsLayout from "./components/SettingsLayout";
import { isAuthPhase } from "./constants/authPhases";
import { AuthProvider, useAuth } from "./contexts/AuthContext";
import { ColorThemeProvider } from "./contexts/ColorThemeContext";
import { SettingsProvider } from "./contexts/SettingsContext";
import { ThemeProvider } from "./contexts/ThemeContext";
import { UpdateNotificationProvider } from "./contexts/UpdateNotificationContext";
@@ -27,6 +29,8 @@ const DockerContainerDetail = lazy(
);
const DockerImageDetail = lazy(() => import("./pages/docker/ImageDetail"));
const DockerHostDetail = lazy(() => import("./pages/docker/HostDetail"));
const DockerVolumeDetail = lazy(() => import("./pages/docker/VolumeDetail"));
const DockerNetworkDetail = lazy(() => import("./pages/docker/NetworkDetail"));
const AlertChannels = lazy(() => import("./pages/settings/AlertChannels"));
const Integrations = lazy(() => import("./pages/settings/Integrations"));
const Notifications = lazy(() => import("./pages/settings/Notifications"));
@@ -41,6 +45,7 @@ const SettingsServerConfig = lazy(
() => import("./pages/settings/SettingsServerConfig"),
);
const SettingsUsers = lazy(() => import("./pages/settings/SettingsUsers"));
const SettingsMetrics = lazy(() => import("./pages/settings/SettingsMetrics"));
// Loading fallback component
const LoadingFallback = () => (
@@ -192,6 +197,26 @@ function AppRoutes() {
</ProtectedRoute>
}
/>
<Route
path="/docker/volumes/:id"
element={
<ProtectedRoute requirePermission="can_view_reports">
<Layout>
<DockerVolumeDetail />
</Layout>
</ProtectedRoute>
}
/>
<Route
path="/docker/networks/:id"
element={
<ProtectedRoute requirePermission="can_view_reports">
<Layout>
<DockerNetworkDetail />
</Layout>
</ProtectedRoute>
}
/>
<Route
path="/users"
element={
@@ -388,6 +413,16 @@ function AppRoutes() {
</ProtectedRoute>
}
/>
<Route
path="/settings/metrics"
element={
<ProtectedRoute requirePermission="can_manage_settings">
<Layout>
<SettingsMetrics />
</Layout>
</ProtectedRoute>
}
/>
<Route
path="/options"
element={
@@ -415,15 +450,19 @@ function AppRoutes() {
function App() {
return (
<ThemeProvider>
<AuthProvider>
<UpdateNotificationProvider>
<LogoProvider>
<AppRoutes />
</LogoProvider>
</UpdateNotificationProvider>
</AuthProvider>
</ThemeProvider>
<AuthProvider>
<ThemeProvider>
<SettingsProvider>
<ColorThemeProvider>
<UpdateNotificationProvider>
<LogoProvider>
<AppRoutes />
</LogoProvider>
</UpdateNotificationProvider>
</ColorThemeProvider>
</SettingsProvider>
</ThemeProvider>
</AuthProvider>
);
}

View File

@@ -203,7 +203,7 @@ const InlineMultiGroupEdit = ({
className="mr-2 h-4 w-4 text-primary-600 focus:ring-primary-500 border-secondary-300 rounded"
/>
<span
className="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium text-white"
className="inline-flex items-center px-2.5 py-0.5 rounded-md text-xs font-medium text-white"
style={{ backgroundColor: option.color }}
>
{option.name}
@@ -250,7 +250,7 @@ const InlineMultiGroupEdit = ({
return (
<div className={`flex items-center gap-1 group ${className}`}>
{displayGroups.length === 0 ? (
<span className="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-secondary-100 text-secondary-800">
<span className="inline-flex items-center px-2.5 py-0.5 rounded-md text-xs font-medium bg-secondary-100 text-secondary-800">
Ungrouped
</span>
) : (
@@ -258,7 +258,7 @@ const InlineMultiGroupEdit = ({
{displayGroups.map((group) => (
<span
key={group.id}
className="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium text-white"
className="inline-flex items-center px-2.5 py-0.5 rounded-md text-xs font-medium text-white"
style={{ backgroundColor: group.color }}
>
{group.name}

View File

@@ -56,7 +56,7 @@ const InlineToggle = ({
type="button"
onClick={handleToggle}
disabled={isLoading}
className={`relative inline-flex h-5 w-9 items-center rounded-full transition-colors focus:outline-none focus:ring-2 focus:ring-primary-500 focus:ring-offset-2 disabled:opacity-50 disabled:cursor-not-allowed ${
className={`relative inline-flex h-5 w-9 items-center rounded-md transition-colors focus:outline-none focus:ring-2 focus:ring-primary-500 focus:ring-offset-2 disabled:opacity-50 disabled:cursor-not-allowed ${
value
? "bg-primary-600 dark:bg-primary-500"
: "bg-secondary-200 dark:bg-secondary-600"

View File

@@ -26,9 +26,10 @@ import {
Zap,
} from "lucide-react";
import { useCallback, useEffect, useRef, useState } from "react";
import { FaYoutube } from "react-icons/fa";
import { FaReddit, FaYoutube } from "react-icons/fa";
import { Link, useLocation, useNavigate } from "react-router-dom";
import { useAuth } from "../contexts/AuthContext";
import { useColorTheme } from "../contexts/ColorThemeContext";
import { useUpdateNotification } from "../contexts/UpdateNotificationContext";
import { dashboardAPI, versionAPI } from "../utils/api";
import DiscordIcon from "./DiscordIcon";
@@ -61,7 +62,9 @@ const Layout = ({ children }) => {
canManageSettings,
} = useAuth();
const { updateAvailable } = useUpdateNotification();
const { themeConfig } = useColorTheme();
const userMenuRef = useRef(null);
const bgCanvasRef = useRef(null);
// Fetch dashboard stats for the "Last updated" info
const {
@@ -117,7 +120,6 @@ const Layout = ({ children }) => {
name: "Automation",
href: "/automation",
icon: RefreshCw,
new: true,
});
if (canViewReports()) {
@@ -233,27 +235,165 @@ const Layout = ({ children }) => {
navigate("/hosts?action=add");
};
// Generate clean radial gradient background with subtle triangular accents for dark mode
useEffect(() => {
const generateBackground = () => {
if (
!bgCanvasRef.current ||
!themeConfig?.login ||
!document.documentElement.classList.contains("dark")
) {
return;
}
const canvas = bgCanvasRef.current;
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
const ctx = canvas.getContext("2d");
// Get theme colors - pick first color from each palette
const xColors = themeConfig.login.xColors || [
"#667eea",
"#764ba2",
"#f093fb",
"#4facfe",
];
const yColors = themeConfig.login.yColors || [
"#667eea",
"#764ba2",
"#f093fb",
"#4facfe",
];
// Use date for daily color rotation
const today = new Date();
const seed =
today.getFullYear() * 10000 + today.getMonth() * 100 + today.getDate();
const random = (s) => {
const x = Math.sin(s) * 10000;
return x - Math.floor(x);
};
const color1 = xColors[Math.floor(random(seed) * xColors.length)];
const color2 = yColors[Math.floor(random(seed + 1000) * yColors.length)];
// Create clean radial gradient from center to bottom-right corner
const gradient = ctx.createRadialGradient(
canvas.width * 0.3, // Center slightly left
canvas.height * 0.3, // Center slightly up
0,
canvas.width * 0.5, // Expand to cover screen
canvas.height * 0.5,
Math.max(canvas.width, canvas.height) * 1.2,
);
// Subtle gradient with darker corners
gradient.addColorStop(0, color1);
gradient.addColorStop(0.6, color2);
gradient.addColorStop(1, "#0a0a0a"); // Very dark edges
ctx.fillStyle = gradient;
ctx.fillRect(0, 0, canvas.width, canvas.height);
// Add subtle triangular shapes as accents across entire background
const cellSize = 180;
const cols = Math.ceil(canvas.width / cellSize) + 1;
const rows = Math.ceil(canvas.height / cellSize) + 1;
for (let y = 0; y < rows; y++) {
for (let x = 0; x < cols; x++) {
const idx = y * cols + x;
// Draw more triangles (less sparse)
if (random(seed + idx + 5000) > 0.4) {
const baseX =
x * cellSize + random(seed + idx * 3) * cellSize * 0.8;
const baseY =
y * cellSize + random(seed + idx * 3 + 100) * cellSize * 0.8;
const size = 50 + random(seed + idx * 4) * 100;
ctx.beginPath();
ctx.moveTo(baseX, baseY);
ctx.lineTo(baseX + size, baseY);
ctx.lineTo(baseX + size / 2, baseY - size * 0.866);
ctx.closePath();
// More visible white with slightly higher opacity
ctx.fillStyle = `rgba(255, 255, 255, ${0.05 + random(seed + idx * 5) * 0.08})`;
ctx.fill();
}
}
}
};
generateBackground();
// Regenerate on window resize or theme change
const handleResize = () => {
generateBackground();
};
window.addEventListener("resize", handleResize);
// Watch for dark mode changes
const observer = new MutationObserver((mutations) => {
mutations.forEach((mutation) => {
if (mutation.attributeName === "class") {
generateBackground();
}
});
});
observer.observe(document.documentElement, {
attributes: true,
attributeFilter: ["class"],
});
return () => {
window.removeEventListener("resize", handleResize);
observer.disconnect();
};
}, [themeConfig]);
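The daily color rotation above relies on a tiny date-seeded PRNG: `Math.sin` of an integer seed, scaled, with the fractional part taken. Not cryptographic, but deterministic for a given calendar day, which is all the background needs. Isolated for clarity:

```javascript
// Date-seeded pseudo-random in [0, 1): same seed, same value,
// so the gradient colors and triangle layout are stable all day.
const random = (s) => {
	const x = Math.sin(s) * 10000;
	return x - Math.floor(x);
};

function dailySeed(date) {
	return date.getFullYear() * 10000 + date.getMonth() * 100 + date.getDate();
}

const seed = dailySeed(new Date(2025, 10, 17)); // 17 Nov 2025
const a = random(seed);
const b = random(seed);
console.log(a >= 0 && a < 1); // stays in [0, 1)
console.log(a === b);         // deterministic per seed
```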
// Fetch GitHub stars count
const fetchGitHubStars = useCallback(async () => {
// Skip if already fetched recently
// Try to load cached star count first
const cachedStars = localStorage.getItem("githubStarsCount");
if (cachedStars) {
setGithubStars(parseInt(cachedStars, 10));
}
// Skip API call if fetched recently
const lastFetch = localStorage.getItem("githubStarsFetchTime");
const now = Date.now();
if (lastFetch && now - parseInt(lastFetch, 15) < 600000) {
// 15 minute cache
if (lastFetch && now - parseInt(lastFetch, 10) < 600000) {
// 10 minute cache
return;
}
try {
const response = await fetch(
"https://api.github.com/repos/9technologygroup/patchmon.net",
{
headers: {
Accept: "application/vnd.github.v3+json",
},
},
);
if (response.ok) {
const data = await response.json();
setGithubStars(data.stargazers_count);
localStorage.setItem(
"githubStarsCount",
data.stargazers_count.toString(),
);
localStorage.setItem("githubStarsFetchTime", now.toString());
} else if (response.status === 403 || response.status === 429) {
console.warn("GitHub API rate limit exceeded, using cached value");
}
} catch (error) {
console.error("Failed to fetch GitHub stars:", error);
// Keep using cached value if available
}
}, []);
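The star-count fetch above follows a cache-then-revalidate pattern: serve the cached value immediately, and hit the API only once the TTL lapses. The freshness check can be sketched with an injected store and clock so the logic is testable outside a browser (the `Map`-backed store stands in for `localStorage`):

```javascript
// TTL freshness check mirroring the localStorage logic above:
// refetch when nothing is cached or the last fetch is older than ttlMs.
function shouldRefetch(store, now, ttlMs) {
	const last = store.get("fetchTime");
	return !last || now - parseInt(last, 10) >= ttlMs;
}

const store = new Map();
const TTL = 600000; // 10 minutes, matching the code above

console.log(shouldRefetch(store, 1_000_000, TTL)); // true: nothing cached yet
store.set("fetchTime", "1000000");
console.log(shouldRefetch(store, 1_000_000 + 60_000, TTL));  // false: still fresh
console.log(shouldRefetch(store, 1_000_000 + 600_000, TTL)); // true: TTL expired
```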
@@ -303,11 +443,76 @@ const Layout = ({ children }) => {
fetchGitHubStars();
}, [fetchGitHubStars]);
// Set CSS custom properties for glassmorphism and theme colors in dark mode
useEffect(() => {
const updateThemeStyles = () => {
const isDark = document.documentElement.classList.contains("dark");
const root = document.documentElement;
if (isDark && themeConfig?.app) {
// Glass navigation bars - very light for pattern visibility
root.style.setProperty("--sidebar-bg", "rgba(0, 0, 0, 0.15)");
root.style.setProperty("--sidebar-blur", "blur(12px)");
root.style.setProperty("--topbar-bg", "rgba(0, 0, 0, 0.15)");
root.style.setProperty("--topbar-blur", "blur(12px)");
root.style.setProperty("--button-bg", "rgba(255, 255, 255, 0.15)");
root.style.setProperty("--button-blur", "blur(8px)");
// Theme-colored cards and buttons - darker to stand out
root.style.setProperty("--card-bg", themeConfig.app.cardBg);
root.style.setProperty("--card-border", themeConfig.app.cardBorder);
root.style.setProperty("--card-bg-hover", themeConfig.app.bgTertiary);
root.style.setProperty("--theme-button-bg", themeConfig.app.buttonBg);
root.style.setProperty(
"--theme-button-hover",
themeConfig.app.buttonHover,
);
} else {
// Light mode - standard colors
root.style.setProperty("--sidebar-bg", "white");
root.style.setProperty("--sidebar-blur", "none");
root.style.setProperty("--topbar-bg", "white");
root.style.setProperty("--topbar-blur", "none");
root.style.setProperty("--button-bg", "white");
root.style.setProperty("--button-blur", "none");
root.style.setProperty("--card-bg", "white");
root.style.setProperty("--card-border", "#e5e7eb");
root.style.setProperty("--card-bg-hover", "#f9fafb");
root.style.setProperty("--theme-button-bg", "#f3f4f6");
root.style.setProperty("--theme-button-hover", "#e5e7eb");
}
};
updateThemeStyles();
// Watch for dark mode changes
const observer = new MutationObserver(() => {
updateThemeStyles();
});
observer.observe(document.documentElement, {
attributes: true,
attributeFilter: ["class"],
});
return () => observer.disconnect();
}, [themeConfig]);
return (
<div className="min-h-screen bg-secondary-50">
<div className="min-h-screen bg-secondary-50 dark:bg-black relative overflow-hidden">
{/* Full-screen Trianglify Background (Dark Mode Only) */}
<canvas
ref={bgCanvasRef}
className="fixed inset-0 w-full h-full hidden dark:block"
style={{ zIndex: 0 }}
/>
<div
className="fixed inset-0 bg-gradient-to-br from-black/10 to-black/20 hidden dark:block pointer-events-none"
style={{ zIndex: 1 }}
/>
{/* Mobile sidebar */}
<div
className={`fixed inset-0 z-50 lg:hidden ${sidebarOpen ? "block" : "hidden"}`}
className={`fixed inset-0 z-[60] lg:hidden ${sidebarOpen ? "block" : "hidden"}`}
>
<button
type="button"
@@ -315,7 +520,14 @@ const Layout = ({ children }) => {
onClick={() => setSidebarOpen(false)}
aria-label="Close sidebar"
/>
<div className="relative flex w-full max-w-[280px] flex-col bg-white dark:bg-secondary-800 pb-4 pt-5 shadow-xl">
<div
className="relative flex w-full max-w-[280px] flex-col bg-white dark:border-r dark:border-white/10 pb-4 pt-5 shadow-xl"
style={{
backgroundColor: "var(--sidebar-bg, white)",
backdropFilter: "var(--sidebar-blur, none)",
WebkitBackdropFilter: "var(--sidebar-blur, none)",
}}
>
<div className="absolute right-0 top-0 -mr-12 pt-2">
<button
type="button"
@@ -534,17 +746,43 @@ const Layout = ({ children }) => {
{/* Desktop sidebar */}
<div
className={`hidden lg:fixed lg:inset-y-0 lg:z-50 lg:flex lg:flex-col transition-all duration-300 relative ${
className={`hidden lg:fixed lg:inset-y-0 z-[100] lg:flex lg:flex-col transition-all duration-300 relative ${
sidebarCollapsed ? "lg:w-16" : "lg:w-56"
} bg-white dark:bg-secondary-800`}
} bg-white dark:bg-transparent`}
>
{/* Collapse/Expand button on border */}
<button
type="button"
onClick={() => setSidebarCollapsed(!sidebarCollapsed)}
className="absolute top-5 -right-3 z-[200] flex items-center justify-center w-6 h-6 rounded-full bg-white border border-secondary-300 dark:border-white/20 shadow-md hover:bg-secondary-50 transition-colors"
style={{
backgroundColor: "var(--button-bg, white)",
backdropFilter: "var(--button-blur, none)",
WebkitBackdropFilter: "var(--button-blur, none)",
}}
title={sidebarCollapsed ? "Expand sidebar" : "Collapse sidebar"}
>
{sidebarCollapsed ? (
<ChevronRight className="h-4 w-4 text-secondary-700 dark:text-white" />
) : (
<ChevronLeft className="h-4 w-4 text-secondary-700 dark:text-white" />
)}
</button>
<div
className={`flex grow flex-col gap-y-5 overflow-y-auto border-r border-secondary-200 dark:border-secondary-600 bg-white dark:bg-secondary-800 ${
className={`flex grow flex-col gap-y-5 border-r border-secondary-200 dark:border-white/10 bg-white ${
sidebarCollapsed ? "px-2 shadow-lg" : "px-6"
}`}
style={{
backgroundColor: "var(--sidebar-bg, white)",
backdropFilter: "var(--sidebar-blur, none)",
WebkitBackdropFilter: "var(--sidebar-blur, none)",
overflowY: "auto",
overflowX: "visible",
}}
>
<div
className={`flex h-16 shrink-0 items-center border-b border-secondary-200 dark:border-secondary-600 ${
className={`flex h-16 shrink-0 items-center border-b border-secondary-200 dark:border-white/10 ${
sidebarCollapsed ? "justify-center" : "justify-center"
}`}
>
@@ -562,19 +800,6 @@ const Layout = ({ children }) => {
</Link>
)}
</div>
{/* Collapse/Expand button on border */}
<button
type="button"
onClick={() => setSidebarCollapsed(!sidebarCollapsed)}
className="absolute top-5 -right-3 z-10 flex items-center justify-center w-6 h-6 rounded-full bg-white dark:bg-secondary-800 border border-secondary-300 dark:border-secondary-600 shadow-md hover:bg-secondary-50 dark:hover:bg-secondary-700 transition-colors"
title={sidebarCollapsed ? "Expand sidebar" : "Collapse sidebar"}
>
{sidebarCollapsed ? (
<ChevronRight className="h-4 w-4 text-secondary-700 dark:text-white" />
) : (
<ChevronLeft className="h-4 w-4 text-secondary-700 dark:text-white" />
)}
</button>
<nav className="flex flex-1 flex-col">
<ul className="flex flex-1 flex-col gap-y-6">
{/* Show message for users with very limited permissions */}
@@ -930,12 +1155,19 @@ const Layout = ({ children }) => {
{/* Main content */}
<div
className={`flex flex-col min-h-screen transition-all duration-300 ${
className={`flex flex-col min-h-screen transition-all duration-300 relative z-10 ${
sidebarCollapsed ? "lg:pl-16" : "lg:pl-56"
}`}
>
{/* Top bar */}
<div className="sticky top-0 z-40 flex h-16 shrink-0 items-center gap-x-4 border-b border-secondary-200 dark:border-secondary-600 bg-white dark:bg-secondary-800 px-4 shadow-sm sm:gap-x-6 sm:px-6 lg:px-8">
<div
className="sticky top-0 z-[90] flex h-16 shrink-0 items-center gap-x-4 border-b border-secondary-200 dark:border-white/10 bg-white px-4 shadow-sm sm:gap-x-6 sm:px-6 lg:px-8"
style={{
backgroundColor: "var(--topbar-bg, white)",
backdropFilter: "var(--topbar-blur, none)",
WebkitBackdropFilter: "var(--topbar-blur, none)",
}}
>
<button
type="button"
className="-m-2.5 p-2.5 text-secondary-700 dark:text-white lg:hidden"
@@ -987,8 +1219,8 @@ const Layout = ({ children }) => {
>
<Github className="h-5 w-5 flex-shrink-0" />
{githubStars !== null && (
<div className="flex items-center gap-0.5">
<Star className="h-3 w-3 fill-current text-yellow-500" />
<div className="flex items-center gap-1">
<Star className="h-4 w-4 fill-current text-yellow-500" />
<span className="text-sm font-medium">{githubStars}</span>
</div>
)}
@@ -1059,7 +1291,17 @@ const Layout = ({ children }) => {
>
<FaYoutube className="h-5 w-5" />
</a>
{/* 7) Web */}
{/* 8) Reddit */}
<a
href="https://www.reddit.com/r/patchmon"
target="_blank"
rel="noopener noreferrer"
className="flex items-center justify-center w-10 h-10 bg-gray-50 dark:bg-gray-800 text-secondary-600 dark:text-secondary-300 hover:bg-gray-100 dark:hover:bg-gray-700 rounded-lg transition-colors shadow-sm"
title="Reddit Community"
>
<FaReddit className="h-5 w-5" />
</a>
{/* 9) Web */}
<a
href="https://patchmon.net"
target="_blank"
@@ -1074,7 +1316,7 @@ const Layout = ({ children }) => {
</div>
</div>
<main className="flex-1 py-6 bg-secondary-50 dark:bg-secondary-800">
<main className="flex-1 py-6 bg-secondary-50 dark:bg-transparent">
<div className="px-4 sm:px-6 lg:px-8">{children}</div>
</main>
</div>

View File

@@ -1,17 +1,8 @@
import { useQuery } from "@tanstack/react-query";
import { useEffect } from "react";
import { isAuthReady } from "../constants/authPhases";
import { useAuth } from "../contexts/AuthContext";
import { settingsAPI } from "../utils/api";
import { useSettings } from "../contexts/SettingsContext";
const LogoProvider = ({ children }) => {
const { authPhase, isAuthenticated } = useAuth();
const { data: settings } = useQuery({
queryKey: ["settings"],
queryFn: () => settingsAPI.get().then((res) => res.data),
enabled: isAuthReady(authPhase, isAuthenticated()),
});
const { settings } = useSettings();
useEffect(() => {
// Use custom favicon or fallback to default

View File

@@ -1,4 +1,5 @@
import {
BarChart3,
Bell,
ChevronLeft,
ChevronRight,
@@ -141,6 +142,11 @@ const SettingsLayout = ({ children }) => {
href: "/settings/server-version",
icon: Code,
},
{
name: "Metrics",
href: "/settings/metrics",
icon: BarChart3,
},
],
});
}

View File

@@ -1,5 +1,5 @@
import { useMutation, useQuery, useQueryClient } from "@tanstack/react-query";
import { AlertCircle, CheckCircle, Save, Shield } from "lucide-react";
import { AlertCircle, CheckCircle, Save, Shield, X } from "lucide-react";
import { useEffect, useId, useState } from "react";
import { permissionsAPI, settingsAPI } from "../../utils/api";
@@ -18,9 +18,60 @@ const AgentUpdatesTab = () => {
});
const [errors, setErrors] = useState({});
const [isDirty, setIsDirty] = useState(false);
const [toast, setToast] = useState(null);
const queryClient = useQueryClient();
// Auto-hide toast after 3 seconds
useEffect(() => {
if (toast) {
const timer = setTimeout(() => {
setToast(null);
}, 3000);
return () => clearTimeout(timer);
}
}, [toast]);
const showToast = (message, type = "success") => {
setToast({ message, type });
};
// Fallback clipboard copy for insecure (HTTP) contexts and older browsers, where the async Clipboard API is unavailable
const copyToClipboard = async (text) => {
// Try modern clipboard API first
if (navigator.clipboard && navigator.clipboard.writeText) {
try {
await navigator.clipboard.writeText(text);
return true;
} catch (err) {
console.warn("Clipboard API failed, using fallback:", err);
}
}
// Fallback for HTTP or unsupported browsers
try {
const textArea = document.createElement("textarea");
textArea.value = text;
textArea.style.position = "fixed";
textArea.style.left = "-999999px";
textArea.style.top = "-999999px";
document.body.appendChild(textArea);
textArea.focus();
textArea.select();
const successful = document.execCommand("copy");
document.body.removeChild(textArea);
if (successful) {
return true;
}
throw new Error("execCommand failed");
} catch (err) {
console.error("Fallback copy failed:", err);
throw err;
}
};
// Fetch current settings
const {
data: settings,
@@ -167,6 +218,53 @@ const AgentUpdatesTab = () => {
return (
<div className="space-y-6">
{/* Toast Notification */}
{toast && (
<div
className={`fixed top-4 right-4 z-50 max-w-md rounded-lg shadow-lg border-2 p-4 flex items-start space-x-3 animate-in slide-in-from-top-5 ${
toast.type === "success"
? "bg-green-50 dark:bg-green-900/90 border-green-500 dark:border-green-600"
: "bg-red-50 dark:bg-red-900/90 border-red-500 dark:border-red-600"
}`}
>
<div
className={`flex-shrink-0 rounded-full p-1 ${
toast.type === "success"
? "bg-green-100 dark:bg-green-800"
: "bg-red-100 dark:bg-red-800"
}`}
>
{toast.type === "success" ? (
<CheckCircle className="h-5 w-5 text-green-600 dark:text-green-400" />
) : (
<AlertCircle className="h-5 w-5 text-red-600 dark:text-red-400" />
)}
</div>
<div className="flex-1">
<p
className={`text-sm font-medium ${
toast.type === "success"
? "text-green-800 dark:text-green-100"
: "text-red-800 dark:text-red-100"
}`}
>
{toast.message}
</p>
</div>
<button
type="button"
onClick={() => setToast(null)}
className={`flex-shrink-0 rounded-lg p-1 transition-colors ${
toast.type === "success"
? "hover:bg-green-100 dark:hover:bg-green-800 text-green-600 dark:text-green-400"
: "hover:bg-red-100 dark:hover:bg-red-800 text-red-600 dark:text-red-400"
}`}
>
<X className="h-4 w-4" />
</button>
</div>
)}
{errors.general && (
<div className="bg-red-50 dark:bg-red-900 border border-red-200 dark:border-red-700 rounded-md p-4">
<div className="flex">
@@ -226,7 +324,7 @@ const AgentUpdatesTab = () => {
key={m}
type="button"
onClick={() => handleInputChange("updateInterval", m)}
className={`px-3 py-1.5 rounded-full text-xs font-medium border ${
className={`px-3 py-1.5 rounded-md text-xs font-medium border ${
formData.updateInterval === m
? "bg-primary-600 text-white border-primary-600"
: "bg-white dark:bg-secondary-700 text-secondary-700 dark:text-secondary-200 border-secondary-300 dark:border-secondary-600 hover:bg-secondary-50 dark:hover:bg-secondary-600"
@@ -458,19 +556,74 @@ const AgentUpdatesTab = () => {
<div className="mt-2 text-sm text-red-700 dark:text-red-300">
<p className="mb-3">To completely remove PatchMon from a host:</p>
{/* Go Agent Uninstall */}
{/* Agent Removal Script - Standard */}
<div className="mb-3">
<div className="space-y-2">
<div className="text-xs font-semibold text-red-700 dark:text-red-300 mb-1">
Standard Removal (preserves backups):
</div>
<div className="flex items-center gap-2">
<div className="bg-red-100 dark:bg-red-800 rounded p-2 font-mono text-xs flex-1">
sudo patchmon-agent uninstall
curl {formData.ignoreSslSelfSigned ? "-sk" : "-s"}{" "}
{window.location.origin}/api/v1/hosts/remove | sudo sh
</div>
<button
type="button"
onClick={() => {
navigator.clipboard.writeText(
"sudo patchmon-agent uninstall",
);
onClick={async () => {
try {
const curlFlags = formData.ignoreSslSelfSigned
? "-sk"
: "-s";
await copyToClipboard(
`curl ${curlFlags} ${window.location.origin}/api/v1/hosts/remove | sudo sh`,
);
showToast(
"Standard removal command copied!",
"success",
);
} catch (err) {
console.error("Failed to copy:", err);
showToast("Failed to copy to clipboard", "error");
}
}}
className="px-2 py-1 bg-red-200 dark:bg-red-700 text-red-800 dark:text-red-200 rounded text-xs hover:bg-red-300 dark:hover:bg-red-600 transition-colors"
>
Copy
</button>
</div>
</div>
</div>
{/* Agent Removal Script - Complete */}
<div className="mb-3">
<div className="space-y-2">
<div className="text-xs font-semibold text-red-700 dark:text-red-300 mb-1">
Complete Removal (includes backups):
</div>
<div className="flex items-center gap-2">
<div className="bg-red-100 dark:bg-red-800 rounded p-2 font-mono text-xs flex-1">
curl {formData.ignoreSslSelfSigned ? "-sk" : "-s"}{" "}
{window.location.origin}/api/v1/hosts/remove | sudo
REMOVE_BACKUPS=1 sh
</div>
<button
type="button"
onClick={async () => {
try {
const curlFlags = formData.ignoreSslSelfSigned
? "-sk"
: "-s";
await copyToClipboard(
`curl ${curlFlags} ${window.location.origin}/api/v1/hosts/remove | sudo REMOVE_BACKUPS=1 sh`,
);
showToast(
"Complete removal command copied!",
"success",
);
} catch (err) {
console.error("Failed to copy:", err);
showToast("Failed to copy to clipboard", "error");
}
}}
className="px-2 py-1 bg-red-200 dark:bg-red-700 text-red-800 dark:text-red-200 rounded text-xs hover:bg-red-300 dark:hover:bg-red-600 transition-colors"
>
@@ -478,16 +631,15 @@ const AgentUpdatesTab = () => {
</button>
</div>
<div className="text-xs text-red-600 dark:text-red-400">
Options: <code>--remove-config</code>,{" "}
<code>--remove-logs</code>, <code>--remove-all</code>,{" "}
<code>--force</code>
This removes: binaries, systemd/OpenRC services,
configuration files, logs, crontab entries, and backup files
</div>
</div>
</div>
<p className="mt-2 text-xs">
This command will remove all PatchMon files, configuration,
and crontab entries
<p className="mt-2 text-xs text-red-700 dark:text-red-400">
Standard removal preserves backup files for safety. Use
complete removal to delete everything.
</p>
</div>
</div>

View File

@@ -102,17 +102,28 @@ const BrandingTab = () => {
}
return (
<div className="space-y-6">
<div className="flex items-center mb-6">
<Image className="h-6 w-6 text-primary-600 mr-3" />
<h2 className="text-xl font-semibold text-secondary-900 dark:text-white">
Logo & Branding
</h2>
<div className="space-y-8">
{/* Header */}
<div>
<div className="flex items-center mb-6">
<Image className="h-6 w-6 text-primary-600 mr-3" />
<h2 className="text-xl font-semibold text-secondary-900 dark:text-white">
Logo & Branding
</h2>
</div>
<p className="text-sm text-secondary-500 dark:text-secondary-300 mb-6">
Customize your PatchMon installation with custom logos and a favicon.
These are displayed throughout the application.
</p>
</div>
{/* Logo Section Header */}
<div className="flex items-center mb-4">
<Image className="h-5 w-5 text-primary-600 mr-2" />
<h3 className="text-lg font-medium text-secondary-900 dark:text-white">
Logos
</h3>
</div>
<p className="text-sm text-secondary-500 dark:text-secondary-300 mb-6">
Customize your PatchMon installation with custom logos and a favicon.
These are displayed throughout the application.
</p>
<div className="grid grid-cols-1 md:grid-cols-3 gap-6">
{/* Dark Logo */}

View File

@@ -318,7 +318,7 @@ const RolePermissionsCard = ({
{role.role}
</h3>
{isBuiltInRole && (
<span className="ml-2 inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-primary-100 text-primary-800">
<span className="ml-2 inline-flex items-center px-2.5 py-0.5 rounded-md text-xs font-medium bg-primary-100 text-primary-800">
Built-in Role
</span>
)}

View File

@@ -54,7 +54,7 @@ const UsersTab = () => {
});
// Update user mutation
const _updateUserMutation = useMutation({
const updateUserMutation = useMutation({
mutationFn: ({ id, data }) => adminUsersAPI.update(id, data),
onSuccess: () => {
queryClient.invalidateQueries(["users"]);
@@ -179,7 +179,7 @@ const UsersTab = () => {
{user.username}
</div>
{user.id === currentUser?.id && (
<span className="ml-2 inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-blue-100 text-blue-800">
<span className="ml-2 inline-flex items-center px-2.5 py-0.5 rounded-md text-xs font-medium bg-blue-100 text-blue-800">
You
</span>
)}
@@ -195,7 +195,7 @@ const UsersTab = () => {
</td>
<td className="px-6 py-4 whitespace-nowrap">
<span
className={`inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium ${
className={`inline-flex items-center px-2.5 py-0.5 rounded-md text-xs font-medium ${
user.role === "admin"
? "bg-primary-100 text-primary-800"
: user.role === "host_manager"

View File

@@ -91,10 +91,29 @@ export const AuthProvider = ({ children }) => {
const login = async (username, password) => {
try {
// Get or generate device ID for TFA remember-me
let deviceId = localStorage.getItem("device_id");
if (!deviceId) {
if (typeof crypto !== "undefined" && crypto.randomUUID) {
deviceId = crypto.randomUUID();
} else {
deviceId = "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(
/[xy]/g,
(c) => {
const r = (Math.random() * 16) | 0;
const v = c === "x" ? r : (r & 0x3) | 0x8;
return v.toString(16);
},
);
}
localStorage.setItem("device_id", deviceId);
}
const response = await fetch("/api/v1/auth/login", {
method: "POST",
headers: {
"Content-Type": "application/json",
"X-Device-ID": deviceId,
},
body: JSON.stringify({ username, password }),
});
@@ -119,6 +138,9 @@ export const AuthProvider = ({ children }) => {
setPermissions(userPermissions);
}
// Note: User preferences will be automatically fetched by ColorThemeContext
// when the component mounts, so no need to invalidate here
return { success: true };
} else {
// Handle HTTP error responses (like 500 CORS errors)
@@ -205,8 +227,19 @@ export const AuthProvider = ({ children }) => {
const data = await response.json();
if (response.ok) {
// Validate that we received user data with expected fields
if (!data.user || !data.user.id) {
console.error("Invalid user data in response:", data);
return {
success: false,
error: "Invalid response from server",
};
}
// Update both state and localStorage atomically
setUser(data.user);
localStorage.setItem("user", JSON.stringify(data.user));
return { success: true, user: data.user };
} else {
// Handle HTTP error responses (like 500 CORS errors)
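
The `X-Device-ID` generation above uses the well-known `Math.random()` UUID v4 template for browsers where `crypto.randomUUID` is unavailable (it only exists in secure contexts). Pulled out as a standalone sketch for clarity (the function name is ours, not from the diff):

```javascript
// Same template as the diff: each "x" becomes a random hex nibble, and the
// "y" position is masked to 8, 9, a, or b (the RFC 4122 variant bits).
function uuidv4Fallback() {
  return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, (c) => {
    const r = (Math.random() * 16) | 0;
    const v = c === "x" ? r : (r & 0x3) | 0x8;
    return v.toString(16);
  });
}

// The version nibble is always 4 and the variant nibble is always 8-b,
// so every result matches the v4 shape.
const id = uuidv4Fallback();
```

Since the ID only serves TFA remember-me, the weaker entropy of `Math.random()` is an acceptable trade-off for the fallback path.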

View File

@@ -0,0 +1,251 @@
import { useQuery, useQueryClient } from "@tanstack/react-query";
import {
createContext,
useCallback,
useContext,
useEffect,
useMemo,
useRef,
useState,
} from "react";
import { userPreferencesAPI } from "../utils/api";
import { useAuth } from "./AuthContext";
const ColorThemeContext = createContext();
// Theme configurations matching the login backgrounds
export const THEME_PRESETS = {
default: {
name: "Normal Dark",
login: {
cellSize: 90,
variance: 0.85,
xColors: ["#0f172a", "#1e293b", "#334155", "#475569", "#64748b"],
yColors: ["#0f172a", "#1e293b", "#334155", "#475569", "#64748b"],
},
app: {
bgPrimary: "#1e293b",
bgSecondary: "#1e293b",
bgTertiary: "#334155",
borderColor: "#475569",
cardBg: "#1e293b",
cardBorder: "#334155",
buttonBg: "#334155",
buttonHover: "#475569",
},
},
cyber_blue: {
name: "Cyber Blue",
login: {
cellSize: 90,
variance: 0.85,
xColors: ["#0a0820", "#1a1f3a", "#2d3561", "#4a5584", "#667eaf"],
yColors: ["#0a0820", "#1a1f3a", "#2d3561", "#4a5584", "#667eaf"],
},
app: {
bgPrimary: "#0a0820",
bgSecondary: "#1a1f3a",
bgTertiary: "#2d3561",
borderColor: "#4a5584",
cardBg: "#1a1f3a",
cardBorder: "#2d3561",
buttonBg: "#2d3561",
buttonHover: "#4a5584",
},
},
neon_purple: {
name: "Neon Purple",
login: {
cellSize: 80,
variance: 0.9,
xColors: ["#0f0a1e", "#1e0f3e", "#4a0082", "#7209b7", "#b5179e"],
yColors: ["#0f0a1e", "#1e0f3e", "#4a0082", "#7209b7", "#b5179e"],
},
app: {
bgPrimary: "#0f0a1e",
bgSecondary: "#1e0f3e",
bgTertiary: "#4a0082",
borderColor: "#7209b7",
cardBg: "#1e0f3e",
cardBorder: "#4a0082",
buttonBg: "#4a0082",
buttonHover: "#7209b7",
},
},
matrix_green: {
name: "Matrix Green",
login: {
cellSize: 70,
variance: 0.7,
xColors: ["#001a00", "#003300", "#004d00", "#006600", "#00b300"],
yColors: ["#001a00", "#003300", "#004d00", "#006600", "#00b300"],
},
app: {
bgPrimary: "#001a00",
bgSecondary: "#003300",
bgTertiary: "#004d00",
borderColor: "#006600",
cardBg: "#003300",
cardBorder: "#004d00",
buttonBg: "#004d00",
buttonHover: "#006600",
},
},
ocean_blue: {
name: "Ocean Blue",
login: {
cellSize: 85,
variance: 0.8,
xColors: ["#001845", "#023e7d", "#0077b6", "#0096c7", "#00b4d8"],
yColors: ["#001845", "#023e7d", "#0077b6", "#0096c7", "#00b4d8"],
},
app: {
bgPrimary: "#001845",
bgSecondary: "#023e7d",
bgTertiary: "#0077b6",
borderColor: "#0096c7",
cardBg: "#023e7d",
cardBorder: "#0077b6",
buttonBg: "#0077b6",
buttonHover: "#0096c7",
},
},
sunset_gradient: {
name: "Sunset Gradient",
login: {
cellSize: 95,
variance: 0.75,
xColors: ["#1a0033", "#330066", "#4d0099", "#6600cc", "#9933ff"],
yColors: ["#1a0033", "#660033", "#990033", "#cc0066", "#ff0099"],
},
app: {
bgPrimary: "#1a0033",
bgSecondary: "#330066",
bgTertiary: "#4d0099",
borderColor: "#6600cc",
cardBg: "#330066",
cardBorder: "#4d0099",
buttonBg: "#4d0099",
buttonHover: "#6600cc",
},
},
};
export const ColorThemeProvider = ({ children }) => {
const queryClient = useQueryClient();
const lastThemeRef = useRef(null);
// Use reactive authentication state from AuthContext
// This ensures the query re-enables when user logs in
const { user } = useAuth();
const isAuthenticated = !!user;
// Source of truth: Database (via userPreferences query)
// localStorage is only used as a temporary cache until DB loads
// Only fetch if user is authenticated to avoid 401 errors on login page
const { data: userPreferences, isLoading: preferencesLoading } = useQuery({
queryKey: ["userPreferences"],
queryFn: () => userPreferencesAPI.get().then((res) => res.data),
enabled: isAuthenticated, // Only run query if user is authenticated
retry: 2,
staleTime: 5 * 60 * 1000, // 5 minutes
refetchOnWindowFocus: true, // Refetch when user returns to tab
});
// Get theme from database (source of truth), then the user object from login, then the localStorage cache, then the "cyber_blue" default
// Memoize to prevent recalculation on every render
const colorThemeValue = useMemo(() => {
return (
userPreferences?.color_theme ||
user?.color_theme ||
localStorage.getItem("colorTheme") ||
"cyber_blue"
);
}, [userPreferences?.color_theme, user?.color_theme]);
// Only update state if the theme value actually changed (prevent loops)
const [colorTheme, setColorTheme] = useState(() => colorThemeValue);
useEffect(() => {
// Only update if the value actually changed from what we last saw (prevent loops)
if (colorThemeValue !== lastThemeRef.current) {
setColorTheme(colorThemeValue);
lastThemeRef.current = colorThemeValue;
}
}, [colorThemeValue]);
const isLoading = preferencesLoading;
// Sync localStorage cache when DB data is available (for offline/performance)
useEffect(() => {
if (userPreferences?.color_theme) {
localStorage.setItem("colorTheme", userPreferences.color_theme);
}
}, [userPreferences?.color_theme]);
const updateColorTheme = useCallback(
async (theme) => {
// Store previous theme for potential revert
const previousTheme = colorTheme;
// Immediately update state for instant UI feedback
setColorTheme(theme);
lastThemeRef.current = theme;
// Also update localStorage cache
localStorage.setItem("colorTheme", theme);
// Save to backend (source of truth)
try {
await userPreferencesAPI.update({ color_theme: theme });
// Invalidate and refetch user preferences to ensure sync across tabs/browsers
await queryClient.invalidateQueries({ queryKey: ["userPreferences"] });
} catch (error) {
console.error("Failed to save color theme preference:", error);
// Revert to previous theme if save failed
setColorTheme(previousTheme);
lastThemeRef.current = previousTheme;
localStorage.setItem("colorTheme", previousTheme);
// Invalidate to refresh from DB
await queryClient.invalidateQueries({ queryKey: ["userPreferences"] });
// Show error to user if possible (could add toast notification here)
throw error; // Re-throw so calling code can handle it
}
},
[colorTheme, queryClient],
);
// Memoize themeConfig to prevent unnecessary re-renders
const themeConfig = useMemo(
() => THEME_PRESETS[colorTheme] || THEME_PRESETS.default,
[colorTheme],
);
// Memoize the context value to prevent unnecessary re-renders
const value = useMemo(
() => ({
colorTheme,
setColorTheme: updateColorTheme,
themeConfig,
isLoading,
}),
[colorTheme, themeConfig, isLoading, updateColorTheme],
);
return (
<ColorThemeContext.Provider value={value}>
{children}
</ColorThemeContext.Provider>
);
};
export const useColorTheme = () => {
const context = useContext(ColorThemeContext);
if (!context) {
throw new Error("useColorTheme must be used within ColorThemeProvider");
}
return context;
};
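
Layout and `index.css` consume these presets through CSS custom properties (`var(--card-bg, #1e293b)`, `var(--sidebar-bg, white)`, and so on). The wiring between a `THEME_PRESETS` entry's `app` block and those variables is not shown in this diff; a plausible mapping helper might look like the sketch below. The property-name mapping (in particular deriving `--card-bg-hover` and `--sidebar-bg`) is our assumption:

```javascript
// Hypothetical: flatten a THEME_PRESETS `app` block into the CSS custom
// properties referenced by the inline styles and the .dark overrides.
function themeToCssVariables(app) {
  return {
    "--card-bg": app.cardBg,
    "--card-border": app.cardBorder,
    "--card-bg-hover": app.buttonHover,   // assumed: no dedicated field exists
    "--theme-button-bg": app.buttonBg,
    "--theme-button-hover": app.buttonHover,
    "--sidebar-bg": app.bgSecondary,      // assumed
    "--topbar-bg": app.bgSecondary,       // assumed
  };
}

// In the browser these would be pushed onto document.documentElement, e.g.:
// for (const [k, v] of Object.entries(vars)) root.style.setProperty(k, v);
const vars = themeToCssVariables({
  bgPrimary: "#0a0820",
  bgSecondary: "#1a1f3a",
  cardBg: "#1a1f3a",
  cardBorder: "#2d3561",
  buttonBg: "#2d3561",
  buttonHover: "#4a5584",
});
```

Because every inline style supplies a fallback (`var(--sidebar-bg, white)`), the UI stays usable even before any variables are set.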

View File

@@ -0,0 +1,45 @@
import { useQuery } from "@tanstack/react-query";
import { createContext, useContext } from "react";
import { isAuthReady } from "../constants/authPhases";
import { settingsAPI } from "../utils/api";
import { useAuth } from "./AuthContext";
const SettingsContext = createContext();
export const useSettings = () => {
const context = useContext(SettingsContext);
if (!context) {
throw new Error("useSettings must be used within a SettingsProvider");
}
return context;
};
export const SettingsProvider = ({ children }) => {
const { authPhase, isAuthenticated } = useAuth();
const {
data: settings,
isLoading,
error,
refetch,
} = useQuery({
queryKey: ["settings"],
queryFn: () => settingsAPI.get().then((res) => res.data),
staleTime: 5 * 60 * 1000, // Settings stay fresh for 5 minutes
refetchOnWindowFocus: false,
enabled: isAuthReady(authPhase, isAuthenticated()),
});
const value = {
settings,
isLoading,
error,
refetch,
};
return (
<SettingsContext.Provider value={value}>
{children}
</SettingsContext.Provider>
);
};

View File

@@ -1,4 +1,7 @@
import { useQuery } from "@tanstack/react-query";
import { createContext, useContext, useEffect, useState } from "react";
import { userPreferencesAPI } from "../utils/api";
import { useAuth } from "./AuthContext";
const ThemeContext = createContext();
@@ -12,7 +15,7 @@ export const useTheme = () => {
export const ThemeProvider = ({ children }) => {
const [theme, setTheme] = useState(() => {
// Check localStorage first, then system preference
// Check localStorage first for immediate render
const savedTheme = localStorage.getItem("theme");
if (savedTheme) {
return savedTheme;
@@ -24,6 +27,30 @@ export const ThemeProvider = ({ children }) => {
return "light";
});
// Use reactive authentication state from AuthContext
// This ensures the query re-enables when user logs in
const { user } = useAuth();
const isAuthenticated = !!user;
// Fetch user preferences from backend (only if authenticated)
const { data: userPreferences } = useQuery({
queryKey: ["userPreferences"],
queryFn: () => userPreferencesAPI.get().then((res) => res.data),
enabled: isAuthenticated, // Only run query if user is authenticated
retry: 1,
staleTime: 5 * 60 * 1000, // 5 minutes
});
// Sync with user preferences from backend or user object from login
useEffect(() => {
const preferredTheme =
userPreferences?.theme_preference || user?.theme_preference;
if (preferredTheme) {
setTheme(preferredTheme);
localStorage.setItem("theme", preferredTheme);
}
}, [userPreferences, user?.theme_preference]);
useEffect(() => {
// Apply theme to document
if (theme === "dark") {
@@ -36,8 +63,17 @@ export const ThemeProvider = ({ children }) => {
localStorage.setItem("theme", theme);
}, [theme]);
const toggleTheme = () => {
setTheme((prevTheme) => (prevTheme === "light" ? "dark" : "light"));
const toggleTheme = async () => {
const newTheme = theme === "light" ? "dark" : "light";
setTheme(newTheme);
// Save to backend
try {
await userPreferencesAPI.update({ theme_preference: newTheme });
} catch (error) {
console.error("Failed to save theme preference:", error);
// Theme is already set locally, so user still sees the change
}
};
const value = {

View File

@@ -1,8 +1,5 @@
import { useQuery } from "@tanstack/react-query";
import { createContext, useContext, useState } from "react";
import { isAuthReady } from "../constants/authPhases";
import { settingsAPI } from "../utils/api";
import { useAuth } from "./AuthContext";
import { useSettings } from "./SettingsContext";
const UpdateNotificationContext = createContext();
@@ -18,17 +15,7 @@ export const useUpdateNotification = () => {
export const UpdateNotificationProvider = ({ children }) => {
const [dismissed, setDismissed] = useState(false);
const { authPhase, isAuthenticated } = useAuth();
// Ensure settings are loaded - but only after auth is fully ready
// This reads cached update info from backend (updated by scheduler)
const { data: settings, isLoading: settingsLoading } = useQuery({
queryKey: ["settings"],
queryFn: () => settingsAPI.get().then((res) => res.data),
staleTime: 5 * 60 * 1000, // Settings stay fresh for 5 minutes
refetchOnWindowFocus: false,
enabled: isAuthReady(authPhase, isAuthenticated()),
});
const { settings, isLoading: settingsLoading } = useSettings();
// Read cached update information from settings (no GitHub API calls)
// The backend scheduler updates this data periodically

View File

@@ -9,7 +9,7 @@
}
body {
@apply bg-secondary-50 dark:bg-secondary-800 text-secondary-900 dark:text-secondary-100 antialiased;
@apply bg-secondary-50 dark:bg-transparent text-secondary-900 dark:text-secondary-100 antialiased;
}
}
@@ -39,19 +39,46 @@
}
.btn-outline {
@apply btn border-secondary-300 dark:border-secondary-600 text-secondary-700 dark:text-secondary-200 bg-white dark:bg-secondary-800 hover:bg-secondary-50 dark:hover:bg-secondary-700 focus:ring-secondary-500;
@apply btn border-secondary-300 text-secondary-700 bg-white hover:bg-secondary-50 focus:ring-secondary-500;
}
.dark .btn-outline {
background-color: var(--theme-button-bg, #1e293b);
border-color: var(--card-border, #334155);
color: white;
}
.dark .btn-outline:hover {
background-color: var(--theme-button-hover, #334155);
}
.card {
@apply bg-white dark:bg-secondary-800 rounded-lg shadow-card dark:shadow-card-dark border border-secondary-200 dark:border-secondary-600;
@apply bg-white rounded-lg shadow-card border border-secondary-200;
}
.dark .card {
background-color: var(--card-bg, #1e293b);
border-color: var(--card-border, #334155);
box-shadow: 0 4px 6px -1px rgba(0, 0, 0, 0.3), 0 2px 4px -1px rgba(0, 0, 0, 0.2);
}
.card-hover {
@apply card hover:shadow-card-hover transition-shadow duration-150;
@apply card transition-all duration-150;
}
.dark .card-hover:hover {
background-color: var(--card-bg-hover, #334155);
box-shadow: 0 10px 15px -3px rgba(0, 0, 0, 0.4), 0 4px 6px -2px rgba(0, 0, 0, 0.3);
}
.input {
@apply block w-full px-3 py-2 border border-secondary-300 dark:border-secondary-600 rounded-md shadow-sm focus:outline-none focus:ring-primary-500 focus:border-primary-500 sm:text-sm bg-white dark:bg-secondary-800 text-secondary-900 dark:text-secondary-100;
@apply block w-full px-3 py-2 border border-secondary-300 rounded-md shadow-sm focus:outline-none focus:ring-primary-500 focus:border-primary-500 sm:text-sm bg-white text-secondary-900;
}
.dark .input {
background-color: var(--card-bg, #1e293b);
border-color: var(--card-border, #334155);
color: white;
}
.label {
@@ -84,6 +111,27 @@
}
@layer utilities {
/* Theme-aware backgrounds for general elements */
.dark .bg-secondary-800 {
background-color: var(--card-bg, #1e293b) !important;
}
.dark .bg-secondary-700 {
background-color: var(--card-bg-hover, #334155) !important;
}
.dark .bg-secondary-900 {
background-color: var(--theme-button-bg, #1e293b) !important;
}
.dark .border-secondary-600 {
border-color: var(--card-border, #334155) !important;
}
.dark .border-secondary-700 {
border-color: var(--theme-button-hover, #475569) !important;
}
.text-shadow {
text-shadow: 0 1px 2px rgba(0, 0, 0, 0.05);
}

View File

@@ -169,6 +169,20 @@ const Automation = () => {
year: "numeric",
});
}
if (schedule === "Daily at 4 AM") {
const now = new Date();
const tomorrow = new Date(now);
tomorrow.setDate(tomorrow.getDate() + 1);
tomorrow.setHours(4, 0, 0, 0);
return tomorrow.toLocaleString([], {
hour12: true,
hour: "numeric",
minute: "2-digit",
day: "numeric",
month: "numeric",
year: "numeric",
});
}
if (schedule === "Every hour") {
const now = new Date();
const nextHour = new Date(now);
@@ -182,6 +196,25 @@ const Automation = () => {
year: "numeric",
});
}
if (schedule === "Every 30 minutes") {
const now = new Date();
const nextRun = new Date(now);
// Round up to the next 30-minute mark
const minutes = now.getMinutes();
if (minutes < 30) {
nextRun.setMinutes(30, 0, 0);
} else {
nextRun.setHours(nextRun.getHours() + 1, 0, 0, 0);
}
return nextRun.toLocaleString([], {
hour12: true,
hour: "numeric",
minute: "2-digit",
day: "numeric",
month: "numeric",
year: "numeric",
});
}
return "Unknown";
};
@@ -209,12 +242,31 @@ const Automation = () => {
tomorrow.setHours(3, 0, 0, 0);
return tomorrow.getTime();
}
if (schedule === "Daily at 4 AM") {
const now = new Date();
const tomorrow = new Date(now);
tomorrow.setDate(tomorrow.getDate() + 1);
tomorrow.setHours(4, 0, 0, 0);
return tomorrow.getTime();
}
if (schedule === "Every hour") {
const now = new Date();
const nextHour = new Date(now);
nextHour.setHours(nextHour.getHours() + 1, 0, 0, 0);
return nextHour.getTime();
}
if (schedule === "Every 30 minutes") {
const now = new Date();
const nextRun = new Date(now);
// Round up to the next 30-minute mark
const minutes = now.getMinutes();
if (minutes < 30) {
nextRun.setMinutes(30, 0, 0);
} else {
nextRun.setHours(nextRun.getHours() + 1, 0, 0, 0);
}
return nextRun.getTime();
}
return Number.MAX_SAFE_INTEGER; // Unknown schedules go to bottom
};
@@ -269,8 +321,12 @@ const Automation = () => {
endpoint = "/automation/trigger/orphaned-repo-cleanup";
} else if (jobType === "orphaned-packages") {
endpoint = "/automation/trigger/orphaned-package-cleanup";
} else if (jobType === "docker-inventory") {
endpoint = "/automation/trigger/docker-inventory-cleanup";
} else if (jobType === "agent-collection") {
endpoint = "/automation/trigger/agent-collection";
} else if (jobType === "system-statistics") {
endpoint = "/automation/trigger/system-statistics";
}
const _response = await api.post(endpoint, data);
@@ -584,10 +640,18 @@ const Automation = () => {
automation.queue.includes("orphaned-package")
) {
triggerManualJob("orphaned-packages");
} else if (
automation.queue.includes("docker-inventory")
) {
triggerManualJob("docker-inventory");
} else if (
automation.queue.includes("agent-commands")
) {
triggerManualJob("agent-collection");
} else if (
automation.queue.includes("system-statistics")
) {
triggerManualJob("system-statistics");
}
}}
className="inline-flex items-center justify-center w-6 h-6 border border-transparent rounded text-white bg-green-600 hover:bg-green-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-green-500 transition-colors duration-200"


@@ -13,10 +13,12 @@ import {
} from "chart.js";
import {
AlertTriangle,
CheckCircle,
Folder,
GitBranch,
Package,
RefreshCw,
RotateCcw,
Server,
Settings,
Shield,
@@ -55,6 +57,8 @@ const Dashboard = () => {
const [cardPreferences, setCardPreferences] = useState([]);
const [packageTrendsPeriod, setPackageTrendsPeriod] = useState("1"); // days
const [packageTrendsHost, setPackageTrendsHost] = useState("all"); // host filter
const [systemStatsJobId, setSystemStatsJobId] = useState(null); // Track job ID for system statistics
const [isTriggeringJob, setIsTriggeringJob] = useState(false);
const navigate = useNavigate();
const { isDark } = useTheme();
const { user } = useAuth();
@@ -97,6 +101,20 @@ const Dashboard = () => {
navigate("/repositories");
};
const handleNeedsRebootClick = () => {
// Navigate to hosts with reboot filter, clearing any other filters
const newSearchParams = new URLSearchParams();
newSearchParams.set("reboot", "true");
navigate(`/hosts?${newSearchParams.toString()}`);
};
const handleUpToDateClick = () => {
// Navigate to hosts with upToDate filter, clearing any other filters
const newSearchParams = new URLSearchParams();
newSearchParams.set("filter", "upToDate");
navigate(`/hosts?${newSearchParams.toString()}`);
};
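Both click handlers above deliberately start from an empty `URLSearchParams` so any existing query filters are dropped; a minimal sketch of that pattern (hypothetical helper, not part of the diff):

```javascript
// Sketch of the navigation pattern used by the click handlers above:
// starting from an empty URLSearchParams clears any prior filters.
function hostsUrlWithFilter(key, value) {
  const params = new URLSearchParams(); // empty: discards other filters
  params.set(key, value);
  return `/hosts?${params.toString()}`;
}

console.log(hostsUrlWithFilter("reboot", "true")); // → "/hosts?reboot=true"
```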
const _handleOSDistributionClick = () => {
navigate("/hosts?showFilters=true", { replace: true });
};
@@ -306,9 +324,10 @@ const Dashboard = () => {
[
"totalHosts",
"hostsNeedingUpdates",
"upToDateHosts",
"totalOutdatedPackages",
"securityUpdates",
"upToDateHosts",
"hostsNeedingReboot",
"totalHostGroups",
"totalUsers",
"totalRepos",
@@ -339,7 +358,7 @@ const Dashboard = () => {
const getGroupClassName = (cardType) => {
switch (cardType) {
case "stats":
return "grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-4 gap-4";
return "grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-5 gap-4";
case "charts":
return "grid grid-cols-1 lg:grid-cols-3 gap-6";
case "widecharts":
@@ -354,23 +373,33 @@ const Dashboard = () => {
// Helper function to render a card by ID
const renderCard = (cardId) => {
switch (cardId) {
case "upToDateHosts":
case "hostsNeedingReboot":
return (
<div className="card p-4">
<button
type="button"
className="card p-4 cursor-pointer hover:shadow-card-hover dark:hover:shadow-card-hover-dark transition-shadow duration-200 w-full text-left"
onClick={handleNeedsRebootClick}
onKeyDown={(e) => {
if (e.key === "Enter" || e.key === " ") {
e.preventDefault();
handleNeedsRebootClick();
}
}}
>
<div className="flex items-center">
<div className="flex-shrink-0">
<TrendingUp className="h-5 w-5 text-success-600 mr-2" />
<RotateCcw className="h-5 w-5 text-orange-600 mr-2" />
</div>
<div className="w-0 flex-1">
<p className="text-sm text-secondary-500 dark:text-white">
Up to date
Needs Reboot
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{stats.cards.upToDateHosts}/{stats.cards.totalHosts}
{stats.cards.hostsNeedingReboot}
</p>
</div>
</div>
</div>
</button>
);
case "totalHosts":
return (
@@ -430,6 +459,35 @@ const Dashboard = () => {
</button>
);
case "upToDateHosts":
return (
<button
type="button"
className="card p-4 cursor-pointer hover:shadow-card-hover dark:hover:shadow-card-hover-dark transition-shadow duration-200 w-full text-left"
onClick={handleUpToDateClick}
onKeyDown={(e) => {
if (e.key === "Enter" || e.key === " ") {
e.preventDefault();
handleUpToDateClick();
}
}}
>
<div className="flex items-center">
<div className="flex-shrink-0">
<CheckCircle className="h-5 w-5 text-success-600 mr-2" />
</div>
<div className="w-0 flex-1">
<p className="text-sm text-secondary-500 dark:text-white">
Up to date
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{stats.cards.upToDateHosts}
</p>
</div>
</div>
</button>
);
case "totalOutdatedPackages":
return (
<button
@@ -772,56 +830,108 @@ const Dashboard = () => {
<h3 className="text-lg font-medium text-secondary-900 dark:text-white">
Package Trends Over Time
</h3>
<div className="flex items-center gap-3">
{/* Refresh Button */}
<button
type="button"
onClick={() => refetchPackageTrends()}
disabled={packageTrendsFetching}
className="px-3 py-1.5 text-sm border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-secondary-900 dark:text-white hover:bg-secondary-50 dark:hover:bg-secondary-700 focus:ring-2 focus:ring-primary-500 focus:border-primary-500 disabled:opacity-50 disabled:cursor-not-allowed flex items-center gap-2"
title="Refresh data"
>
<RefreshCw
className={`h-4 w-4 ${packageTrendsFetching ? "animate-spin" : ""}`}
/>
Refresh
</button>
<div className="flex flex-col gap-2">
<div className="flex items-center gap-3">
{/* Refresh Button */}
<button
type="button"
onClick={async () => {
if (packageTrendsHost === "all") {
// For "All Hosts", trigger system statistics collection job
setIsTriggeringJob(true);
try {
const response =
await dashboardAPI.triggerSystemStatistics();
if (response.data?.data?.jobId) {
setSystemStatsJobId(response.data.data.jobId);
// Wait a moment for the job to complete, then refetch
setTimeout(() => {
refetchPackageTrends();
}, 2000);
// Clear the job ID message after 2 seconds
setTimeout(() => {
setSystemStatsJobId(null);
}, 2000);
}
} catch (error) {
console.error(
"Failed to trigger system statistics:",
error,
);
// Still refetch data even if job trigger fails
refetchPackageTrends();
} finally {
setIsTriggeringJob(false);
}
} else {
// For individual host, just refetch the data
refetchPackageTrends();
}
}}
disabled={packageTrendsFetching || isTriggeringJob}
className="px-3 py-1.5 text-sm border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-secondary-900 dark:text-white hover:bg-secondary-50 dark:hover:bg-secondary-700 focus:ring-2 focus:ring-primary-500 focus:border-primary-500 disabled:opacity-50 disabled:cursor-not-allowed flex items-center gap-2"
title={
packageTrendsHost === "all"
? "Trigger system statistics collection"
: "Refresh data"
}
>
<RefreshCw
className={`h-4 w-4 ${
packageTrendsFetching || isTriggeringJob
? "animate-spin"
: ""
}`}
/>
Refresh
</button>
{/* Period Selector */}
<select
value={packageTrendsPeriod}
onChange={(e) => setPackageTrendsPeriod(e.target.value)}
className="px-3 py-1.5 text-sm border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-secondary-900 dark:text-white focus:ring-2 focus:ring-primary-500 focus:border-primary-500"
>
<option value="1">Last 24 hours</option>
<option value="7">Last 7 days</option>
<option value="30">Last 30 days</option>
<option value="90">Last 90 days</option>
<option value="180">Last 6 months</option>
<option value="365">Last year</option>
</select>
{/* Period Selector */}
<select
value={packageTrendsPeriod}
onChange={(e) => setPackageTrendsPeriod(e.target.value)}
className="px-3 py-1.5 text-sm border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-secondary-900 dark:text-white focus:ring-2 focus:ring-primary-500 focus:border-primary-500"
>
<option value="1">Last 24 hours</option>
<option value="7">Last 7 days</option>
<option value="30">Last 30 days</option>
<option value="90">Last 90 days</option>
<option value="180">Last 6 months</option>
<option value="365">Last year</option>
</select>
{/* Host Selector */}
<select
value={packageTrendsHost}
onChange={(e) => setPackageTrendsHost(e.target.value)}
className="px-3 py-1.5 text-sm border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-secondary-900 dark:text-white focus:ring-2 focus:ring-primary-500 focus:border-primary-500"
>
<option value="all">All Hosts</option>
{packageTrendsData?.hosts?.length > 0 ? (
packageTrendsData.hosts.map((host) => (
<option key={host.id} value={host.id}>
{host.friendly_name || host.hostname}
{/* Host Selector */}
<select
value={packageTrendsHost}
onChange={(e) => {
setPackageTrendsHost(e.target.value);
// Clear job ID message when host selection changes
setSystemStatsJobId(null);
}}
className="px-3 py-1.5 text-sm border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-secondary-900 dark:text-white focus:ring-2 focus:ring-primary-500 focus:border-primary-500"
>
<option value="all">All Hosts</option>
{packageTrendsData?.hosts?.length > 0 ? (
packageTrendsData.hosts.map((host) => (
<option key={host.id} value={host.id}>
{host.friendly_name || host.hostname}
</option>
))
) : (
<option disabled>
{packageTrendsLoading
? "Loading hosts..."
: "No hosts available"}
</option>
))
) : (
<option disabled>
{packageTrendsLoading
? "Loading hosts..."
: "No hosts available"}
</option>
)}
</select>
)}
</select>
</div>
{/* Job ID Message */}
{systemStatsJobId && packageTrendsHost === "all" && (
<p className="text-xs text-secondary-600 dark:text-secondary-400 ml-1">
Ran collection job #{systemStatsJobId}
</p>
)}
</div>
</div>
@@ -1167,13 +1277,40 @@ const Dashboard = () => {
title: (context) => {
const label = context[0].label;
// Handle "Now" label
if (label === "Now") {
return "Now";
}
// Handle empty or invalid labels
if (!label || typeof label !== "string") {
return "Unknown Date";
}
// Check if it's a full ISO timestamp (for "Last 24 hours")
// Format: "2025-01-15T14:30:00.000Z" or "2025-01-15T14:30:00.000"
if (label.includes("T") && label.includes(":")) {
try {
const date = new Date(label);
// Check if date is valid
if (Number.isNaN(date.getTime())) {
return label; // Return original label if date is invalid
}
// Format full ISO timestamp with date and time
return date.toLocaleDateString("en-US", {
month: "short",
day: "numeric",
hour: "numeric",
minute: "2-digit",
hour12: true,
});
} catch (_error) {
return label; // Return original label if parsing fails
}
}
// Format hourly labels (e.g., "2025-10-07T14" -> "Oct 7, 2:00 PM")
if (label.includes("T")) {
if (label.includes("T") && !label.includes(":")) {
try {
const date = new Date(`${label}:00:00`);
// Check if date is valid
@@ -1233,13 +1370,41 @@ const Dashboard = () => {
callback: function (value, _index, _ticks) {
const label = this.getLabelForValue(value);
// Handle "Now" label
if (label === "Now") {
return "Now";
}
// Handle empty or invalid labels
if (!label || typeof label !== "string") {
return "Unknown";
}
// Check if it's a full ISO timestamp (for "Last 24 hours")
// Format: "2025-01-15T14:30:00.000Z" or "2025-01-15T14:30:00.000"
if (label.includes("T") && label.includes(":")) {
try {
const date = new Date(label);
// Check if date is valid
if (Number.isNaN(date.getTime())) {
return label; // Return original label if date is invalid
}
// Extract hour from full ISO timestamp
const hourNum = date.getHours();
return hourNum === 0
? "12 AM"
: hourNum < 12
? `${hourNum} AM`
: hourNum === 12
? "12 PM"
: `${hourNum - 12} PM`;
} catch (_error) {
return label; // Return original label if parsing fails
}
}
// Format hourly labels (e.g., "2025-10-07T14" -> "2 PM")
if (label.includes("T")) {
if (label.includes("T") && !label.includes(":")) {
try {
const hour = label.split("T")[1];
const hourNum = parseInt(hour, 10);
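The nested ternary used for axis ticks above can be read as a small 12-hour formatter; a sketch (hypothetical name `formatHourLabel`, not part of the diff) with the same branch order:

```javascript
// Hypothetical helper mirroring the nested ternary above:
// format an hour in the range 0-23 as a 12-hour tick label.
function formatHourLabel(hourNum) {
  if (hourNum === 0) return "12 AM";      // midnight
  if (hourNum < 12) return `${hourNum} AM`;
  if (hourNum === 12) return "12 PM";     // noon
  return `${hourNum - 12} PM`;
}

console.log(formatHourLabel(14)); // → "2 PM"
```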

File diff suppressed because it is too large


@@ -12,6 +12,7 @@ import {
Copy,
Cpu,
Database,
Download,
Eye,
EyeOff,
HardDrive,
@@ -20,6 +21,7 @@ import {
Monitor,
Package,
RefreshCw,
RotateCcw,
Server,
Shield,
Terminal,
@@ -53,6 +55,8 @@ const HostDetail = () => {
const [historyLimit] = useState(10);
const [notes, setNotes] = useState("");
const [notesMessage, setNotesMessage] = useState({ text: "", type: "" });
const [updateMessage, setUpdateMessage] = useState({ text: "", jobId: "" });
const [reportMessage, setReportMessage] = useState({ text: "", jobId: "" });
const {
data: host,
@@ -187,6 +191,57 @@ const HostDetail = () => {
},
});
// Force agent update mutation
const forceAgentUpdateMutation = useMutation({
mutationFn: () =>
adminHostsAPI.forceAgentUpdate(hostId).then((res) => res.data),
onSuccess: (data) => {
queryClient.invalidateQueries(["host", hostId]);
queryClient.invalidateQueries(["hosts"]);
// Show success message with job ID
if (data?.jobId) {
setUpdateMessage({
text: "Update queued successfully",
jobId: data.jobId,
});
// Clear message after 5 seconds
setTimeout(() => setUpdateMessage({ text: "", jobId: "" }), 5000);
}
},
onError: (error) => {
setUpdateMessage({
text: error.response?.data?.error || "Failed to queue update",
jobId: "",
});
setTimeout(() => setUpdateMessage({ text: "", jobId: "" }), 5000);
},
});
// Fetch report mutation
const fetchReportMutation = useMutation({
mutationFn: () => adminHostsAPI.fetchReport(hostId).then((res) => res.data),
onSuccess: (data) => {
queryClient.invalidateQueries(["host", hostId]);
queryClient.invalidateQueries(["hosts"]);
// Show success message with job ID
if (data?.jobId) {
setReportMessage({
text: "Report fetch queued successfully",
jobId: data.jobId,
});
// Clear message after 5 seconds
setTimeout(() => setReportMessage({ text: "", jobId: "" }), 5000);
}
},
onError: (error) => {
setReportMessage({
text: error.response?.data?.error || "Failed to fetch report",
jobId: "",
});
setTimeout(() => setReportMessage({ text: "", jobId: "" }), 5000);
},
});
const updateFriendlyNameMutation = useMutation({
mutationFn: (friendlyName) =>
adminHostsAPI
@@ -227,6 +282,67 @@ const HostDetail = () => {
},
});
// Fetch integration status
const {
data: integrationsData,
isLoading: isLoadingIntegrations,
refetch: refetchIntegrations,
} = useQuery({
queryKey: ["host-integrations", hostId],
queryFn: () =>
adminHostsAPI.getIntegrations(hostId).then((res) => res.data),
staleTime: 30 * 1000, // 30 seconds
refetchOnWindowFocus: false,
enabled: !!hostId && activeTab === "integrations",
});
// Refetch integrations when WebSocket status changes (e.g., after agent restart)
useEffect(() => {
if (
wsStatus?.connected &&
activeTab === "integrations" &&
integrationsData?.data?.connected === false
) {
// Agent just reconnected, refetch integrations to get updated connection status
refetchIntegrations();
}
}, [
wsStatus?.connected,
activeTab,
integrationsData?.data?.connected,
refetchIntegrations,
]);
// Toggle integration mutation
const toggleIntegrationMutation = useMutation({
mutationFn: ({ integrationName, enabled }) =>
adminHostsAPI
.toggleIntegration(hostId, integrationName, enabled)
.then((res) => res.data),
onSuccess: (data) => {
// Optimistically update the cache with the new state
queryClient.setQueryData(["host-integrations", hostId], (oldData) => {
if (!oldData) return oldData;
return {
...oldData,
data: {
...oldData.data,
integrations: {
...oldData.data.integrations,
[data.data.integration]: data.data.enabled,
},
},
};
});
// Also invalidate to ensure we get fresh data
queryClient.invalidateQueries(["host-integrations", hostId]);
},
onError: () => {
// On error, refetch to get the actual state
refetchIntegrations();
},
});
const handleDeleteHost = async () => {
if (
window.confirm(
@@ -378,6 +494,12 @@ const HostDetail = () => {
{getStatusIcon(isStale, host.stats.outdated_packages > 0)}
{getStatusText(isStale, host.stats.outdated_packages > 0)}
</div>
{host.needs_reboot && (
<span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-md text-xs font-medium bg-orange-100 text-orange-800 dark:bg-orange-900 dark:text-orange-200">
<RotateCcw className="h-3 w-3" />
Reboot Required
</span>
)}
</div>
{/* Info row with uptime and last updated */}
<div className="flex items-center gap-4 text-sm text-secondary-600 dark:text-secondary-400">
@@ -399,20 +521,53 @@ const HostDetail = () => {
</div>
</div>
<div className="flex items-center gap-2">
<div>
<button
type="button"
onClick={() => fetchReportMutation.mutate()}
disabled={fetchReportMutation.isPending || !wsStatus?.connected}
className="btn-outline flex items-center gap-2 text-sm"
title={
!wsStatus?.connected
? "Agent is not connected"
: "Fetch package data from agent"
}
>
<Download
className={`h-4 w-4 ${
fetchReportMutation.isPending ? "animate-spin" : ""
}`}
/>
Fetch Report
</button>
{reportMessage.text && (
<p className="text-xs mt-1.5 text-secondary-600 dark:text-secondary-400">
{reportMessage.text}
{reportMessage.jobId && (
<span className="ml-1 font-mono text-secondary-500">
(Job #{reportMessage.jobId})
</span>
)}
</p>
)}
</div>
<button
type="button"
onClick={() => setShowCredentialsModal(true)}
className="btn-outline flex items-center gap-2 text-sm"
className={`btn-outline flex items-center text-sm ${
host?.machine_id ? "justify-center p-2" : "gap-2"
}`}
title="View credentials"
>
<Key className="h-4 w-4" />
Deploy Agent
{!host?.machine_id && <span>Deploy Agent</span>}
</button>
<button
type="button"
onClick={() => refetch()}
disabled={isFetching}
className="btn-outline flex items-center justify-center p-2 text-sm"
title="Refresh host data"
title="Refresh dashboard"
>
<RefreshCw
className={`h-4 w-4 ${isFetching ? "animate-spin" : ""}`}
@@ -579,6 +734,17 @@ const HostDetail = () => {
>
Notes
</button>
<button
type="button"
onClick={() => handleTabChange("integrations")}
className={`px-4 py-2 text-sm font-medium ${
activeTab === "integrations"
? "text-primary-600 dark:text-primary-400 border-b-2 border-primary-500"
: "text-secondary-500 dark:text-secondary-400 hover:text-secondary-700 dark:hover:text-secondary-300"
}`}
>
Integrations
</button>
</div>
<div className="p-4">
@@ -690,7 +856,7 @@ const HostDetail = () => {
toggleAutoUpdateMutation.mutate(!host.auto_update)
}
disabled={toggleAutoUpdateMutation.isPending}
className={`relative inline-flex h-5 w-9 items-center rounded-full transition-colors focus:outline-none focus:ring-2 focus:ring-primary-500 focus:ring-offset-2 ${
className={`relative inline-flex h-5 w-9 items-center rounded-md transition-colors focus:outline-none focus:ring-2 focus:ring-primary-500 focus:ring-offset-2 ${
host.auto_update
? "bg-primary-600 dark:bg-primary-500"
: "bg-secondary-200 dark:bg-secondary-600"
@@ -703,6 +869,49 @@ const HostDetail = () => {
/>
</button>
</div>
<div>
<p className="text-xs text-secondary-500 dark:text-secondary-300 mb-1.5">
Force Agent Version Upgrade
</p>
<button
type="button"
onClick={() => forceAgentUpdateMutation.mutate()}
disabled={
forceAgentUpdateMutation.isPending ||
!wsStatus?.connected
}
title={
!wsStatus?.connected
? "Agent is not connected"
: "Force agent to update now"
}
className="flex items-center gap-1.5 px-3 py-1.5 text-xs font-medium text-primary-600 dark:text-primary-400 bg-primary-50 dark:bg-primary-900/20 border border-primary-200 dark:border-primary-800 rounded-md hover:bg-primary-100 dark:hover:bg-primary-900/40 transition-colors disabled:opacity-50 disabled:cursor-not-allowed"
>
<RefreshCw
className={`h-3 w-3 ${
forceAgentUpdateMutation.isPending
? "animate-spin"
: ""
}`}
/>
{forceAgentUpdateMutation.isPending
? "Updating..."
: wsStatus?.connected
? "Update Now"
: "Offline"}
</button>
{updateMessage.text && (
<p className="text-xs mt-1.5 text-secondary-600 dark:text-secondary-400">
{updateMessage.text}
{updateMessage.jobId && (
<span className="ml-1 font-mono text-secondary-500">
(Job #{updateMessage.jobId})
</span>
)}
</p>
)}
</div>
</div>
</div>
)}
@@ -792,7 +1001,7 @@ const HostDetail = () => {
<Terminal className="h-4 w-4 text-primary-600 dark:text-primary-400" />
System Information
</h4>
<div className="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-4 gap-4">
<div className="grid grid-cols-1 md:grid-cols-3 gap-4">
{host.architecture && (
<div>
<p className="text-xs text-secondary-500 dark:text-secondary-300">
@@ -804,20 +1013,32 @@ const HostDetail = () => {
</div>
)}
{host.kernel_version && (
<div>
<p className="text-xs text-secondary-500 dark:text-secondary-300">
Kernel Version
</p>
<p className="font-medium text-secondary-900 dark:text-white font-mono text-sm">
{host.kernel_version}
</p>
{(host.kernel_version ||
host.installed_kernel_version) && (
<div className="flex flex-col gap-2">
{host.kernel_version && (
<div>
<p className="text-xs text-secondary-500 dark:text-secondary-300 mb-1">
Running Kernel
</p>
<p className="font-medium text-secondary-900 dark:text-white font-mono text-sm break-all">
{host.kernel_version}
</p>
</div>
)}
{host.installed_kernel_version && (
<div>
<p className="text-xs text-secondary-500 dark:text-secondary-300 mb-1">
Installed Kernel
</p>
<p className="font-medium text-secondary-900 dark:text-white font-mono text-sm break-all">
{host.installed_kernel_version}
</p>
</div>
)}
</div>
)}
{/* Empty div to push SELinux status to the right */}
<div></div>
{host.selinux_status && (
<div>
<p className="text-xs text-secondary-500 dark:text-secondary-300">
@@ -1316,6 +1537,101 @@ const HostDetail = () => {
{/* Agent Queue */}
{activeTab === "queue" && <AgentQueueTab hostId={hostId} />}
{/* Integrations */}
{activeTab === "integrations" && (
<div className="max-w-2xl space-y-4">
{isLoadingIntegrations ? (
<div className="flex items-center justify-center h-32">
<RefreshCw className="h-6 w-6 animate-spin text-primary-600" />
</div>
) : (
<div className="space-y-4">
{/* Docker Integration */}
<div className="bg-secondary-50 dark:bg-secondary-700 rounded-lg p-4 border border-secondary-200 dark:border-secondary-600">
<div className="flex items-start justify-between gap-4">
<div className="flex-1">
<div className="flex items-center gap-3 mb-2">
<Database className="h-5 w-5 text-primary-600 dark:text-primary-400" />
<h4 className="text-sm font-medium text-secondary-900 dark:text-white">
Docker
</h4>
{integrationsData?.data?.integrations?.docker ? (
<span className="inline-flex items-center px-2 py-0.5 rounded text-xs font-semibold bg-green-100 text-green-800 dark:bg-green-900 dark:text-green-200">
Enabled
</span>
) : (
<span className="inline-flex items-center px-2 py-0.5 rounded text-xs font-semibold bg-gray-200 text-gray-600 dark:bg-gray-600 dark:text-gray-400">
Disabled
</span>
)}
</div>
<p className="text-xs text-secondary-600 dark:text-secondary-300">
Monitor Docker containers, images, volumes, and
networks, and collect real-time container status
events.
</p>
</div>
<div className="flex-shrink-0">
<button
type="button"
onClick={() =>
toggleIntegrationMutation.mutate({
integrationName: "docker",
enabled:
!integrationsData?.data?.integrations?.docker,
})
}
disabled={
toggleIntegrationMutation.isPending ||
!wsStatus?.connected
}
title={
!wsStatus?.connected
? "Agent is not connected"
: integrationsData?.data?.integrations?.docker
? "Disable Docker integration"
: "Enable Docker integration"
}
className={`relative inline-flex h-5 w-9 items-center rounded-md transition-colors focus:outline-none focus:ring-2 focus:ring-primary-500 focus:ring-offset-2 ${
integrationsData?.data?.integrations?.docker
? "bg-primary-600 dark:bg-primary-500"
: "bg-secondary-200 dark:bg-secondary-600"
} ${
toggleIntegrationMutation.isPending ||
!integrationsData?.data?.connected
? "opacity-50 cursor-not-allowed"
: ""
}`}
>
<span
className={`inline-block h-3 w-3 transform rounded-full bg-white transition-transform ${
integrationsData?.data?.integrations?.docker
? "translate-x-5"
: "translate-x-1"
}`}
/>
</button>
</div>
</div>
{!wsStatus?.connected && (
<p className="text-xs text-warning-600 dark:text-warning-400 mt-2">
Agent must be connected via WebSocket to toggle
integrations
</p>
)}
{toggleIntegrationMutation.isPending && (
<p className="text-xs text-secondary-600 dark:text-secondary-400 mt-2">
Updating integration...
</p>
)}
</div>
{/* Future integrations can be added here with the same pattern */}
</div>
)}
</div>
)}
</div>
</div>
</div>
@@ -1348,10 +1664,8 @@ const CredentialsModal = ({ host, isOpen, onClose }) => {
const [showApiKey, setShowApiKey] = useState(false);
const [activeTab, setActiveTab] = useState("quick-install");
const [forceInstall, setForceInstall] = useState(false);
const [architecture, setArchitecture] = useState("amd64");
const apiIdInputId = useId();
const apiKeyInputId = useId();
const architectureSelectId = useId();
const { data: serverUrlData } = useQuery({
queryKey: ["serverUrl"],
@@ -1371,13 +1685,13 @@ const CredentialsModal = ({ host, isOpen, onClose }) => {
return settings?.ignore_ssl_self_signed ? "-sk" : "-s";
};
// Helper function to build installation URL with optional force flag and architecture
// Helper function to build installation URL with optional force flag
const getInstallUrl = () => {
const baseUrl = `${serverUrl}/api/v1/hosts/install`;
const params = new URLSearchParams();
if (forceInstall) params.append("force", "true");
params.append("arch", architecture);
return `${baseUrl}?${params.toString()}`;
if (forceInstall) {
return `${baseUrl}?force=true`;
}
return baseUrl;
};
const copyToClipboard = async (text) => {
@@ -1493,33 +1807,10 @@ const CredentialsModal = ({ host, isOpen, onClose }) => {
</p>
</div>
{/* Architecture Selection */}
<div className="mb-3">
<label
htmlFor={architectureSelectId}
className="block text-sm font-medium text-primary-800 dark:text-primary-200 mb-2"
>
Target Architecture
</label>
<select
id={architectureSelectId}
value={architecture}
onChange={(e) => setArchitecture(e.target.value)}
className="px-3 py-2 border border-primary-300 dark:border-primary-600 rounded-md bg-white dark:bg-secondary-800 text-sm text-secondary-900 dark:text-white focus:ring-primary-500 focus:border-primary-500"
>
<option value="amd64">AMD64 (x86_64) - Default</option>
<option value="386">386 (i386) - 32-bit</option>
<option value="arm64">ARM64 (aarch64) - ARM</option>
</select>
<p className="text-xs text-primary-600 dark:text-primary-400 mt-1">
Select the architecture of the target host
</p>
</div>
<div className="flex items-center gap-2">
<input
type="text"
value={`curl ${getCurlFlags()} ${getInstallUrl()} -H "X-API-ID: ${host.api_id}" -H "X-API-KEY: ${host.api_key}" | bash`}
value={`curl ${getCurlFlags()} ${getInstallUrl()} -H "X-API-ID: ${host.api_id}" -H "X-API-KEY: ${host.api_key}" | sh`}
readOnly
className="flex-1 px-3 py-2 border border-primary-300 dark:border-primary-600 rounded-md bg-white dark:bg-secondary-800 text-sm font-mono text-secondary-900 dark:text-white"
/>
@@ -1527,7 +1818,7 @@ const CredentialsModal = ({ host, isOpen, onClose }) => {
type="button"
onClick={() =>
copyToClipboard(
`curl ${getCurlFlags()} ${getInstallUrl()} -H "X-API-ID: ${host.api_id}" -H "X-API-KEY: ${host.api_key}" | bash`,
`curl ${getCurlFlags()} ${getInstallUrl()} -H "X-API-ID: ${host.api_id}" -H "X-API-KEY: ${host.api_key}" | sh`,
)
}
className="btn-primary flex items-center gap-1"
@@ -1537,270 +1828,6 @@ const CredentialsModal = ({ host, isOpen, onClose }) => {
</button>
</div>
</div>
<div className="bg-secondary-50 dark:bg-secondary-700 rounded-lg p-4">
<h4 className="text-sm font-medium text-secondary-900 dark:text-white mb-2">
Manual Installation
</h4>
<p className="text-sm text-secondary-600 dark:text-secondary-300 mb-3">
If you prefer to install manually, follow these steps:
</p>
<div className="space-y-3">
<div className="bg-white dark:bg-secondary-800 rounded-md p-3 border border-secondary-200 dark:border-secondary-600">
<h5 className="text-sm font-medium text-secondary-900 dark:text-white mb-2">
1. Create Configuration Directory
</h5>
<div className="flex items-center gap-2">
<input
type="text"
value="sudo mkdir -p /etc/patchmon"
readOnly
className="flex-1 px-3 py-2 border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-sm font-mono text-secondary-900 dark:text-white"
/>
<button
type="button"
onClick={() =>
copyToClipboard("sudo mkdir -p /etc/patchmon")
}
className="btn-secondary flex items-center gap-1"
>
<Copy className="h-4 w-4" />
Copy
</button>
</div>
</div>
<div className="bg-white dark:bg-secondary-800 rounded-md p-3 border border-secondary-200 dark:border-secondary-600">
<h5 className="text-sm font-medium text-secondary-900 dark:text-white mb-2">
2. Download and Install Agent Binary
</h5>
<div className="flex items-center gap-2">
<input
type="text"
value={`curl ${getCurlFlags()} -o /usr/local/bin/patchmon-agent ${serverUrl}/api/v1/hosts/agent/download?arch=${architecture} -H "X-API-ID: ${host.api_id}" -H "X-API-KEY: ${host.api_key}" && sudo chmod +x /usr/local/bin/patchmon-agent`}
readOnly
className="flex-1 px-3 py-2 border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-sm font-mono text-secondary-900 dark:text-white"
/>
<button
type="button"
onClick={() =>
copyToClipboard(
`curl ${getCurlFlags()} -o /usr/local/bin/patchmon-agent ${serverUrl}/api/v1/hosts/agent/download?arch=${architecture} -H "X-API-ID: ${host.api_id}" -H "X-API-KEY: ${host.api_key}" && sudo chmod +x /usr/local/bin/patchmon-agent`,
)
}
className="btn-secondary flex items-center gap-1"
>
<Copy className="h-4 w-4" />
Copy
</button>
</div>
</div>
<div className="bg-white dark:bg-secondary-800 rounded-md p-3 border border-secondary-200 dark:border-secondary-600">
<h5 className="text-sm font-medium text-secondary-900 dark:text-white mb-2">
3. Configure Credentials
</h5>
<div className="flex items-center gap-2">
<input
type="text"
value={`sudo /usr/local/bin/patchmon-agent config set-api "${host.api_id}" "${host.api_key}" "${serverUrl}"`}
readOnly
className="flex-1 px-3 py-2 border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-sm font-mono text-secondary-900 dark:text-white"
/>
<button
type="button"
onClick={() =>
copyToClipboard(
`sudo /usr/local/bin/patchmon-agent config set-api "${host.api_id}" "${host.api_key}" "${serverUrl}"`,
)
}
className="btn-secondary flex items-center gap-1"
>
<Copy className="h-4 w-4" />
Copy
</button>
</div>
</div>
<div className="bg-white dark:bg-secondary-800 rounded-md p-3 border border-secondary-200 dark:border-secondary-600">
<h5 className="text-sm font-medium text-secondary-900 dark:text-white mb-2">
4. Test Configuration
</h5>
<div className="flex items-center gap-2">
<input
type="text"
value="sudo /usr/local/bin/patchmon-agent ping"
readOnly
className="flex-1 px-3 py-2 border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-sm font-mono text-secondary-900 dark:text-white"
/>
<button
type="button"
onClick={() =>
copyToClipboard(
"sudo /usr/local/bin/patchmon-agent ping",
)
}
className="btn-secondary flex items-center gap-1"
>
<Copy className="h-4 w-4" />
Copy
</button>
</div>
</div>
<div className="bg-white dark:bg-secondary-800 rounded-md p-3 border border-secondary-200 dark:border-secondary-600">
<h5 className="text-sm font-medium text-secondary-900 dark:text-white mb-2">
5. Send Initial Data
</h5>
<div className="flex items-center gap-2">
<input
type="text"
value="sudo /usr/local/bin/patchmon-agent report"
readOnly
className="flex-1 px-3 py-2 border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-sm font-mono text-secondary-900 dark:text-white"
/>
<button
type="button"
onClick={() =>
copyToClipboard(
"sudo /usr/local/bin/patchmon-agent report",
)
}
className="btn-secondary flex items-center gap-1"
>
<Copy className="h-4 w-4" />
Copy
</button>
</div>
</div>
<div className="bg-white dark:bg-secondary-800 rounded-md p-3 border border-secondary-200 dark:border-secondary-600">
<h5 className="text-sm font-medium text-secondary-900 dark:text-white mb-2">
6. Create Systemd Service File
</h5>
<div className="flex items-center gap-2">
<input
type="text"
value={`sudo tee /etc/systemd/system/patchmon-agent.service > /dev/null << 'EOF'
[Unit]
Description=PatchMon Agent Service
After=network.target
Wants=network.target
[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/patchmon-agent serve
Restart=always
RestartSec=10
WorkingDirectory=/etc/patchmon
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=patchmon-agent
[Install]
WantedBy=multi-user.target
EOF`}
readOnly
className="flex-1 px-3 py-2 border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-sm font-mono text-secondary-900 dark:text-white"
/>
<button
type="button"
onClick={() =>
copyToClipboard(
`sudo tee /etc/systemd/system/patchmon-agent.service > /dev/null << 'EOF'
[Unit]
Description=PatchMon Agent Service
After=network.target
Wants=network.target
[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/patchmon-agent serve
Restart=always
RestartSec=10
WorkingDirectory=/etc/patchmon
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=patchmon-agent
[Install]
WantedBy=multi-user.target
EOF`,
)
}
className="btn-secondary flex items-center gap-1"
>
<Copy className="h-4 w-4" />
Copy
</button>
</div>
</div>
<div className="bg-white dark:bg-secondary-800 rounded-md p-3 border border-secondary-200 dark:border-secondary-600">
<h5 className="text-sm font-medium text-secondary-900 dark:text-white mb-2">
7. Enable and Start Service
</h5>
<div className="flex items-center gap-2">
<input
type="text"
value="sudo systemctl daemon-reload && sudo systemctl enable patchmon-agent && sudo systemctl start patchmon-agent"
readOnly
className="flex-1 px-3 py-2 border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-sm font-mono text-secondary-900 dark:text-white"
/>
<button
type="button"
onClick={() =>
copyToClipboard(
"sudo systemctl daemon-reload && sudo systemctl enable patchmon-agent && sudo systemctl start patchmon-agent",
)
}
className="btn-secondary flex items-center gap-1"
>
<Copy className="h-4 w-4" />
Copy
</button>
</div>
<p className="text-xs text-secondary-600 dark:text-secondary-400 mt-2">
This will start the agent service and establish a
WebSocket connection for real-time communication
</p>
</div>
<div className="bg-white dark:bg-secondary-800 rounded-md p-3 border border-secondary-200 dark:border-secondary-600">
<h5 className="text-sm font-medium text-secondary-900 dark:text-white mb-2">
8. Verify Service Status
</h5>
<div className="flex items-center gap-2">
<input
type="text"
value="sudo systemctl status patchmon-agent"
readOnly
className="flex-1 px-3 py-2 border border-secondary-300 dark:border-secondary-600 rounded-md bg-white dark:bg-secondary-800 text-sm font-mono text-secondary-900 dark:text-white"
/>
<button
type="button"
onClick={() =>
copyToClipboard("sudo systemctl status patchmon-agent")
}
className="btn-secondary flex items-center gap-1"
>
<Copy className="h-4 w-4" />
Copy
</button>
</div>
<p className="text-xs text-secondary-600 dark:text-secondary-400 mt-2">
Check that the service is running and the WebSocket
connection is established
</p>
</div>
</div>
</div>
</div>
)}
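Each numbered step above pairs a read-only command field with a Copy button that calls a `copyToClipboard` helper. A minimal sketch of such a helper, assuming the browser Clipboard API — the name and fallback behavior are illustrative, not the project's actual implementation:

```javascript
// Hypothetical sketch of a copyToClipboard helper like the one the Copy
// buttons call; the real project helper may differ.
async function copyToClipboard(text, clipboard = globalThis.navigator?.clipboard) {
  if (clipboard?.writeText) {
    await clipboard.writeText(text);
    return true;
  }
  // No Clipboard API available (e.g. insecure context): signal failure so
  // the caller can fall back to another mechanism.
  return false;
}
```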


@@ -470,9 +470,18 @@ const EditHostGroupModal = ({ group, onClose, onSubmit, isLoading }) => {
// Delete Confirmation Modal
const DeleteHostGroupModal = ({ group, onClose, onConfirm, isLoading }) => {
// Fetch hosts for this group
const { data: hostsData } = useQuery({
queryKey: ["hostGroupHosts", group?.id],
queryFn: () => hostGroupsAPI.getHosts(group.id).then((res) => res.data),
enabled: !!group && group._count?.hosts > 0,
});
const hosts = hostsData || [];
return (
<div className="fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center z-50">
<div className="bg-white dark:bg-secondary-800 rounded-lg p-6 w-full max-w-md">
<div className="bg-white dark:bg-secondary-800 rounded-lg p-6 w-full max-w-md max-h-[90vh] overflow-y-auto">
<div className="flex items-center gap-3 mb-4">
<div className="w-10 h-10 bg-danger-100 rounded-full flex items-center justify-center">
<AlertTriangle className="h-5 w-5 text-danger-600" />
@@ -494,12 +503,30 @@ const DeleteHostGroupModal = ({ group, onClose, onConfirm, isLoading }) => {
</p>
{group._count.hosts > 0 && (
<div className="mt-3 p-3 bg-warning-50 border border-warning-200 rounded-md">
<p className="text-sm text-warning-800">
<p className="text-sm text-warning-800 mb-2">
<strong>Warning:</strong> This group contains{" "}
{group._count.hosts} host
{group._count.hosts !== 1 ? "s" : ""}. You must move or remove
these hosts before deleting the group.
</p>
{hosts.length > 0 && (
<div className="mt-2">
<p className="text-xs font-medium text-warning-900 mb-1">
Hosts in this group:
</p>
<div className="max-h-32 overflow-y-auto bg-warning-100 rounded p-2">
{hosts.map((host) => (
<div
key={host.id}
className="text-xs text-warning-900 flex items-center gap-1"
>
<Server className="h-3 w-3" />
{host.friendly_name || host.hostname}
</div>
))}
</div>
</div>
)}
</div>
)}
</div>
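The `enabled` option on the `useQuery` call above gates the hosts fetch so it only fires for a group that actually has hosts. The same predicate as a standalone function (hypothetical name, mirroring the logic in the diff):

```javascript
// Only fetch the host list when there is a group and its host count is > 0,
// matching the `enabled: !!group && group._count?.hosts > 0` option above.
function shouldFetchGroupHosts(group) {
  return !!group && (group._count?.hosts ?? 0) > 0;
}
```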


@@ -16,6 +16,7 @@ import {
GripVertical,
Plus,
RefreshCw,
RotateCcw,
Search,
Server,
Square,
@@ -247,6 +248,7 @@ const Hosts = () => {
const showFiltersParam = searchParams.get("showFilters");
const osFilterParam = searchParams.get("osFilter");
const groupParam = searchParams.get("group");
const rebootParam = searchParams.get("reboot");
if (filter === "needsUpdates") {
setShowFilters(true);
@@ -331,10 +333,11 @@ const Hosts = () => {
},
{ id: "ws_status", label: "Connection", visible: true, order: 9 },
{ id: "status", label: "Status", visible: true, order: 10 },
{ id: "updates", label: "Updates", visible: true, order: 11 },
{ id: "notes", label: "Notes", visible: false, order: 12 },
{ id: "last_update", label: "Last Update", visible: true, order: 13 },
{ id: "actions", label: "Actions", visible: true, order: 14 },
{ id: "needs_reboot", label: "Reboot", visible: true, order: 11 },
{ id: "updates", label: "Updates", visible: true, order: 12 },
{ id: "notes", label: "Notes", visible: false, order: 13 },
{ id: "last_update", label: "Last Update", visible: true, order: 14 },
{ id: "actions", label: "Actions", visible: true, order: 15 },
];
const saved = localStorage.getItem("hosts-column-config");
@@ -356,8 +359,25 @@ const Hosts = () => {
localStorage.removeItem("hosts-column-config");
return defaultConfig;
} else {
// Ensure ws_status column is visible in saved config
const updatedConfig = savedConfig.map((col) =>
// Merge saved config with defaults to handle new columns
// This preserves user's visibility preferences while adding new columns
const mergedConfig = defaultConfig.map((defaultCol) => {
const savedCol = savedConfig.find(
(col) => col.id === defaultCol.id,
);
if (savedCol) {
// Use saved visibility preference, but keep default order and label
return {
...defaultCol,
visible: savedCol.visible,
};
}
// New column not in saved config, use default
return defaultCol;
});
// Ensure ws_status column is visible
const updatedConfig = mergedConfig.map((col) =>
col.id === "ws_status" ? { ...col, visible: true } : col,
);
return updatedConfig;
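The merge step above can be isolated as a pure function: saved visibility preferences win, order and label always come from the defaults, and columns missing from the saved config (such as the new `needs_reboot` column) fall back to their default entry.

```javascript
// Standalone sketch of the column-config merge in the diff above.
function mergeColumnConfig(defaultConfig, savedConfig) {
  return defaultConfig.map((defaultCol) => {
    const savedCol = savedConfig.find((col) => col.id === defaultCol.id);
    // Keep the user's visibility choice, but take order/label from defaults;
    // columns absent from the saved config use the default entry unchanged.
    return savedCol ? { ...defaultCol, visible: savedCol.visible } : defaultCol;
  });
}
```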
@@ -402,105 +422,71 @@ const Hosts = () => {
const token = localStorage.getItem("token");
if (!token) return;
// Fetch initial WebSocket status for all hosts
// Fetch initial WebSocket status for all hosts
const fetchInitialStatus = async () => {
const statusPromises = hosts
const apiIds = hosts
.filter((host) => host.api_id)
.map(async (host) => {
try {
const response = await fetch(`/api/v1/ws/status/${host.api_id}`, {
headers: {
Authorization: `Bearer ${token}`,
},
});
if (response.ok) {
const data = await response.json();
return { apiId: host.api_id, status: data.data };
}
} catch (_error) {
// Silently handle errors
}
return {
apiId: host.api_id,
status: { connected: false, secure: false },
};
});
.map((host) => host.api_id);
const results = await Promise.all(statusPromises);
const initialStatusMap = {};
results.forEach(({ apiId, status }) => {
initialStatusMap[apiId] = status;
});
if (apiIds.length === 0) return;
setWsStatusMap(initialStatusMap);
try {
const response = await fetch(
`/api/v1/ws/status?apiIds=${apiIds.join(",")}`,
{
headers: {
Authorization: `Bearer ${token}`,
},
},
);
if (response.ok) {
const result = await response.json();
setWsStatusMap(result.data);
}
} catch (_error) {
// Silently handle errors
}
};
fetchInitialStatus();
}, [hosts]);
// Subscribe to WebSocket status changes for all hosts via SSE
// Subscribe to WebSocket status changes for all hosts via polling (lightweight alternative to SSE)
useEffect(() => {
if (!hosts || hosts.length === 0) return;
const token = localStorage.getItem("token");
if (!token) return;
const eventSources = new Map();
let isMounted = true;
// Use polling instead of SSE to avoid connection pool issues
// Poll every 10 seconds instead of 19 persistent connections
const pollInterval = setInterval(() => {
const apiIds = hosts
.filter((host) => host.api_id)
.map((host) => host.api_id);
const connectHost = (apiId) => {
if (!isMounted || eventSources.has(apiId)) return;
if (apiIds.length === 0) return;
try {
const es = new EventSource(
`/api/v1/ws/status/${apiId}/stream?token=${encodeURIComponent(token)}`,
);
es.onmessage = (event) => {
try {
const data = JSON.parse(event.data);
if (isMounted) {
setWsStatusMap((prev) => {
const newMap = { ...prev, [apiId]: data };
return newMap;
});
}
} catch (_err) {
// Silently handle parse errors
fetch(`/api/v1/ws/status?apiIds=${apiIds.join(",")}`, {
headers: {
Authorization: `Bearer ${token}`,
},
})
.then((response) => response.json())
.then((result) => {
if (result.success && result.data) {
setWsStatusMap(result.data);
}
};
es.onerror = (_error) => {
console.log(`[SSE] Connection error for ${apiId}, retrying...`);
es?.close();
eventSources.delete(apiId);
if (isMounted) {
// Retry connection after 5 seconds with exponential backoff
setTimeout(() => connectHost(apiId), 5000);
}
};
eventSources.set(apiId, es);
} catch (_err) {
// Silently handle connection errors
}
};
// Connect to all hosts
for (const host of hosts) {
if (host.api_id) {
connectHost(host.api_id);
} else {
}
}
})
.catch(() => {
// Silently handle errors
});
}, 10000); // Poll every 10 seconds
// Cleanup function
return () => {
isMounted = false;
for (const es of eventSources.values()) {
es.close();
}
eventSources.clear();
clearInterval(pollInterval);
};
}, [hosts]);
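The polling loop above batches all hosts into a single status request instead of one connection per host. The request-building part, sketched on its own (endpoint path as shown in the diff):

```javascript
// Collect api_ids from hosts and build one batched status URL,
// or return null when there is nothing to poll.
function buildStatusRequest(hosts) {
  const apiIds = hosts.filter((h) => h.api_id).map((h) => h.api_id);
  if (apiIds.length === 0) return null;
  return `/api/v1/ws/status?apiIds=${apiIds.join(",")}`;
}
```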
@@ -565,12 +551,11 @@ const Hosts = () => {
"with new data:",
data.host,
);
// Ensure hostGroupId is set correctly
// Host already has host_group_memberships from backend
const updatedHost = {
...data.host,
hostGroupId: data.host.host_groups?.id || null,
};
console.log("Updated host with hostGroupId:", updatedHost);
console.log("Updated host in cache:", updatedHost);
return updatedHost;
}
return host;
@@ -688,11 +673,15 @@ const Hosts = () => {
host.os_type?.toLowerCase().includes(searchTerm.toLowerCase()) ||
host.notes?.toLowerCase().includes(searchTerm.toLowerCase());
// Group filter
// Group filter - handle multiple groups per host
const memberships = host.host_group_memberships || [];
const matchesGroup =
groupFilter === "all" ||
(groupFilter === "ungrouped" && !host.host_groups) ||
(groupFilter !== "ungrouped" && host.host_groups?.id === groupFilter);
(groupFilter === "ungrouped" && memberships.length === 0) ||
(groupFilter !== "ungrouped" &&
memberships.some(
(membership) => membership.host_groups?.id === groupFilter,
));
// Status filter
const matchesStatus =
@@ -704,8 +693,9 @@ const Hosts = () => {
osFilter === "all" ||
host.os_type?.toLowerCase() === osFilter.toLowerCase();
// URL filter for hosts needing updates, inactive hosts, up-to-date hosts, stale hosts, or offline hosts
// URL filter for hosts needing updates, inactive hosts, up-to-date hosts, stale hosts, offline hosts, or reboot required
const filter = searchParams.get("filter");
const rebootParam = searchParams.get("reboot");
const matchesUrlFilter =
(filter !== "needsUpdates" ||
(host.updatesCount && host.updatesCount > 0)) &&
@@ -713,7 +703,9 @@ const Hosts = () => {
(host.effectiveStatus || host.status) === "inactive") &&
(filter !== "upToDate" || (!host.isStale && host.updatesCount === 0)) &&
(filter !== "stale" || host.isStale) &&
(filter !== "offline" || wsStatusMap[host.api_id]?.connected !== true);
(filter !== "offline" ||
wsStatusMap[host.api_id]?.connected !== true) &&
(!rebootParam || host.needs_reboot === true);
// Hide stale filter
const matchesHideStale = !hideStale || !host.isStale;
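The reboot clause added to `matchesUrlFilter` above reads as a small predicate: with `?reboot=true` (or any truthy `reboot` param) in the URL, only hosts flagged `needs_reboot` pass.

```javascript
// Standalone sketch of the reboot URL-filter clause in the diff above.
function matchesRebootFilter(host, rebootParam) {
  return !rebootParam || host.needs_reboot === true;
}
```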
@@ -745,10 +737,30 @@ const Hosts = () => {
aValue = a.ip?.toLowerCase() || "zzz_no_ip";
bValue = b.ip?.toLowerCase() || "zzz_no_ip";
break;
case "group":
aValue = a.host_groups?.name || "zzz_ungrouped";
bValue = b.host_groups?.name || "zzz_ungrouped";
case "group": {
// Handle multiple groups per host - use first group alphabetically for sorting
const aGroups = a.host_group_memberships || [];
const bGroups = b.host_group_memberships || [];
if (aGroups.length === 0) {
aValue = "zzz_ungrouped";
} else {
const aGroupNames = aGroups
.map((m) => m.host_groups?.name || "")
.filter((name) => name)
.sort();
aValue = aGroupNames[0] || "zzz_ungrouped";
}
if (bGroups.length === 0) {
bValue = "zzz_ungrouped";
} else {
const bGroupNames = bGroups
.map((m) => m.host_groups?.name || "")
.filter((name) => name)
.sort();
bValue = bGroupNames[0] || "zzz_ungrouped";
}
break;
}
case "os":
aValue = a.os_type?.toLowerCase() || "zzz_unknown";
bValue = b.os_type?.toLowerCase() || "zzz_unknown";
@@ -769,6 +781,11 @@ const Hosts = () => {
aValue = a.updatesCount || 0;
bValue = b.updatesCount || 0;
break;
case "needs_reboot":
// Sort by boolean: false (0) comes before true (1)
aValue = a.needs_reboot ? 1 : 0;
bValue = b.needs_reboot ? 1 : 0;
break;
case "last_update":
aValue = new Date(a.last_update);
bValue = new Date(b.last_update);
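The `needs_reboot` sort case above maps the boolean flag to 0/1. As a standalone comparator (hypothetical helper name), an ascending sort puts hosts that do not need a reboot first:

```javascript
// Hosts that do not need a reboot (0) sort before those that do (1)
// on an ascending sort, matching the 0/1 mapping in the diff above.
function compareByNeedsReboot(a, b) {
  return (a.needs_reboot ? 1 : 0) - (b.needs_reboot ? 1 : 0);
}
```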
@@ -821,27 +838,46 @@ const Hosts = () => {
const groups = {};
filteredAndSortedHosts.forEach((host) => {
let groupKey;
switch (groupBy) {
case "group":
groupKey = host.host_groups?.name || "Ungrouped";
break;
case "status":
groupKey =
(host.effectiveStatus || host.status).charAt(0).toUpperCase() +
(host.effectiveStatus || host.status).slice(1);
break;
case "os":
groupKey = host.os_type || "Unknown";
break;
default:
groupKey = "All Hosts";
}
if (groupBy === "group") {
// Handle multiple groups per host
const memberships = host.host_group_memberships || [];
if (memberships.length === 0) {
// Host has no groups, add to "Ungrouped"
if (!groups.Ungrouped) {
groups.Ungrouped = [];
}
groups.Ungrouped.push(host);
} else {
// Host has one or more groups, add to each group
memberships.forEach((membership) => {
const groupName = membership.host_groups?.name || "Unknown";
if (!groups[groupName]) {
groups[groupName] = [];
}
groups[groupName].push(host);
});
}
} else {
// Other grouping types (status, os, etc.)
let groupKey;
switch (groupBy) {
case "status":
groupKey =
(host.effectiveStatus || host.status).charAt(0).toUpperCase() +
(host.effectiveStatus || host.status).slice(1);
break;
case "os":
groupKey = host.os_type || "Unknown";
break;
default:
groupKey = "All Hosts";
}
if (!groups[groupKey]) {
groups[groupKey] = [];
if (!groups[groupKey]) {
groups[groupKey] = [];
}
groups[groupKey].push(host);
}
groups[groupKey].push(host);
});
return groups;
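The grouping branch above can be sketched as a pure function: a host with multiple memberships appears under every group it belongs to, and a host with none lands in "Ungrouped".

```javascript
// Standalone sketch of the group-by-membership logic in the diff above.
function groupHostsByMembership(hosts) {
  const groups = {};
  for (const host of hosts) {
    const memberships = host.host_group_memberships || [];
    const names = memberships.length
      ? memberships.map((m) => m.host_groups?.name || "Unknown")
      : ["Ungrouped"];
    for (const name of names) {
      // Create the bucket on first use, then add the host to it.
      (groups[name] ||= []).push(host);
    }
  }
  return groups;
}
```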
@@ -909,10 +945,11 @@ const Hosts = () => {
},
{ id: "ws_status", label: "Connection", visible: true, order: 9 },
{ id: "status", label: "Status", visible: true, order: 10 },
{ id: "updates", label: "Updates", visible: true, order: 11 },
{ id: "notes", label: "Notes", visible: false, order: 12 },
{ id: "last_update", label: "Last Update", visible: true, order: 13 },
{ id: "actions", label: "Actions", visible: true, order: 14 },
{ id: "needs_reboot", label: "Reboot", visible: true, order: 11 },
{ id: "updates", label: "Updates", visible: true, order: 12 },
{ id: "notes", label: "Notes", visible: false, order: 13 },
{ id: "last_update", label: "Last Update", visible: true, order: 14 },
{ id: "actions", label: "Actions", visible: true, order: 15 },
];
updateColumnConfig(defaultConfig);
};
@@ -1034,7 +1071,7 @@ const Hosts = () => {
const wsStatus = wsStatusMap[host.api_id];
if (!wsStatus) {
return (
<span className="inline-flex items-center px-2 py-1 rounded-full text-xs font-medium bg-gray-100 text-gray-600 dark:bg-gray-700 dark:text-gray-400">
<span className="inline-flex items-center px-2 py-1 rounded-md text-xs font-medium bg-gray-100 text-gray-600 dark:bg-gray-700 dark:text-gray-400">
<div className="w-2 h-2 bg-gray-400 rounded-full mr-1.5"></div>
Unknown
</span>
@@ -1042,7 +1079,7 @@ const Hosts = () => {
}
return (
<span
className={`inline-flex items-center px-2 py-1 rounded-full text-xs font-medium ${
className={`inline-flex items-center px-2 py-1 rounded-md text-xs font-medium ${
wsStatus.connected
? "bg-green-100 text-green-800 dark:bg-green-900 dark:text-green-200"
: "bg-red-100 text-red-800 dark:bg-red-900 dark:text-red-200"
@@ -1069,6 +1106,22 @@ const Hosts = () => {
(host.effectiveStatus || host.status).slice(1)}
</div>
);
case "needs_reboot":
return (
<div className="flex justify-center">
{host.needs_reboot ? (
<span className="inline-flex items-center gap-1 px-2 py-1 rounded-md text-xs font-medium bg-orange-100 text-orange-800 dark:bg-orange-900 dark:text-orange-200">
<RotateCcw className="h-3 w-3" />
Required
</span>
) : (
<span className="inline-flex items-center gap-1 px-2 py-1 rounded-md text-xs font-medium bg-green-100 text-green-800 dark:bg-green-900 dark:text-green-200">
<CheckCircle className="h-3 w-3" />
No
</span>
)}
</div>
);
case "updates":
return (
<button
@@ -1141,9 +1194,10 @@ const Hosts = () => {
// Filter to show only up-to-date hosts
setStatusFilter("active");
setShowFilters(true);
// Use the upToDate URL filter
// Clear conflicting filters and set upToDate filter
const newSearchParams = new URLSearchParams(window.location.search);
newSearchParams.set("filter", "upToDate");
newSearchParams.delete("reboot"); // Clear reboot filter when switching to upToDate
navigate(`/hosts?${newSearchParams.toString()}`, { replace: true });
};
@@ -1151,9 +1205,10 @@ const Hosts = () => {
// Filter to show hosts needing updates (regardless of status)
setStatusFilter("all");
setShowFilters(true);
// We'll use the existing needsUpdates URL filter logic
// Clear conflicting filters and set needsUpdates filter
const newSearchParams = new URLSearchParams(window.location.search);
newSearchParams.set("filter", "needsUpdates");
newSearchParams.delete("reboot"); // Clear reboot filter when switching to needsUpdates
navigate(`/hosts?${newSearchParams.toString()}`, { replace: true });
};
@@ -1161,9 +1216,10 @@ const Hosts = () => {
// Filter to show offline hosts (not connected via WebSocket)
setStatusFilter("all");
setShowFilters(true);
// Use a new URL filter for connection status
// Clear conflicting filters and set offline filter
const newSearchParams = new URLSearchParams(window.location.search);
newSearchParams.set("filter", "offline");
newSearchParams.delete("reboot"); // Clear reboot filter when switching to offline
navigate(`/hosts?${newSearchParams.toString()}`, { replace: true });
};
@@ -1254,24 +1310,6 @@ const Hosts = () => {
</div>
</div>
</button>
<button
type="button"
className="card p-4 cursor-pointer hover:shadow-card-hover dark:hover:shadow-card-hover-dark transition-shadow duration-200 text-left w-full"
onClick={handleUpToDateClick}
>
<div className="flex items-center">
<CheckCircle className="h-5 w-5 text-success-600 mr-2" />
<div>
<p className="text-sm text-secondary-500 dark:text-white">
Up to Date
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{hosts?.filter((h) => !h.isStale && h.updatesCount === 0)
.length || 0}
</p>
</div>
</div>
</button>
<button
type="button"
className="card p-4 cursor-pointer hover:shadow-card-hover dark:hover:shadow-card-hover-dark transition-shadow duration-200 text-left w-full"
@@ -1289,6 +1327,28 @@ const Hosts = () => {
</div>
</div>
</button>
<button
type="button"
className="card p-4 cursor-pointer hover:shadow-card-hover dark:hover:shadow-card-hover-dark transition-shadow duration-200 text-left w-full"
onClick={() => {
const newSearchParams = new URLSearchParams();
newSearchParams.set("reboot", "true");
// Clear filter parameter when setting reboot filter
navigate(`/hosts?${newSearchParams.toString()}`, { replace: true });
}}
>
<div className="flex items-center">
<RotateCcw className="h-5 w-5 text-orange-600 mr-2" />
<div>
<p className="text-sm text-secondary-500 dark:text-white">
Needs Reboot
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{hosts?.filter((h) => h.needs_reboot === true).length || 0}
</p>
</div>
</div>
</button>
<button
type="button"
className="card p-4 cursor-pointer hover:shadow-card-hover dark:hover:shadow-card-hover-dark transition-shadow duration-200 text-left w-full"
@@ -1428,14 +1488,6 @@ const Hosts = () => {
<AlertTriangle className="h-4 w-4" />
Hide Stale
</button>
<button
type="button"
onClick={() => setShowAddModal(true)}
className="btn-primary flex items-center gap-2"
>
<Plus className="h-4 w-4" />
Add Host
</button>
</div>
</div>
@@ -1679,6 +1731,17 @@ const Hosts = () => {
{column.label}
{getSortIcon("updates")}
</button>
) : column.id === "needs_reboot" ? (
<button
type="button"
onClick={() =>
handleSort("needs_reboot")
}
className="flex items-center gap-2 hover:text-secondary-700"
>
{column.label}
{getSortIcon("needs_reboot")}
</button>
) : column.id === "last_update" ? (
<button
type="button"

File diff suppressed because it is too large


@@ -557,9 +557,18 @@ const EditHostGroupModal = ({ group, onClose, onSubmit, isLoading }) => {
// Delete Confirmation Modal
const DeleteHostGroupModal = ({ group, onClose, onConfirm, isLoading }) => {
// Fetch hosts for this group
const { data: hostsData } = useQuery({
queryKey: ["hostGroupHosts", group?.id],
queryFn: () => hostGroupsAPI.getHosts(group.id).then((res) => res.data),
enabled: !!group && group._count?.hosts > 0,
});
const hosts = hostsData || [];
return (
<div className="fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center z-50">
<div className="bg-white dark:bg-secondary-800 rounded-lg p-6 w-full max-w-md">
<div className="bg-white dark:bg-secondary-800 rounded-lg p-6 w-full max-w-md max-h-[90vh] overflow-y-auto">
<div className="flex items-center gap-3 mb-4">
<div className="w-10 h-10 bg-danger-100 rounded-full flex items-center justify-center">
<AlertTriangle className="h-5 w-5 text-danger-600" />
@@ -581,12 +590,30 @@ const DeleteHostGroupModal = ({ group, onClose, onConfirm, isLoading }) => {
</p>
{group._count.hosts > 0 && (
<div className="mt-3 p-3 bg-warning-50 border border-warning-200 rounded-md">
<p className="text-sm text-warning-800">
<p className="text-sm text-warning-800 mb-2">
<strong>Warning:</strong> This group contains{" "}
{group._count.hosts} host
{group._count.hosts !== 1 ? "s" : ""}. You must move or remove
these hosts before deleting the group.
</p>
{hosts.length > 0 && (
<div className="mt-2">
<p className="text-xs font-medium text-warning-900 mb-1">
Hosts in this group:
</p>
<div className="max-h-32 overflow-y-auto bg-warning-100 rounded p-2">
{hosts.map((host) => (
<div
key={host.id}
className="text-xs text-warning-900 flex items-center gap-1"
>
<Server className="h-3 w-3" />
{host.friendly_name || host.hostname}
</div>
))}
</div>
</div>
)}
</div>
)}
</div>


@@ -8,6 +8,7 @@ import {
Download,
Package,
RefreshCw,
RotateCcw,
Search,
Server,
Shield,
@@ -370,6 +371,9 @@ const PackageDetail = () => {
<th className="px-6 py-3 text-left text-xs font-medium text-secondary-500 dark:text-secondary-300 uppercase tracking-wider">
Last Updated
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-secondary-500 dark:text-secondary-300 uppercase tracking-wider">
Reboot Required
</th>
</tr>
</thead>
<tbody className="bg-white dark:bg-secondary-800 divide-y divide-secondary-200 dark:divide-secondary-600">
@@ -420,6 +424,18 @@ const PackageDetail = () => {
? formatRelativeTime(host.lastUpdate)
: "Never"}
</td>
<td className="px-6 py-4 whitespace-nowrap">
{host.needsReboot ? (
<span className="inline-flex items-center gap-1 px-2 py-1 rounded-md text-xs font-medium bg-orange-100 text-orange-800 dark:bg-orange-900 dark:text-orange-200">
<RotateCcw className="h-3 w-3" />
Required
</span>
) : (
<span className="text-sm text-secondary-500 dark:text-secondary-300">
No
</span>
)}
</td>
</tr>
))}
</tbody>
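The Last Updated cell above calls a `formatRelativeTime` helper. A minimal sketch of what such a helper might do — this is an assumption for illustration, not the project's actual implementation:

```javascript
// Hypothetical formatRelativeTime sketch; thresholds and wording assumed.
function formatRelativeTime(date, now = new Date()) {
  const seconds = Math.floor((now - new Date(date)) / 1000);
  if (seconds < 60) return "just now";
  const minutes = Math.floor(seconds / 60);
  if (minutes < 60) return `${minutes}m ago`;
  const hours = Math.floor(minutes / 60);
  if (hours < 24) return `${hours}h ago`;
  return `${Math.floor(hours / 24)}d ago`;
}
```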


@@ -539,7 +539,7 @@ const Packages = () => {
<Package className="h-5 w-5 text-primary-600 mr-2" />
<div>
<p className="text-sm text-secondary-500 dark:text-white">
Total Packages
Packages
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{totalPackagesCount}
@@ -553,7 +553,7 @@ const Packages = () => {
<Package className="h-5 w-5 text-blue-600 mr-2" />
<div>
<p className="text-sm text-secondary-500 dark:text-white">
Total Installations
Installations
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{totalInstallationsCount}
@@ -562,47 +562,72 @@ const Packages = () => {
</div>
</div>
<div className="card p-4 cursor-pointer hover:shadow-card-hover dark:hover:shadow-card-hover-dark transition-shadow duration-200">
<button
type="button"
onClick={() => {
setUpdateStatusFilter("needs-updates");
setCategoryFilter("all");
setHostFilter("all");
setSearchTerm("");
}}
className="card p-4 cursor-pointer hover:shadow-card-hover dark:hover:shadow-card-hover-dark transition-shadow duration-200 text-left w-full"
title="Click to filter packages that need updates"
>
<div className="flex items-center">
<Package className="h-5 w-5 text-warning-600 mr-2" />
<div>
<p className="text-sm text-secondary-500 dark:text-white">
Total Outdated Packages
Outdated Packages
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{outdatedPackagesCount}
</p>
</div>
</div>
</div>
</button>
<div className="card p-4 cursor-pointer hover:shadow-card-hover dark:hover:shadow-card-hover-dark transition-shadow duration-200">
<div className="flex items-center">
<Server className="h-5 w-5 text-warning-600 mr-2" />
<div>
<p className="text-sm text-secondary-500 dark:text-white">
Hosts Pending Updates
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{uniquePackageHostsCount}
</p>
</div>
</div>
</div>
<div className="card p-4 cursor-pointer hover:shadow-card-hover dark:hover:shadow-card-hover-dark transition-shadow duration-200">
<button
type="button"
onClick={() => {
setUpdateStatusFilter("security-updates");
setCategoryFilter("all");
setHostFilter("all");
setSearchTerm("");
}}
className="card p-4 cursor-pointer hover:shadow-card-hover dark:hover:shadow-card-hover-dark transition-shadow duration-200 text-left w-full"
title="Click to filter packages with security updates"
>
<div className="flex items-center">
<Shield className="h-5 w-5 text-danger-600 mr-2" />
<div>
<p className="text-sm text-secondary-500 dark:text-white">
Security Updates Across All Hosts
Security Packages
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{securityUpdatesCount}
</p>
</div>
</div>
</div>
</button>
<button
type="button"
onClick={() => navigate("/hosts?filter=needsUpdates")}
className="card p-4 cursor-pointer hover:shadow-card-hover dark:hover:shadow-card-hover-dark transition-shadow duration-200 text-left w-full"
title="Click to view hosts that need updates"
>
<div className="flex items-center">
<Server className="h-5 w-5 text-warning-600 mr-2" />
<div>
<p className="text-sm text-secondary-500 dark:text-white">
Outdated Hosts
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{uniquePackageHostsCount}
</p>
</div>
</div>
</button>
</div>
{/* Packages List */}
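Each stat-card click handler above sets one update-status filter and clears the rest. The shared reset shape, as a hypothetical pure helper:

```javascript
// Sketch of the filter state a stat card applies on click: one status
// filter set, every other filter cleared (helper name assumed).
function statCardFilterState(updateStatusFilter) {
  return {
    updateStatusFilter,
    categoryFilter: "all",
    hostFilter: "all",
    searchTerm: "",
  };
}
```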


@@ -25,6 +25,7 @@ import {
import { useEffect, useId, useState } from "react";
import { useAuth } from "../contexts/AuthContext";
import { THEME_PRESETS, useColorTheme } from "../contexts/ColorThemeContext";
import { useTheme } from "../contexts/ThemeContext";
import { isCorsError, tfaAPI } from "../utils/api";
@@ -38,6 +39,7 @@ const Profile = () => {
const confirmPasswordId = useId();
const { user, updateProfile, changePassword } = useAuth();
const { toggleTheme, isDark } = useTheme();
const { colorTheme, setColorTheme } = useColorTheme();
const [activeTab, setActiveTab] = useState("profile");
const [isLoading, setIsLoading] = useState(false);
const [message, setMessage] = useState({ type: "", text: "" });
@@ -78,8 +80,10 @@ const Profile = () => {
setIsLoading(true);
setMessage({ type: "", text: "" });
console.log("Submitting profile data:", profileData);
try {
const result = await updateProfile(profileData);
console.log("Profile update result:", result);
if (result.success) {
setMessage({ type: "success", text: "Profile updated successfully!" });
} else {
@@ -202,7 +206,7 @@ const Profile = () => {
</p>
<div className="mt-2">
<span
className={`inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium ${
className={`inline-flex items-center px-2.5 py-0.5 rounded-md text-xs font-medium ${
user?.role === "admin"
? "bg-primary-100 text-primary-800"
: user?.role === "host_manager"
@@ -396,7 +400,7 @@ const Profile = () => {
<button
type="button"
onClick={toggleTheme}
className={`relative inline-flex h-6 w-11 flex-shrink-0 cursor-pointer rounded-full border-2 border-transparent transition-colors duration-200 ease-in-out focus:outline-none focus:ring-2 focus:ring-primary-500 focus:ring-offset-2 ${
className={`relative inline-flex h-6 w-11 flex-shrink-0 cursor-pointer rounded-md border-2 border-transparent transition-colors duration-200 ease-in-out focus:outline-none focus:ring-2 focus:ring-primary-500 focus:ring-offset-2 ${
isDark ? "bg-primary-600" : "bg-secondary-200"
}`}
role="switch"
@@ -411,6 +415,68 @@ const Profile = () => {
</button>
</div>
</div>
{/* Color Theme Settings */}
<div className="mt-6 pt-6 border-t border-secondary-200 dark:border-secondary-600">
<h4 className="text-sm font-medium text-secondary-900 dark:text-white mb-2">
Color Theme
</h4>
<p className="text-xs text-secondary-500 dark:text-secondary-400 mb-4">
Choose your preferred color scheme for the application
</p>
<div className="grid grid-cols-2 md:grid-cols-3 gap-4">
{Object.entries(THEME_PRESETS).map(([themeKey, theme]) => {
const isSelected = colorTheme === themeKey;
const gradientColors = theme.login.xColors;
return (
<button
key={themeKey}
type="button"
onClick={() => setColorTheme(themeKey)}
className={`relative p-4 rounded-lg border-2 transition-all ${
isSelected
? "border-primary-500 ring-2 ring-primary-200 dark:ring-primary-800"
: "border-secondary-200 dark:border-secondary-600 hover:border-primary-300"
} cursor-pointer`}
>
{/* Theme Preview */}
<div
className="h-20 rounded-md mb-3 overflow-hidden"
style={{
background: `linear-gradient(135deg, ${gradientColors.join(", ")})`,
}}
/>
{/* Theme Name */}
<div className="text-sm font-medium text-secondary-900 dark:text-white mb-1">
{theme.name}
</div>
{/* Selected Indicator */}
{isSelected && (
<div className="absolute top-2 right-2 bg-primary-500 text-white rounded-full p-1">
<svg
className="w-4 h-4"
fill="currentColor"
viewBox="0 0 20 20"
aria-label="Selected theme"
>
<title>Selected</title>
<path
fillRule="evenodd"
d="M16.707 5.293a1 1 0 010 1.414l-8 8a1 1 0 01-1.414 0l-4-4a1 1 0 011.414-1.414L8 12.586l7.293-7.293a1 1 0 011.414 0z"
clipRule="evenodd"
/>
</svg>
</div>
)}
</button>
);
})}
</div>
</div>
</div>
<div className="flex justify-end">
@@ -564,6 +630,7 @@ const Profile = () => {
// TFA Tab Component
const TfaTab = () => {
const verificationTokenId = useId();
const disablePasswordId = useId();
const [setupStep, setSetupStep] = useState("status"); // 'status', 'setup', 'verify', 'backup-codes'
const [verificationToken, setVerificationToken] = useState("");
const [password, setPassword] = useState("");
@@ -1278,12 +1345,12 @@ const SessionsTab = () => {
{session.device_info?.os}
</h4>
{session.is_current_session && (
<span className="inline-flex items-center px-2 py-1 rounded-full text-xs font-medium bg-primary-100 text-primary-800 dark:bg-primary-900 dark:text-primary-200">
<span className="inline-flex items-center px-2 py-1 rounded-md text-xs font-medium bg-primary-100 text-primary-800 dark:bg-primary-900 dark:text-primary-200">
Current Session
</span>
)}
{session.tfa_remember_me && (
<span className="inline-flex items-center px-2 py-1 rounded-full text-xs font-medium bg-success-100 text-success-800 dark:bg-success-900 dark:text-success-200">
<span className="inline-flex items-center px-2 py-1 rounded-md text-xs font-medium bg-success-100 text-success-800 dark:bg-success-900 dark:text-success-200">
Remembered
</span>
)}


@@ -237,8 +237,14 @@ const Repositories = () => {
// Handle special cases
if (sortField === "security") {
aValue = a.isSecure ? "Secure" : "Insecure";
bValue = b.isSecure ? "Secure" : "Insecure";
// Use the same logic as filtering to determine isSecure
const aIsSecure =
a.isSecure !== undefined ? a.isSecure : a.url.startsWith("https://");
const bIsSecure =
b.isSecure !== undefined ? b.isSecure : b.url.startsWith("https://");
// Sort by boolean: true (Secure) comes before false (Insecure) when ascending
aValue = aIsSecure ? 1 : 0;
bValue = bIsSecure ? 1 : 0;
} else if (sortField === "status") {
aValue = a.is_active ? "Active" : "Inactive";
bValue = b.is_active ? "Active" : "Inactive";
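Taken out of the component, the secure-first comparison above can be sketched as a standalone helper; the `{ url, isSecure }` repo shape is assumed from the surrounding diff, not a confirmed API:

```javascript
// Minimal sketch of the secure-first sort above; repo shape { url, isSecure? }
// is an assumption drawn from the diff context.
const isSecure = (repo) =>
  repo.isSecure !== undefined ? repo.isSecure : repo.url.startsWith("https://");

const sortBySecurity = (repos, ascending = true) =>
  [...repos].sort((a, b) => {
    const av = isSecure(a) ? 1 : 0;
    const bv = isSecure(b) ? 1 : 0;
    // Secure (1) sorts before insecure (0) when ascending, as in the component
    return ascending ? bv - av : av - bv;
  });
```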
@@ -535,12 +541,12 @@ const Repositories = () => {
{visibleColumns.map((column) => (
<th
key={column.id}
className="px-4 py-2 text-center text-xs font-medium text-secondary-500 dark:text-secondary-300 uppercase tracking-wider"
className="px-4 py-2 text-left text-xs font-medium text-secondary-500 dark:text-secondary-300 uppercase tracking-wider"
>
<button
type="button"
onClick={() => handleSort(column.id)}
className="flex items-center gap-1 hover:text-secondary-700 dark:hover:text-secondary-200 transition-colors"
className="flex items-center justify-start gap-1 hover:text-secondary-700 dark:hover:text-secondary-200 transition-colors"
>
{column.label}
{getSortIcon(column.id)}
@@ -559,7 +565,7 @@ const Repositories = () => {
{visibleColumns.map((column) => (
<td
key={column.id}
className="px-4 py-2 whitespace-nowrap text-center"
className="px-4 py-2 whitespace-nowrap text-left"
>
{renderCellContent(column, repo)}
</td>
@@ -622,7 +628,7 @@ const Repositories = () => {
? repo.isSecure
: repo.url.startsWith("https://");
return (
<div className="flex items-center justify-center">
<div className="flex items-center justify-start">
{isSecure ? (
<div className="flex items-center gap-1 text-green-600">
<Lock className="h-4 w-4" />
@@ -651,14 +657,14 @@ const Repositories = () => {
);
case "hostCount":
return (
<div className="flex items-center justify-center gap-1 text-sm text-secondary-900 dark:text-white">
<div className="flex items-center justify-start gap-1 text-sm text-secondary-900 dark:text-white">
<Server className="h-4 w-4" />
<span>{repo.hostCount}</span>
</div>
);
case "actions":
return (
<div className="flex items-center justify-center">
<div className="flex items-center justify-start">
<button
type="button"
onClick={(e) => handleDeleteRepository(repo, e)}


@@ -6,6 +6,7 @@ import {
Database,
Globe,
Lock,
RotateCcw,
Search,
Server,
Shield,
@@ -556,6 +557,9 @@ const RepositoryDetail = () => {
<th className="px-6 py-3 text-left text-xs font-medium text-secondary-500 dark:text-secondary-300 uppercase tracking-wider">
Last Update
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-secondary-500 dark:text-secondary-300 uppercase tracking-wider">
Reboot Required
</th>
</tr>
</thead>
<tbody className="bg-white dark:bg-secondary-800 divide-y divide-secondary-200 dark:divide-secondary-600">
@@ -604,6 +608,18 @@ const RepositoryDetail = () => {
? formatRelativeTime(hostRepo.hosts.last_update)
: "Never"}
</td>
<td className="px-6 py-4 whitespace-nowrap">
{hostRepo.hosts.needs_reboot ? (
<span className="inline-flex items-center gap-1 px-2 py-1 rounded-md text-xs font-medium bg-orange-100 text-orange-800 dark:bg-orange-900 dark:text-orange-200">
<RotateCcw className="h-3 w-3" />
Required
</span>
) : (
<span className="text-sm text-secondary-500 dark:text-secondary-300">
No
</span>
)}
</td>
</tr>
))}
</tbody>


@@ -53,6 +53,7 @@ const Settings = () => {
});
const [errors, setErrors] = useState({});
const [isDirty, setIsDirty] = useState(false);
const [toast, setToast] = useState(null);
// Tab management
const [activeTab, setActiveTab] = useState("server");
@@ -60,6 +61,56 @@ const Settings = () => {
// Get update notification state
const { updateAvailable } = useUpdateNotification();
// Auto-hide toast after 3 seconds
useEffect(() => {
if (toast) {
const timer = setTimeout(() => {
setToast(null);
}, 3000);
return () => clearTimeout(timer);
}
}, [toast]);
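The effect above is the usual schedule-then-cleanup timer pattern; stripped of React it reduces to the following sketch (names are illustrative, not the component's actual code):

```javascript
// Schedule a dismiss callback and return a cancel function, mirroring the
// effect's setTimeout/clearTimeout cleanup above.
const scheduleDismiss = (dismiss, ms = 3000) => {
  const timer = setTimeout(dismiss, ms);
  return () => clearTimeout(timer); // cleanup, like the effect's return value
};
```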
const showToast = (message, type = "success") => {
setToast({ message, type });
};
// Fallback clipboard copy function for HTTP and older browsers
const copyToClipboard = async (text) => {
// Try modern clipboard API first
if (navigator.clipboard && navigator.clipboard.writeText) {
try {
await navigator.clipboard.writeText(text);
return true;
} catch (err) {
console.warn("Clipboard API failed, using fallback:", err);
}
}
// Fallback for HTTP or unsupported browsers
try {
const textArea = document.createElement("textarea");
textArea.value = text;
textArea.style.position = "fixed";
textArea.style.left = "-999999px";
textArea.style.top = "-999999px";
document.body.appendChild(textArea);
textArea.focus();
textArea.select();
const successful = document.execCommand("copy");
document.body.removeChild(textArea);
if (successful) {
return true;
}
throw new Error("execCommand failed");
} catch (err) {
console.error("Fallback copy failed:", err);
throw err;
}
};
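`copyToClipboard` above is an instance of a general try-the-modern-API-then-fall-back control flow; it can be sketched generically (`withFallback` is an illustrative name, not part of the codebase):

```javascript
// Try the primary path (e.g. navigator.clipboard.writeText) and fall back to
// a secondary path (e.g. the execCommand shim) if it throws or rejects.
const withFallback = async (primary, fallback) => {
  try {
    return await primary();
  } catch (_err) {
    return fallback();
  }
};
```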
// Tab configuration
const tabs = [
{ id: "server", name: "Server Configuration", icon: Server },
@@ -120,7 +171,7 @@ const Settings = () => {
});
// Helper function to get curl flags based on settings
const _getCurlFlags = () => {
const getCurlFlags = () => {
return settings?.ignore_ssl_self_signed ? "-sk" : "-s";
};
@@ -442,6 +493,53 @@ const Settings = () => {
return (
<div className="max-w-4xl mx-auto p-6">
{/* Toast Notification */}
{toast && (
<div
className={`fixed top-4 right-4 z-50 max-w-md rounded-lg shadow-lg border-2 p-4 flex items-start space-x-3 animate-in slide-in-from-top-5 ${
toast.type === "success"
? "bg-green-50 dark:bg-green-900/90 border-green-500 dark:border-green-600"
: "bg-red-50 dark:bg-red-900/90 border-red-500 dark:border-red-600"
}`}
>
<div
className={`flex-shrink-0 rounded-full p-1 ${
toast.type === "success"
? "bg-green-100 dark:bg-green-800"
: "bg-red-100 dark:bg-red-800"
}`}
>
{toast.type === "success" ? (
<CheckCircle className="h-5 w-5 text-green-600 dark:text-green-400" />
) : (
<AlertCircle className="h-5 w-5 text-red-600 dark:text-red-400" />
)}
</div>
<div className="flex-1">
<p
className={`text-sm font-medium ${
toast.type === "success"
? "text-green-800 dark:text-green-100"
: "text-red-800 dark:text-red-100"
}`}
>
{toast.message}
</p>
</div>
<button
type="button"
onClick={() => setToast(null)}
className={`flex-shrink-0 rounded-lg p-1 transition-colors ${
toast.type === "success"
? "hover:bg-green-100 dark:hover:bg-green-800 text-green-600 dark:text-green-400"
: "hover:bg-red-100 dark:hover:bg-red-800 text-red-600 dark:text-red-400"
}`}
>
<X className="h-4 w-4" />
</button>
</div>
)}
<div className="mb-8">
<p className="text-secondary-600 dark:text-secondary-300">
Configure your PatchMon server settings. These settings will be used
@@ -819,7 +917,7 @@ const Settings = () => {
key={m}
type="button"
onClick={() => handleInputChange("updateInterval", m)}
className={`px-3 py-1.5 rounded-full text-xs font-medium border ${
className={`px-3 py-1.5 rounded-md text-xs font-medium border ${
formData.updateInterval === m
? "bg-primary-600 text-white border-primary-600"
: "bg-white dark:bg-secondary-700 text-secondary-700 dark:text-secondary-200 border-secondary-300 dark:border-secondary-600 hover:bg-secondary-50 dark:hover:bg-secondary-600"
@@ -1159,19 +1257,74 @@ const Settings = () => {
To completely remove PatchMon from a host:
</p>
{/* Go Agent Uninstall */}
{/* Agent Removal Script - Standard */}
<div className="mb-3">
<div className="space-y-2">
<div className="text-xs font-semibold text-red-700 dark:text-red-300 mb-1">
Standard Removal (preserves backups):
</div>
<div className="flex items-center gap-2">
<div className="bg-red-100 dark:bg-red-800 rounded p-2 font-mono text-xs flex-1">
sudo patchmon-agent uninstall
curl {getCurlFlags()} {window.location.origin}
/api/v1/hosts/remove | sudo sh
</div>
<button
type="button"
onClick={() => {
navigator.clipboard.writeText(
"sudo patchmon-agent uninstall",
);
onClick={async () => {
try {
await copyToClipboard(
`curl ${getCurlFlags()} ${window.location.origin}/api/v1/hosts/remove | sudo sh`,
);
showToast(
"Standard removal command copied!",
"success",
);
} catch (err) {
console.error("Failed to copy:", err);
showToast(
"Failed to copy to clipboard",
"error",
);
}
}}
className="px-2 py-1 bg-red-200 dark:bg-red-700 text-red-800 dark:text-red-200 rounded text-xs hover:bg-red-300 dark:hover:bg-red-600 transition-colors"
>
Copy
</button>
</div>
</div>
</div>
{/* Agent Removal Script - Complete */}
<div className="mb-3">
<div className="space-y-2">
<div className="text-xs font-semibold text-red-700 dark:text-red-300 mb-1">
Complete Removal (includes backups):
</div>
<div className="flex items-center gap-2">
<div className="bg-red-100 dark:bg-red-800 rounded p-2 font-mono text-xs flex-1">
curl {getCurlFlags()} {window.location.origin}
/api/v1/hosts/remove | sudo REMOVE_BACKUPS=1
sh
</div>
<button
type="button"
onClick={async () => {
try {
await copyToClipboard(
`curl ${getCurlFlags()} ${window.location.origin}/api/v1/hosts/remove | sudo REMOVE_BACKUPS=1 sh`,
);
showToast(
"Complete removal command copied!",
"success",
);
} catch (err) {
console.error("Failed to copy:", err);
showToast(
"Failed to copy to clipboard",
"error",
);
}
}}
className="px-2 py-1 bg-red-200 dark:bg-red-700 text-red-800 dark:text-red-200 rounded text-xs hover:bg-red-300 dark:hover:bg-red-600 transition-colors"
>
@@ -1179,16 +1332,16 @@ const Settings = () => {
</button>
</div>
<div className="text-xs text-red-600 dark:text-red-400">
Options: <code>--remove-config</code>,{" "}
<code>--remove-logs</code>,{" "}
<code>--remove-all</code>, <code>--force</code>
This removes: binaries, systemd/OpenRC services,
configuration files, logs, crontab entries, and
backup files
</div>
</div>
</div>
<p className="mt-2 text-xs">
This command will remove all PatchMon files,
configuration, and crontab entries
<p className="mt-2 text-xs text-red-700 dark:text-red-400">
Standard removal preserves backup files for
safety. Use complete removal to delete everything.
</p>
</div>
</div>


@@ -0,0 +1,483 @@
import { useQuery } from "@tanstack/react-query";
import {
AlertTriangle,
ArrowLeft,
CheckCircle,
Container,
Globe,
Network,
RefreshCw,
Server,
Tag,
XCircle,
} from "lucide-react";
import { Link, useParams } from "react-router-dom";
import api, { formatRelativeTime } from "../../utils/api";
const NetworkDetail = () => {
const { id } = useParams();
const { data, isLoading, error } = useQuery({
queryKey: ["docker", "network", id],
queryFn: async () => {
const response = await api.get(`/docker/networks/${id}`);
return response.data;
},
refetchInterval: 30000,
});
const network = data?.network;
const host = data?.host;
if (isLoading) {
return (
<div className="flex items-center justify-center min-h-screen">
<RefreshCw className="h-8 w-8 animate-spin text-secondary-400" />
</div>
);
}
if (error || !network) {
return (
<div className="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8 py-8">
<div className="bg-red-50 dark:bg-red-900/20 border border-red-200 dark:border-red-800 rounded-lg p-4">
<div className="flex">
<AlertTriangle className="h-5 w-5 text-red-400" />
<div className="ml-3">
<h3 className="text-sm font-medium text-red-800 dark:text-red-200">
Network not found
</h3>
<p className="mt-2 text-sm text-red-700 dark:text-red-300">
The network you're looking for doesn't exist or has been
removed.
</p>
</div>
</div>
</div>
<Link
to="/docker"
className="mt-4 inline-flex items-center text-primary-600 hover:text-primary-900 dark:text-primary-400 dark:hover:text-primary-300"
>
<ArrowLeft className="h-4 w-4 mr-2" />
Back to Docker
</Link>
</div>
);
}
const BooleanBadge = ({ value, trueLabel = "Yes", falseLabel = "No" }) => {
return value ? (
<span className="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-green-100 text-green-800 dark:bg-green-900 dark:text-green-200">
<CheckCircle className="h-3 w-3 mr-1" />
{trueLabel}
</span>
) : (
<span className="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-secondary-100 text-secondary-800 dark:bg-secondary-700 dark:text-secondary-200">
<XCircle className="h-3 w-3 mr-1" />
{falseLabel}
</span>
);
};
return (
<div className="space-y-6">
{/* Header */}
<div>
<Link
to="/docker"
className="inline-flex items-center text-sm text-primary-600 hover:text-primary-900 dark:text-primary-400 dark:hover:text-primary-300 mb-4"
>
<ArrowLeft className="h-4 w-4 mr-2" />
Back to Docker
</Link>
<div className="flex items-center">
<Network className="h-8 w-8 text-secondary-400 mr-3" />
<div>
<h1 className="text-2xl font-bold text-secondary-900 dark:text-white">
{network.name}
</h1>
<p className="mt-1 text-sm text-secondary-600 dark:text-secondary-400">
Network ID: {network.network_id.substring(0, 12)}
</p>
</div>
</div>
</div>
{/* Overview Cards */}
<div className="grid grid-cols-1 gap-5 sm:grid-cols-2 lg:grid-cols-4">
<div className="card p-4">
<div className="flex items-center">
<div className="flex-shrink-0">
<Network className="h-5 w-5 text-blue-600 mr-2" />
</div>
<div className="w-0 flex-1">
<p className="text-sm text-secondary-500 dark:text-white">
Driver
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{network.driver}
</p>
</div>
</div>
</div>
<div className="card p-4">
<div className="flex items-center">
<div className="flex-shrink-0">
<Globe className="h-5 w-5 text-purple-600 mr-2" />
</div>
<div className="w-0 flex-1">
<p className="text-sm text-secondary-500 dark:text-white">
Scope
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{network.scope}
</p>
</div>
</div>
</div>
<div className="card p-4">
<div className="flex items-center">
<div className="flex-shrink-0">
<Container className="h-5 w-5 text-green-600 mr-2" />
</div>
<div className="w-0 flex-1">
<p className="text-sm text-secondary-500 dark:text-white">
Containers
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{network.container_count || 0}
</p>
</div>
</div>
</div>
<div className="card p-4">
<div className="flex items-center">
<div className="flex-shrink-0">
<RefreshCw className="h-5 w-5 text-secondary-400 mr-2" />
</div>
<div className="w-0 flex-1">
<p className="text-sm text-secondary-500 dark:text-white">
Last Checked
</p>
<p className="text-sm font-medium text-secondary-900 dark:text-white">
{formatRelativeTime(network.last_checked)}
</p>
</div>
</div>
</div>
</div>
{/* Network Information Card */}
<div className="card">
<div className="px-6 py-5 border-b border-secondary-200 dark:border-secondary-700">
<h3 className="text-lg leading-6 font-medium text-secondary-900 dark:text-white">
Network Information
</h3>
</div>
<div className="px-6 py-5">
<dl className="grid grid-cols-1 gap-x-4 gap-y-6 sm:grid-cols-2">
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Network ID
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white font-mono break-all">
{network.network_id}
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Name
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
{network.name}
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Driver
</dt>
<dd className="mt-1">
<span className="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-blue-100 text-blue-800 dark:bg-blue-900 dark:text-blue-200">
{network.driver}
</span>
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Scope
</dt>
<dd className="mt-1">
<span className="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-purple-100 text-purple-800 dark:bg-purple-900 dark:text-purple-200">
{network.scope}
</span>
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Containers Attached
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
{network.container_count || 0}
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
IPv6 Enabled
</dt>
<dd className="mt-1">
<BooleanBadge value={network.ipv6_enabled} />
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Internal
</dt>
<dd className="mt-1">
<BooleanBadge value={network.internal} />
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Attachable
</dt>
<dd className="mt-1">
<BooleanBadge value={network.attachable} />
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Ingress
</dt>
<dd className="mt-1">
<BooleanBadge value={network.ingress} />
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Config Only
</dt>
<dd className="mt-1">
<BooleanBadge value={network.config_only} />
</dd>
</div>
{network.created_at && (
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Created
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
{formatRelativeTime(network.created_at)}
</dd>
</div>
)}
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Last Checked
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
{formatRelativeTime(network.last_checked)}
</dd>
</div>
</dl>
</div>
</div>
{/* IPAM Configuration */}
{network.ipam && (
<div className="card">
<div className="px-6 py-5 border-b border-secondary-200 dark:border-secondary-700">
<h3 className="text-lg leading-6 font-medium text-secondary-900 dark:text-white">
IPAM Configuration
</h3>
<p className="mt-1 text-sm text-secondary-500 dark:text-secondary-400">
IP Address Management settings
</p>
</div>
<div className="px-6 py-5">
{network.ipam.driver && (
<div className="mb-4">
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400 mb-1">
Driver
</dt>
<dd>
<span className="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-blue-100 text-blue-800 dark:bg-blue-900 dark:text-blue-200">
{network.ipam.driver}
</span>
</dd>
</div>
)}
{network.ipam.config && network.ipam.config.length > 0 && (
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400 mb-3">
Subnet Configuration
</dt>
<div className="space-y-4">
{network.ipam.config.map((config, index) => (
<div
key={config.subnet || `config-${index}`}
className="bg-secondary-50 dark:bg-secondary-900/50 rounded-lg p-4"
>
<dl className="grid grid-cols-1 gap-x-4 gap-y-3 sm:grid-cols-2">
{config.subnet && (
<div>
<dt className="text-xs font-medium text-secondary-500 dark:text-secondary-400">
Subnet
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white font-mono">
{config.subnet}
</dd>
</div>
)}
{config.gateway && (
<div>
<dt className="text-xs font-medium text-secondary-500 dark:text-secondary-400">
Gateway
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white font-mono">
{config.gateway}
</dd>
</div>
)}
{config.ip_range && (
<div>
<dt className="text-xs font-medium text-secondary-500 dark:text-secondary-400">
IP Range
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white font-mono">
{config.ip_range}
</dd>
</div>
)}
{config.aux_addresses &&
Object.keys(config.aux_addresses).length > 0 && (
<div className="sm:col-span-2">
<dt className="text-xs font-medium text-secondary-500 dark:text-secondary-400 mb-2">
Auxiliary Addresses
</dt>
<dd className="space-y-1">
{Object.entries(config.aux_addresses).map(
([key, value]) => (
<div
key={key}
className="flex items-center text-sm"
>
<span className="text-secondary-500 dark:text-secondary-400 min-w-[120px]">
{key}:
</span>
<span className="text-secondary-900 dark:text-white font-mono">
{value}
</span>
</div>
),
)}
</dd>
</div>
)}
</dl>
</div>
))}
</div>
</div>
)}
{network.ipam.options &&
Object.keys(network.ipam.options).length > 0 && (
<div className="mt-4">
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400 mb-2">
IPAM Options
</dt>
<dd className="space-y-1">
{Object.entries(network.ipam.options).map(
([key, value]) => (
<div
key={key}
className="flex items-start py-2 border-b border-secondary-100 dark:border-secondary-700 last:border-0"
>
<span className="text-sm font-medium text-secondary-500 dark:text-secondary-400 min-w-[200px]">
{key}
</span>
<span className="text-sm text-secondary-900 dark:text-white break-all">
{value}
</span>
</div>
),
)}
</dd>
</div>
)}
</div>
</div>
)}
{/* Host Information */}
{host && (
<div className="card">
<div className="px-6 py-5 border-b border-secondary-200 dark:border-secondary-700">
<h3 className="text-lg leading-6 font-medium text-secondary-900 dark:text-white flex items-center">
<Server className="h-5 w-5 mr-2" />
Host Information
</h3>
</div>
<div className="px-6 py-5">
<dl className="grid grid-cols-1 gap-x-4 gap-y-6 sm:grid-cols-2">
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Hostname
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
<Link
to={`/hosts/${host.id}`}
className="text-primary-600 hover:text-primary-900 dark:text-primary-400 dark:hover:text-primary-300"
>
{host.hostname}
</Link>
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Operating System
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
{host.os_name} {host.os_version}
</dd>
</div>
</dl>
</div>
</div>
)}
{/* Labels */}
{network.labels && Object.keys(network.labels).length > 0 && (
<div className="card">
<div className="px-6 py-5 border-b border-secondary-200 dark:border-secondary-700">
<h3 className="text-lg leading-6 font-medium text-secondary-900 dark:text-white flex items-center">
<Tag className="h-5 w-5 mr-2" />
Labels
</h3>
</div>
<div className="px-6 py-5">
<div className="space-y-2">
{Object.entries(network.labels).map(([key, value]) => (
<div
key={key}
className="flex items-start py-2 border-b border-secondary-100 dark:border-secondary-700 last:border-0"
>
<span className="text-sm font-medium text-secondary-500 dark:text-secondary-400 min-w-[200px]">
{key}
</span>
<span className="text-sm text-secondary-900 dark:text-white break-all">
{value}
</span>
</div>
))}
</div>
</div>
</div>
)}
</div>
);
};
export default NetworkDetail;


@@ -0,0 +1,359 @@
import { useQuery } from "@tanstack/react-query";
import {
AlertTriangle,
ArrowLeft,
Database,
HardDrive,
RefreshCw,
Server,
Tag,
} from "lucide-react";
import { Link, useParams } from "react-router-dom";
import api, { formatRelativeTime } from "../../utils/api";
const VolumeDetail = () => {
const { id } = useParams();
const { data, isLoading, error } = useQuery({
queryKey: ["docker", "volume", id],
queryFn: async () => {
const response = await api.get(`/docker/volumes/${id}`);
return response.data;
},
refetchInterval: 30000,
});
const volume = data?.volume;
const host = data?.host;
if (isLoading) {
return (
<div className="flex items-center justify-center min-h-screen">
<RefreshCw className="h-8 w-8 animate-spin text-secondary-400" />
</div>
);
}
if (error || !volume) {
return (
<div className="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8 py-8">
<div className="bg-red-50 dark:bg-red-900/20 border border-red-200 dark:border-red-800 rounded-lg p-4">
<div className="flex">
<AlertTriangle className="h-5 w-5 text-red-400" />
<div className="ml-3">
<h3 className="text-sm font-medium text-red-800 dark:text-red-200">
Volume not found
</h3>
<p className="mt-2 text-sm text-red-700 dark:text-red-300">
The volume you're looking for doesn't exist or has been removed.
</p>
</div>
</div>
</div>
<Link
to="/docker"
className="mt-4 inline-flex items-center text-primary-600 hover:text-primary-900 dark:text-primary-400 dark:hover:text-primary-300"
>
<ArrowLeft className="h-4 w-4 mr-2" />
Back to Docker
</Link>
</div>
);
}
const formatBytes = (bytes) => {
if (bytes === null || bytes === undefined) return "N/A";
const sizes = ["Bytes", "KB", "MB", "GB", "TB"];
if (bytes === 0) return "0 Bytes";
const i = Math.floor(Math.log(bytes) / Math.log(1024));
return `${Math.round((bytes / 1024 ** i) * 100) / 100} ${sizes[i]}`;
};
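The `formatBytes` helper just defined picks the unit from the 1024-based magnitude and rounds to two decimal places; a standalone copy with worked examples:

```javascript
// Verbatim copy of the formatter above, extracted for illustration.
const formatBytes = (bytes) => {
  if (bytes === null || bytes === undefined) return "N/A";
  const sizes = ["Bytes", "KB", "MB", "GB", "TB"];
  if (bytes === 0) return "0 Bytes";
  // Magnitude in powers of 1024 selects the unit; value is rounded to 2 dp
  const i = Math.floor(Math.log(bytes) / Math.log(1024));
  return `${Math.round((bytes / 1024 ** i) * 100) / 100} ${sizes[i]}`;
};

// formatBytes(1536)       → "1.5 KB"
// formatBytes(1073741824) → "1 GB"
// formatBytes(null)       → "N/A"
```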
return (
<div className="space-y-6">
{/* Header */}
<div>
<Link
to="/docker"
className="inline-flex items-center text-sm text-primary-600 hover:text-primary-900 dark:text-primary-400 dark:hover:text-primary-300 mb-4"
>
<ArrowLeft className="h-4 w-4 mr-2" />
Back to Docker
</Link>
<div className="flex items-center">
<HardDrive className="h-8 w-8 text-secondary-400 mr-3" />
<div>
<h1 className="text-2xl font-bold text-secondary-900 dark:text-white">
{volume.name}
</h1>
<p className="mt-1 text-sm text-secondary-600 dark:text-secondary-400">
Volume ID: {volume.volume_id}
</p>
</div>
</div>
</div>
{/* Overview Cards */}
<div className="grid grid-cols-1 gap-5 sm:grid-cols-2 lg:grid-cols-4">
<div className="card p-4">
<div className="flex items-center">
<div className="flex-shrink-0">
<HardDrive className="h-5 w-5 text-blue-600 mr-2" />
</div>
<div className="w-0 flex-1">
<p className="text-sm text-secondary-500 dark:text-white">
Driver
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{volume.driver}
</p>
</div>
</div>
</div>
<div className="card p-4">
<div className="flex items-center">
<div className="flex-shrink-0">
<Database className="h-5 w-5 text-purple-600 mr-2" />
</div>
<div className="w-0 flex-1">
<p className="text-sm text-secondary-500 dark:text-white">Size</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{formatBytes(volume.size_bytes)}
</p>
</div>
</div>
</div>
<div className="card p-4">
<div className="flex items-center">
<div className="flex-shrink-0">
<Server className="h-5 w-5 text-green-600 mr-2" />
</div>
<div className="w-0 flex-1">
<p className="text-sm text-secondary-500 dark:text-white">
Containers
</p>
<p className="text-xl font-semibold text-secondary-900 dark:text-white">
{volume.ref_count || 0}
</p>
</div>
</div>
</div>
<div className="card p-4">
<div className="flex items-center">
<div className="flex-shrink-0">
<RefreshCw className="h-5 w-5 text-secondary-400 mr-2" />
</div>
<div className="w-0 flex-1">
<p className="text-sm text-secondary-500 dark:text-white">
Last Checked
</p>
<p className="text-sm font-medium text-secondary-900 dark:text-white">
{formatRelativeTime(volume.last_checked)}
</p>
</div>
</div>
</div>
</div>
{/* Volume Information Card */}
<div className="card">
<div className="px-6 py-5 border-b border-secondary-200 dark:border-secondary-700">
<h3 className="text-lg leading-6 font-medium text-secondary-900 dark:text-white">
Volume Information
</h3>
</div>
<div className="px-6 py-5">
<dl className="grid grid-cols-1 gap-x-4 gap-y-6 sm:grid-cols-2">
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Volume ID
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white font-mono">
{volume.volume_id}
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Name
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
{volume.name}
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Driver
</dt>
<dd className="mt-1">
<span className="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-blue-100 text-blue-800 dark:bg-blue-900 dark:text-blue-200">
{volume.driver}
</span>
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Scope
</dt>
<dd className="mt-1">
<span className="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-purple-100 text-purple-800 dark:bg-purple-900 dark:text-purple-200">
{volume.scope}
</span>
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Size
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
{formatBytes(volume.size_bytes)}
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Containers Using
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
{volume.ref_count || 0}
</dd>
</div>
{volume.mountpoint && (
<div className="sm:col-span-2">
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Mount Point
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white font-mono break-all">
{volume.mountpoint}
</dd>
</div>
)}
{volume.renderer && (
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Renderer
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
{volume.renderer}
</dd>
</div>
)}
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Created
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
{formatRelativeTime(volume.created_at)}
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Last Checked
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
{formatRelativeTime(volume.last_checked)}
</dd>
</div>
</dl>
</div>
</div>
{/* Host Information */}
{host && (
<div className="card">
<div className="px-6 py-5 border-b border-secondary-200 dark:border-secondary-700">
<h3 className="text-lg leading-6 font-medium text-secondary-900 dark:text-white flex items-center">
<Server className="h-5 w-5 mr-2" />
Host Information
</h3>
</div>
<div className="px-6 py-5">
<dl className="grid grid-cols-1 gap-x-4 gap-y-6 sm:grid-cols-2">
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Hostname
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
<Link
to={`/hosts/${host.id}`}
className="text-primary-600 hover:text-primary-900 dark:text-primary-400 dark:hover:text-primary-300"
>
{host.hostname}
</Link>
</dd>
</div>
<div>
<dt className="text-sm font-medium text-secondary-500 dark:text-secondary-400">
Operating System
</dt>
<dd className="mt-1 text-sm text-secondary-900 dark:text-white">
{host.os_name} {host.os_version}
</dd>
</div>
</dl>
</div>
</div>
)}
{/* Labels */}
{volume.labels && Object.keys(volume.labels).length > 0 && (
<div className="card">
<div className="px-6 py-5 border-b border-secondary-200 dark:border-secondary-700">
<h3 className="text-lg leading-6 font-medium text-secondary-900 dark:text-white flex items-center">
<Tag className="h-5 w-5 mr-2" />
Labels
</h3>
</div>
<div className="px-6 py-5">
<div className="space-y-2">
{Object.entries(volume.labels).map(([key, value]) => (
<div
key={key}
className="flex items-start py-2 border-b border-secondary-100 dark:border-secondary-700 last:border-0"
>
<span className="text-sm font-medium text-secondary-500 dark:text-secondary-400 min-w-[200px]">
{key}
</span>
<span className="text-sm text-secondary-900 dark:text-white break-all">
{value}
</span>
</div>
))}
</div>
</div>
</div>
)}
{/* Options */}
{volume.options && Object.keys(volume.options).length > 0 && (
<div className="card">
<div className="px-6 py-5 border-b border-secondary-200 dark:border-secondary-700">
<h3 className="text-lg leading-6 font-medium text-secondary-900 dark:text-white">
Volume Options
</h3>
</div>
<div className="px-6 py-5">
<div className="space-y-2">
{Object.entries(volume.options).map(([key, value]) => (
<div
key={key}
className="flex items-start py-2 border-b border-secondary-100 dark:border-secondary-700 last:border-0"
>
<span className="text-sm font-medium text-secondary-500 dark:text-secondary-400 min-w-[200px]">
{key}
</span>
<span className="text-sm text-secondary-900 dark:text-white break-all">
{value}
</span>
</div>
))}
</div>
</div>
</div>
)}
</div>
);
};
export default VolumeDetail;


@@ -32,7 +32,7 @@ const AlertChannels = () => {
including email, Slack, Discord, and webhooks.
</p>
<div className="mt-3">
<span className="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-primary-100 text-primary-800 dark:bg-primary-900 dark:text-primary-200">
<span className="inline-flex items-center px-2.5 py-0.5 rounded-md text-xs font-medium bg-primary-100 text-primary-800 dark:bg-primary-900 dark:text-primary-200">
In Development
</span>
</div>

Some files were not shown because too many files have changed in this diff.