Compare commits


440 Commits

Author SHA1 Message Date
Muhammad Ibrahim
c98203a997 Fixed bug in nginx configuration 2025-10-22 02:31:53 +01:00
Muhammad Ibrahim
37c8f5fa76 Modified setup.sh to handle the changes in version 1.3.0 2025-10-22 02:09:23 +01:00
Muhammad Ibrahim
50e546ee7e Fixed Bullboard authentication via Docker
Fixed agent checking on entrypoint
Modified entrypoint to handle both binary files and the shell script
2025-10-21 21:29:15 +01:00
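
A minimal sketch of an entrypoint that handles both forms; the agent path and the `serve` argument are assumptions, not the real entrypoint:

```bash
#!/bin/sh
# Illustrative only: run the agent whether it ships as an ELF binary
# or as the legacy shell script.
AGENT=/usr/local/bin/patchmon-agent

if head -c 4 "$AGENT" | grep -q 'ELF'; then
    chmod +x "$AGENT"        # binaries must be marked executable
    exec "$AGENT" serve      # hypothetical subcommand
else
    exec sh "$AGENT"         # shell-script agent needs no exec bit
fi
```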
Muhammad Ibrahim
2174abf395 Static Lib for Go Agent 2025-10-21 00:07:51 +01:00
Muhammad Ibrahim
1350fd4e47 Added new binaries 2025-10-20 23:01:32 +01:00
Muhammad Ibrahim
6b9a42fb0b Added better Go agent upgrade support 2025-10-20 21:39:20 +01:00
Muhammad Ibrahim
3ee6f9aaa0 Better update handling by the Go Agent 2025-10-20 21:13:08 +01:00
Muhammad Ibrahim
8a5d61a7c1 Remove /bullboard from caching
Fixed entrypoint to make the binary executable
2025-10-20 20:24:12 +01:00
Muhammad Ibrahim
df502c676f Added bullboard URL for the docker nginx template 2025-10-20 19:43:58 +01:00
Muhammad Ibrahim
54cea6b20b Added axios in package.json 2025-10-20 19:19:00 +01:00
Muhammad Ibrahim
af9b0d5d76 Added websocket support in the nginx template for docker 2025-10-20 18:45:16 +01:00
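
A sketch of the WebSocket-related directives such a template needs; the location path and upstream address are assumptions, not the actual template:

```bash
# Append a WebSocket-capable location to the generated nginx config
# (the location must sit inside the server block in the real template).
cat >> /etc/nginx/conf.d/patchmon.conf <<'EOF'
location /api/ {
    proxy_pass http://backend:3001;
    proxy_http_version 1.1;                  # WebSockets require HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;  # pass the upgrade handshake through
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 86400s;               # keep long-lived sockets open
}
EOF
```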
Muhammad Ibrahim
7b8c29860c Improved agent version-checking logic and the page, with the ability to re-download the binaries from the repo 2025-10-20 17:46:27 +01:00
Muhammad Ibrahim
d78fb63c2d Modified setup.sh to handle the bullboard url 2025-10-19 20:53:43 +01:00
Muhammad Ibrahim
d3dc068c8e Simplified the docker nginx template
Modified the URL for the bullboard to just /bullboard and made the nginx configuration match
2025-10-19 20:46:09 +01:00
Muhammad Ibrahim
46e19fbfc2 Modified the auto-enrollment route to cater for the new multihostgroups variable when creating a new host 2025-10-19 19:32:51 +01:00
Muhammad Ibrahim
80a701cc33 Modified the proxmox_auto-enroll.sh script to suit the new way 2025-10-19 18:57:28 +01:00
Muhammad Ibrahim
c4d0d8bee8 Fixed repo count issue
Refactored code to remove duplicate backend API endpoints for counting
Fixed connection persistence issues
Fixed database connection pooling issues
Fixed Redis connection efficiency
Changed version to 1.3.0
Fixed Go binary detection based on package manager rather than OS
2025-10-19 17:53:10 +01:00
Muhammad Ibrahim
30c89de134 Improved detection logic and upgrade mechanism using intermediary script 2025-10-18 22:59:03 +01:00
Muhammad Ibrahim
4b35fc9ab9 Fixed upgrade detection logic 2025-10-18 21:53:35 +01:00
9 Technology Group LTD
191a1afada Enhance Redis user creation and security
Updated Redis user creation process to enhance security by generating a separate user password. Adjusted Redis CLI commands to include host and port specifications.
2025-10-18 21:05:36 +01:00
9 Technology Group LTD
175f10b8b7 Improve Redis user creation error handling
Refactor Redis user creation to capture command output and verify success.
2025-10-18 21:01:57 +01:00
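
A sketch of the pattern the two commits above describe: create a dedicated Redis user with its own generated password, address the server explicitly by host and port, and verify success instead of assuming it. The user name and password scheme are assumptions:

```bash
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
REDIS_USER_PASSWORD="$(openssl rand -hex 24)"

# Create a restricted, separately-credentialed user
redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" \
    ACL SETUSER patchmon on ">${REDIS_USER_PASSWORD}" '~*' '+@all'

# Capture-and-verify, as per the error-handling commit
if ! redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" ACL LIST | grep -q '^user patchmon '; then
    echo "Redis user creation failed" >&2
    exit 1
fi
```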
9 Technology Group LTD
080bcbe22e Merge pull request #181 from PatchMon/feature/go-agent
Upgrade from <1.2.8 to 1.3.0
2025-10-18 17:38:02 +01:00
Muhammad Ibrahim
3175ed79a5 Added arm32 based agent
Added support for migrating from legacy bash script to new binary via intermediary 1.2.9 script
2025-10-18 17:28:46 +01:00
Muhammad Ibrahim
fba6d0ede5 Added REDIS_USER variable in the generation of .env 2025-10-18 16:34:10 +01:00
Muhammad Ibrahim
54a5012012 Created tools folder
Modified setup.sh to now cater for redis installation
2025-10-18 16:26:36 +01:00
Muhammad Ibrahim
5004e062b4 Setup Redis passwords to be used in VM installation or via Docker
Setup so that CORS_ORIGIN error appears on the frontend to help new installations
2025-10-18 16:14:09 +01:00
9 Technology Group LTD
44d52a5536 Merge pull request #180 from PatchMon/feature/go-agent
fixing redis environment issue and some UI fixes
2025-10-18 02:06:34 +01:00
Muhammad Ibrahim
52c8ba6b03 feat: implement multi-select checkbox interface for bulk host group assignment
- Add new backend endpoint PUT /api/hosts/bulk/groups for multi-group assignment
- Update BulkAssignModal to use checkbox interface instead of single select
- Replace single group selection with multi-select checkboxes
- Maintain visual consistency with existing multi-select patterns
- Add proper validation and error handling for multiple groups
- Remove unused bulkHostGroupId variable to fix linting error

This allows users to assign multiple hosts to multiple groups simultaneously,
improving the bulk assignment workflow and user experience.
2025-10-18 02:01:06 +01:00
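
A hypothetical call against the new endpoint; the payload field names are guesses from the commit message, not the confirmed API contract:

```bash
# Assign multiple hosts to multiple groups in one request (shape assumed)
curl -s -X PUT "https://patchmon.example.com/api/hosts/bulk/groups" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"hostIds": ["host-1", "host-2"], "groupIds": ["group-a", "group-b"]}'
```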
Muhammad Ibrahim
9db563dec3 Modified docker-compose.yml for redis password
Fixed Assigning hosts to multiple groups in the ui
2025-10-18 02:00:08 +01:00
9 Technology Group LTD
c328123bd3 Merge pull request #179 from PatchMon/feature/go-agent
Major release 1.3.0 - New architecture
2025-10-17 22:43:09 +01:00
Muhammad Ibrahim
46eb797ac3 I should really commit more often instead of sending over one massive commit
Blame my ADHD brain
Sorry
- Now we have the server working properly in automation using BullMQ and Redis
- It also presents an API endpoint that is used to accept connections for websockets by agents (WS or WSS)
- Updated the docker-compose.yml and its documentation
2025-10-17 22:10:55 +01:00
Muhammad Ibrahim
c43afeb127 Added qty of connected and offline to the hosts dashboard page 2025-10-15 22:40:52 +01:00
Muhammad Ibrahim
5b77a1328d Removed the JS file for the GitHub update checker
Added real-time feature for agent status
Made some UI improvements on the host details page
2025-10-15 22:15:18 +01:00
Muhammad Ibrahim
9a40d5e6ee Added support for the new agent mechanism and Binary
Added bullMQ + redis to the platform for automation and queue mechanism
Added new tabs in host details
2025-10-15 20:56:58 +01:00
Muhammad Ibrahim
fdd0cfd619 Make user_sessions migration idempotent for 1.2.7 compatibility
- Modified 20251005000000_add_user_sessions to check if table exists first
- Added existence checks for all indexes and foreign keys
- Migration now works for both fresh installs and 1.2.7 upgrades
- Prevents P3018 error by gracefully handling existing table
- Added comprehensive logging for debugging
2025-10-13 21:36:52 +01:00
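
The idempotency pattern described above, sketched as guarded SQL; the exact columns and index names are assumptions:

```bash
# Guarded DDL: create the table/index only when absent, so the same
# migration runs cleanly on fresh installs and on 1.2.7 upgrades.
psql "$DATABASE_URL" <<'SQL'
DO $$
BEGIN
    IF to_regclass('public.user_sessions') IS NULL THEN
        CREATE TABLE user_sessions (
            id      TEXT PRIMARY KEY,
            user_id TEXT NOT NULL
            -- remaining columns elided
        );
    END IF;
    IF NOT EXISTS (SELECT 1 FROM pg_indexes
                   WHERE indexname = 'user_sessions_user_id_idx') THEN
        CREATE INDEX user_sessions_user_id_idx ON user_sessions (user_id);
    END IF;
END $$;
SQL
```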
Muhammad Ibrahim
de236f9ae2 Simplify migration reconciliation for 1.2.7 upgrade
- Simplified logic to focus on core issue: table exists but no migration record
- Creates migration record when user_sessions table exists from 1.2.7
- Prevents P3018 error by marking migration as already applied
- More reliable approach for production upgrades
2025-10-13 21:33:22 +01:00
Muhammad Ibrahim
4d5040e0e9 Merge feature/automation into main
- Resolve migration reconciliation conflicts
- Include updated migration that handles 1.2.7 upgrade scenario
- Merge automation features and Docker support
2025-10-13 21:25:11 +01:00
Muhammad Ibrahim
28c5310b99 Fix migration reconciliation to handle 1.2.7 upgrade scenario
- Add case for table exists but no migration record (1.2.7 upgrade)
- Creates migration record for existing user_sessions table
- Prevents P3018 error when table exists from 1.2.7 installation
- Handles all upgrade scenarios properly
2025-10-13 21:24:35 +01:00
Muhammad Ibrahim
a2e9743da6 Add migration reconciliation for user_sessions 1.2.7 to 1.2.8+ upgrade
- Creates migration 20251004999999_reconcile_user_sessions_migration
- Runs before 20251005000000_add_user_sessions migration
- Properly handles failed migrations by marking them as rolled back first
- Handles migration name conflicts from 1.2.7 (add_user_sessions) to 1.2.8+ (20251005000000_add_user_sessions)
- Fixes failed migration states from upgrade attempts
- Works naturally within Prisma migration system
- No changes to Docker entrypoint or setup scripts needed
2025-10-13 21:15:22 +01:00
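
A sketch of the core reconciliation step: record the migration as already applied when the table exists from 1.2.7, so Prisma skips re-running it. Assumes PostgreSQL 13+ for gen_random_uuid(); the empty checksum is illustrative:

```bash
psql "$DATABASE_URL" <<'SQL'
INSERT INTO _prisma_migrations
    (id, checksum, migration_name, finished_at, applied_steps_count)
SELECT gen_random_uuid()::text, '', '20251005000000_add_user_sessions', now(), 1
WHERE to_regclass('public.user_sessions') IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM _prisma_migrations
                  WHERE migration_name = '20251005000000_add_user_sessions');
SQL
```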
Muhammad Ibrahim
3863d641fa Fix git update conflicts in setup.sh --update
- Add git clean -fd to remove untracked files before pull
- Add git reset --hard HEAD to ensure clean state
- Prevents merge conflicts from untracked files during updates
- Ensures smooth updates from any version to any version
2025-10-13 21:15:04 +01:00
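
The described --update flow, roughly (the repo path and branch are assumptions):

```bash
cd /opt/patchmon || exit 1   # install dir is an assumption
git clean -fd                # drop untracked files that would block the pull
git reset --hard HEAD        # discard local modifications
git pull origin main         # should now apply without conflicts
```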
Muhammad Ibrahim
cc8f77a946 deleted the js file 2025-10-13 20:59:05 +01:00
Muhammad Ibrahim
36455e2bfd Fixed migration to be done via a Prisma migration, not a JS script 2025-10-13 20:58:08 +01:00
Muhammad Ibrahim
af65d38cad - Fixes P3009 error when upgrading from 1.2.7
- Reconciles 'add_user_sessions' to '20251005000000_add_user_sessions'
- Prevents duplicate migration attempts
- Handles fresh installs gracefully
2025-10-13 20:29:42 +01:00
Muhammad Ibrahim
29266b6d77 Added longer transaction timeout on Postgresql DB 2025-10-12 21:14:52 +01:00
Muhammad Ibrahim
f96e468482 Improved patchmon-agent.sh logic to handle locked apt processes
Introduced docker Feature integration via agent
2025-10-11 22:54:49 +01:00
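
One plausible shape for the apt-lock handling; the lock paths are the standard Debian ones, the timeout value is an assumption:

```bash
# Wait for a concurrent apt/dpkg run to release its locks instead of
# failing immediately.
wait_for_apt() {
    local waited=0
    while fuser /var/lib/dpkg/lock-frontend /var/lib/apt/lists/lock >/dev/null 2>&1; do
        if [ "$waited" -ge 300 ]; then
            echo "apt still locked after ${waited}s, giving up" >&2
            return 1
        fi
        sleep 5
        waited=$((waited + 5))
    done
}

wait_for_apt && apt-get update -qq
```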
Muhammad Ibrahim
9f8c88badf Remove 'coming soon' indicator from Automation menu item 2025-10-11 20:55:27 +01:00
Muhammad Ibrahim
7985a225d7 Merge main into feature/automation to align git history
Resolved conflicts:
- backend/src/server.js: Kept automation routes alongside gethomepage routes
- frontend/src/pages/Queue.jsx: Kept deleted (replaced by Automation.jsx)
- setup.sh: Kept newer version date (2025-10-11)

This merge brings in all commits from main including:
- GetHomepage integration
- Version 1.2.9 updates
- Migration file renames
- Bug fixes and improvements
2025-10-11 20:45:29 +01:00
Muhammad Ibrahim
8c538bd99c Merge changes from main: Add GetHomepage integration and update to v1.2.9
- Added gethomepageRoutes.js for GetHomepage integration
- Updated all package.json files to version 1.2.9
- Updated agent script to version 1.2.9
- Updated version fallbacks in versionRoutes.js and updateScheduler.js
- Updated setup.sh with version 1.2.9
- Merged GetHomepage integration UI (Integrations.jsx)
- Updated docker-entrypoint.sh from main
- Updated VersionUpdateTab component
- Combined automation and gethomepage routes in server.js
- Maintains both BullMQ automation and GetHomepage functionality
2025-10-11 20:35:47 +01:00
9 Technology Group LTD
623bf5e2c8 Merge pull request #161 from PatchMon/feature/gethomepage
Feature/gethomepage + new version 1.2.9
2025-10-11 20:21:44 +01:00
Muhammad Ibrahim
ed8cc81b89 Changed version from 1.2.8 to 1.2.9 in preparation for the next release 2025-10-11 20:14:08 +01:00
Muhammad Ibrahim
5c4353a688 Fixed linting errors with gethomepage area 2025-10-11 20:04:29 +01:00
Muhammad Ibrahim
6ebcdd57d5 Fixed Migration order issue where users were getting error of "add_user_sessions" does not exist 2025-10-11 14:47:27 +01:00
Muhammad Ibrahim
a3d0dfd665 Fixed entrypoint to better handle updating of the agent mechanism
Updated Readme to show the --update flag
2025-10-10 21:52:57 +01:00
Muhammad Ibrahim
d99ded6d65 Added database backup ability when running setup.sh --update 2025-10-10 20:16:24 +01:00
Muhammad Ibrahim
1ea96b6172 Merge branch 'main' of github.com:9technologygroup/patchmon.net 2025-10-10 19:37:46 +01:00
Muhammad Ibrahim
1e5ee66825 Fixed version update checking mechanism
Updated the setup.sh script to have the --update flag
2025-10-10 19:32:44 +01:00
Muhammad Ibrahim
88130797e4 Updated Version to 1.2.8 2025-10-10 12:39:17 +01:00
Muhammad Ibrahim
0ad1a96871 Building the start of Automation page and implemented BullMQ module 2025-10-10 12:24:23 +01:00
9 Technology Group LTD
566c415471 Merge pull request #152 from PatchMon/feature/queue
Feature/Agent
2025-10-08 18:52:02 +01:00
Muhammad Ibrahim
cfc91243eb Fixed issues with RHEL-based systems not sending their repos to PatchMon 2025-10-08 18:46:39 +01:00
Muhammad Ibrahim
84cf31869b Fixed spacing in the header for the buttons 2025-10-08 17:57:56 +01:00
Muhammad Ibrahim
18c9d241eb Fixed RockyLinux 10 Support 2025-10-08 17:53:08 +01:00
Muhammad Ibrahim
86b5da3ea0 Removed titles from the top nav bar to give space to search bar 2025-10-08 17:25:24 +01:00
9 Technology Group LTD
c9b5ee63d8 Merge pull request #151 from PatchMon/fix/agentdata
Fix/agentdata
2025-10-08 16:25:56 +01:00
Muhammad Ibrahim
ac4415e1dc Added support for Oracle Linux 9 2025-10-08 16:24:35 +01:00
9 Technology Group LTD
3737a5a935 Merge pull request #145 from Maelstromeous/patch-1
Document manual result update process for PatchMon
2025-10-08 15:50:28 +01:00
9 Technology Group LTD
bcce48948a Merge pull request #148 from PatchMon/refactor/frontend_optimisations
Various optimisations/fixes - mostly frontend
2025-10-08 15:48:10 +01:00
Muhammad Ibrahim
5e4c628110 Dashboard Card edit 2025-10-08 09:53:03 +01:00
Muhammad Ibrahim
a8668ee3f3 Hide Dashboard text in header to give more space to search bar 2025-10-08 09:47:10 +01:00
Muhammad Ibrahim
5487206384 Fix hamburger menu icon and separator dark mode styling 2025-10-08 09:46:04 +01:00
Muhammad Ibrahim
daa31973f9 Fix mobile menu dark mode styling for Dashboard and navigation items 2025-10-08 09:45:31 +01:00
Muhammad Ibrahim
561c78fb08 Remove coming soon items from mobile menu navigation 2025-10-08 09:44:26 +01:00
Muhammad Ibrahim
6d3f2d94ba Add dark mode support and logout functionality to mobile menu 2025-10-08 09:43:41 +01:00
Muhammad Ibrahim
93534ebe52 Add dark mode support to BulkAssignModal 2025-10-08 09:40:38 +01:00
Muhammad Ibrahim
5cf2811bfd Fix BulkAssignModal: add missing bulkHostGroupId variable 2025-10-08 09:40:02 +01:00
tigattack
8fd91eae1a fix(frontend): use updateUserMutation in EditUserModal
Makes it more consistent with the other user mutations and resolves a lint error for the formerly unused `updateUserMutation`
2025-10-08 02:18:20 +01:00
tigattack
da8c661d20 refactor: fix lint errors 2025-10-08 02:12:51 +01:00
tigattack
2bf639e315 chore: update gitignore for docker dev 2025-10-08 02:10:40 +01:00
tigattack
c02ac4bd6f fix(frontend): don't query settings before auth 2025-10-08 02:10:40 +01:00
tigattack
4e0eaf7323 feat(frontend): add lazy loading for routes with Suspense fallback 2025-10-08 02:10:40 +01:00
tigattack
ef9ef58bcb feat(vite): add manual chunking for optimized build output 2025-10-08 02:08:46 +01:00
9 Technology Group LTD
29afe3da1f Merge pull request #147 from PatchMon/fix/agentdata
Add Line Chart
2025-10-08 00:47:47 +01:00
Muhammad Ibrahim
a861e4f9eb Fix linting issues: remove unused imports, add button types, fix array keys 2025-10-08 00:42:26 +01:00
9 Technology Group LTD
12ef6fd8e1 Merge pull request #146 from PatchMon/fix/agentdata
Agent improvements for Debian
Removal of 100 package limit
Modified hosts detail page with Agent history
Added Device fingerprinting for better session management (I need to improve this though)
Added Dashboard card of Package trends for all or specific hosts
Fixed filtering on the package page
2025-10-08 00:33:12 +01:00
Muhammad Ibrahim
ba9de097dc Added Dashboard card to show Package trends over time 2025-10-07 22:48:15 +01:00
Muhammad Ibrahim
8103581d17 Added Package trends over time graph XD 2025-10-07 22:46:55 +01:00
Muhammad Ibrahim
cdb24520d8 Added Total Packages in the Agent history
Added Script execution time in the Agent history tab
Added Pagination for the agent History
2025-10-07 21:46:37 +01:00
Muhammad Ibrahim
831adf3038 Fixed filtering for regular / security updates pie chart on the dashboard 2025-10-07 21:13:22 +01:00
Muhammad Ibrahim
2a1eed1354 Fixed Filtering with the OS Distribution Dashboard card 2025-10-07 21:01:44 +01:00
Muhammad Ibrahim
7819d4512e Made the coffee cup Yellow 2025-10-07 20:54:21 +01:00
Muhammad Ibrahim
a305fe23d3 Fixed issues with the agent not sending apt data properly
Added Indexing to the database for faster and efficient searching
Fixed some filtering from the hosts page relating to packages that need updating
Added buy me a coffee link (sorry and thank you <3)
2025-10-07 20:52:46 +01:00
Matt Cavanagh
2b36e88d85 Revise manual update instructions in README
Updated instructions for forcing updates after host package changes.
2025-10-07 20:25:53 +01:00
Matt Cavanagh
6624ec002d Document manual update process for PatchMon
Add instructions for manual update in README
2025-10-07 20:24:15 +01:00
Muhammad Ibrahim
840779844a Removed 100 limit 2025-10-07 18:20:41 +01:00
Muhammad Ibrahim
f91d3324ba Merge branch 'main' of github.com:9technologygroup/patchmon.net 2025-10-07 18:13:04 +01:00
Muhammad Ibrahim
8c60b5277e Update frontend: HostDetail, Hosts, and osIcons 2025-10-07 18:12:56 +01:00
9 Technology Group LTD
2ac756af84 Merge pull request #139 from stianmeyer/patch-2
Search for the absence of .sh files in the /app/agents folder to trigger copying of the agent files
2025-10-06 09:49:42 +01:00
9 Technology Group LTD
e227004d6b Merge pull request #140 from PatchMon/docs/docker
docs(docker): add description for 'edge' tag
2025-10-06 09:47:12 +01:00
Muhammad Ibrahim
d379473568 Added TFA timeout env variables
Added profile session management
Added "Remember me" to bypass TFA using device fingerprint
Fixed profile name not being persistent after logout and login
2025-10-06 00:55:23 +01:00
9 Technology Group LTD
2edc773adf Merge pull request #141 from PatchMon/ci/docker_no_push_fork 2025-10-05 23:27:44 +01:00
Stian Meyer
2db839556c Copy from agents_backup only when no .sh scripts are present 2025-10-06 00:24:07 +02:00
tigattack
aab6fc244e ci(docker): fix push conditions to prevent pushes from forks 2025-10-05 23:09:01 +01:00
tigattack
811f5b5885 docs(docker): add description for 'edge' tag 2025-10-05 22:55:46 +01:00
tigattack
b43c9e94fd Merge pull request #117 from PatchMon/ci/tweaks 2025-10-05 22:38:29 +01:00
Stian Meyer
2e2a554aa3 Update backend.docker-entrypoint.sh 2025-10-05 23:36:46 +02:00
tigattack
eabcfd370c ci(docker): remove 'dev' branch from push trigger and update image tag handling
- Create 'edge' tag for pushes to main
- Create versioned & latest tags for new tags with `v` prefix (instead of on release)
2025-10-05 21:33:41 +01:00
tigattack
55cb07b3c8 ci(build): remove 'dev' branch from push trigger 2025-10-05 21:33:41 +01:00
tigattack
0e049ec3d5 ci: ignore changes to docker in build and code quality workflows 2025-10-05 21:33:41 +01:00
9 Technology Group LTD
a2464fac5c Merge pull request #138 from PatchMon/dev
Removed 100 packages limit.
2025-10-05 20:50:53 +01:00
Muhammad Ibrahim
5dc3e8ba81 Removed 100 packages limit. 2025-10-05 20:38:25 +01:00
9 Technology Group LTD
63817b450f Merge pull request #137 from PatchMon/dev
Fixed Profile Name editing issue where it wouldn't save
Added more environment variables to env.example
fixed setup.sh so it would ask for the release tag rather than just the branch
2025-10-05 19:44:40 +01:00
Muhammad Ibrahim
1fa0502d7d Modified setup.sh to cater for new environment variables
Added missing env variables in the env.example file
2025-10-05 19:27:55 +01:00
Muhammad Ibrahim
581dc5884c Fixed issue with users not being updated
Re-worked setup.sh to use last 3 tags and the main branch (development latest)
2025-10-05 19:12:51 +01:00
9 Technology Group LTD
dcaffe2805 Merge pull request #135 from PatchMon/dev
Add logo files
2025-10-05 13:19:02 +01:00
Muhammad Ibrahim
a3005bccb4 Merge branch 'dev' of github.com:9technologygroup/patchmon.net into dev 2025-10-05 13:13:05 +01:00
Muhammad Ibrahim
499ef9d5d9 Add the Logo files 2025-10-05 13:11:31 +01:00
9 Technology Group LTD
6eb6ea3fd6 Merge pull request #134 from PatchMon/dev
Implemented Machine ID check when enrolling a Linux host into PatchMon rather than using the friendly name as the unique identifier. Mainly implemented when I worked on the auto-enrollment system for Proxmox LXC containers
Implemented Proxmox auto-enrollment function where it searches and attaches LXC containers then enrolls them into PatchMon
Add Package deletion ability from tigattack
Made tables and views better and in sync with the rest of the UI by tigattack
Made JWT token required as an environment variable when starting server.js
Added global search bar
Added PatchMon logos and ability to change them, with a new branding option in the settings menu
Reworked GitHub fetch for version-update checking to give more details of the latest commits
Made changes to the navigation pane
2025-10-05 13:05:22 +01:00
9 Technology Group LTD
a27c607d9e Merge branch 'main' into dev 2025-10-05 13:00:05 +01:00
9 Technology Group LTD
d4e0abd407 Translate diagram to mermaid from stianmeyer/patch-1
Translate diagram to mermaid
2025-10-05 12:38:41 +01:00
Muhammad Ibrahim
8d447cab0d Merge main into dev - resolved README conflict 2025-10-05 12:23:06 +01:00
Muhammad Ibrahim
6988ecab12 Made GitHub version checking better
Added functionality of Logo branding
Modified sidebar width
2025-10-05 10:55:34 +01:00
Stian Meyer
fd108c6a21 Translate diagram to mermaid 2025-10-05 01:41:58 +02:00
Muhammad Ibrahim
3ea8cc74b6 fix: resolve updateScheduler database and API issues
- Fix database field names: lastUpdateCheck -> last_update_check
- Fix database field names: updateAvailable -> update_available
- Fix database field names: latestVersion -> latest_version
- Add graceful GitHub API rate limit handling
- Return null instead of throwing error on rate limit
- Prevent database update errors on API failures
2025-10-04 20:30:58 +01:00
Muhammad Ibrahim
a43fc9d380 fix: remove outdated GitHub repository warning
- Update updateScheduler to use default GitHub repository
- Remove 'No GitHub repository configured' warning message
- Use same default fallback logic as version routes
2025-10-04 20:29:46 +01:00
Muhammad Ibrahim
864719b4b3 feat: implement main branch vs release commit comparison
- Add commit difference tracking between main branch and release tag
- Show how many commits main branch is ahead of current release
- Update UI to display branch status with clear messaging
- Fix linting issues with useCallback and unused parameters
- Simplify version display with My Version | Latest Release layout
2025-10-04 20:27:41 +01:00
9 Technology Group LTD
cc89df161b Update README.md
Added Documentation Links
2025-10-04 19:38:09 +01:00
Muhammad Ibrahim
2659a930d6 Add force flag to bypass broken packages upon installation 2025-10-04 13:37:05 +01:00
Muhammad Ibrahim
fa57b35270 Added /hosts/install?force=true to the API endpoint to force the installation of the agent if there are existing broken packages on the host you want to monitor 2025-10-04 13:09:29 +01:00
Muhammad Ibrahim
766d36ff80 fix: migration to properly drop unique index on friendly_name
The migration was dropping the constraint but not the underlying unique index.
In PostgreSQL, unique constraints and unique indexes can exist independently.
This caused auto-enrollment to fail with 'unique constraint violated' errors.

Added explicit DROP INDEX statement to ensure the unique index is removed,
allowing duplicate friendly_name values while machine_id remains unique.
2025-10-04 10:44:06 +01:00
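
The fix as sketched SQL; the constraint and index names follow Prisma's defaults and may differ in the real migration:

```bash
psql "$DATABASE_URL" <<'SQL'
ALTER TABLE hosts DROP CONSTRAINT IF EXISTS hosts_friendly_name_key;
-- the unique index can outlive the constraint, so remove it explicitly
DROP INDEX IF EXISTS hosts_friendly_name_key;
SQL
```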
Muhammad Ibrahim
3a76d54707 Made Proxmox LXC a tab within integrations page 2025-10-04 09:44:18 +01:00
Muhammad Ibrahim
dd28e741d4 fix: manual host creation and improve host identification
- Add machine_id support for manual host creation from GUI
- Generate temporary 'pending-{uuid}' machine_id for new hosts
- Agent now collects and sends machine_id on every update
- Backend replaces pending machine_id with real one on first agent connection
- Remove unnecessary duplicate name check (friendly_name can be duplicated)
- Add get_machine_id() function to agent (reads from /etc/machine-id, /var/lib/dbus/machine-id, or generates fallback)
- Display IP address in Network tab on host details page
- Fix network tab visibility conditions to include host.ip

This ensures proper host identification using machine_id while maintaining backwards compatibility with API credentials as the primary authentication method.
2025-10-04 09:39:47 +01:00
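
A sketch of get_machine_id() as described in the bullets above; the fallback file path is an assumption:

```bash
get_machine_id() {
    if [ -r /etc/machine-id ]; then
        cat /etc/machine-id
    elif [ -r /var/lib/dbus/machine-id ]; then
        cat /var/lib/dbus/machine-id
    else
        # No systemd/dbus ID available: generate one once and reuse it
        local fallback="/etc/patchmon-machine-id"   # hypothetical path
        if [ ! -f "$fallback" ]; then
            tr -d '-' < /proc/sys/kernel/random/uuid > "$fallback"
        fi
        cat "$fallback"
    fi
}
```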
Muhammad Ibrahim
35d3c28ae5 feat(ui): Display machine_id in host details page and enable search
- Added machine_id field to host details page
- Backend now returns machine_id in all host queries
- Users can search hosts by machine_id
- Added hostname index to schema for better performance
2025-10-04 09:15:43 +01:00
Muhammad Ibrahim
3cf2ada84e migration: Add machine_id column to hosts table
- Adds machine_id as unique identifier for hosts
- Migrates existing hosts with 'migrated-' prefix
- Removes unique constraint from friendly_name
- Adds indexes for performance
2025-10-04 09:05:36 +01:00
Muhammad Ibrahim
b25bba50a7 feat(backend): Update routes to use machine_id for host identification
- Auto-enrollment endpoints now require and validate machine_id
- Check for duplicates by machine_id instead of friendly_name
- Added /hosts/check-machine-id endpoint for agent installer
- Bulk enrollment updated to handle machine_id
- Multiple hosts with same hostname now supported
2025-10-04 09:04:35 +01:00
Muhammad Ibrahim
811930d1e2 feat: Implement machine_id based host identification
- Add machine_id field to hosts schema (unique, indexed)
- Remove unique constraint from friendly_name (allow duplicate hostnames)
- Agent installer now generates/reads persistent machine_id
- Proxmox script retrieves machine_id from LXC containers
- Backend will check machine_id instead of hostname for duplicates

This allows multiple hosts with same hostname to coexist in PatchMon
2025-10-04 09:02:56 +01:00
Muhammad Ibrahim
f3db16d6d0 feat: Auto-install curl in LXC containers if missing before agent installation 2025-10-03 23:57:38 +01:00
Muhammad Ibrahim
b3887c818d chore: Update GitHub repository URLs from 9technologygroup/patchmon.net to PatchMon/PatchMon 2025-10-03 23:39:58 +01:00
9 Technology Group LTD
f7b73ba280 Update app_build.yml 2025-10-03 23:26:46 +01:00
Muhammad Ibrahim
5c2bacb322 feat: Add failure details section showing last 5 lines of output for failed containers 2025-10-03 22:49:51 +01:00
Muhammad Ibrahim
657017801b fix: Restore server.js from aa8b42c (accidentally overwrote with older version) 2025-10-03 22:30:53 +01:00
Muhammad Ibrahim
5e8cfa6b63 feat: Add Proxmox LXC auto-enrollment script with dpkg error recovery 2025-10-03 22:27:04 +01:00
9 Technology Group LTD
f9bd56215d Update README.md
Changed the RoadMap URL
2025-10-03 22:10:41 +01:00
9 Technology Group LTD
aa8b42cbb0 Merge pull request #129 from PatchMon/dev-1-2-8
Global Search + Proxmox Auto lxc enrollment
2025-10-03 22:08:26 +01:00
9 Technology Group LTD
51f6fabd45 Merge pull request #122 from PatchMon/feat/delete_repos
feat: add repository deletion functionality
2025-10-03 22:02:13 +01:00
tigattack
32ab004f3f feat: add repository deletion functionality with confirmation modal 2025-10-03 21:53:13 +01:00
9 Technology Group LTD
71b27b4bcf Merge pull request #123 from PatchMon/feat/package_detail
feat: add package detail page and list all packages with pagination
2025-10-03 18:03:57 +01:00
9 Technology Group LTD
60ca2064bf Merge pull request #124 from PatchMon/feat/repo_detail
restyle repository details
2025-10-03 18:03:23 +01:00
tigattack
5ccd0aa163 feat(repository): make hosts in repo detail more consistent with package detail 2025-10-02 23:53:06 +01:00
tigattack
a13b4941cd refactor(repository): use server icon in repository host count display 2025-10-02 23:52:56 +01:00
tigattack
482a9e27c9 fix(packages): fix security update badge 2025-10-02 23:52:11 +01:00
tigattack
f085596b87 fix(packages): update host property names 2025-10-02 23:52:10 +01:00
tigattack
757feab9cd fix(packages): add needsUpdate and isSecurityUpdate fields to package hosts 2025-10-02 23:52:10 +01:00
tigattack
fffc571453 feat(packages): complete package detail page
Open by clicking package name
2025-10-02 23:52:10 +01:00
tigattack
6f59a1981d feat(api): endpoint to retrieve hosts for a pkg
With pagination and search functionality
2025-10-02 23:52:10 +01:00
tigattack
8bb16f0896 fix(api): update package host fields to match database schema 2025-10-02 23:52:10 +01:00
tigattack
b454b8d130 feat(packages): show all packages by default, add pagination 2025-10-02 23:52:10 +01:00
9 Technology Group LTD
3fc4b799be Merge pull request #121 from PatchMon/fix/jwt_secret_no_default
fix(auth): JWT_SECRET is required
2025-10-02 22:15:35 +01:00
tigattack
9c39d83fe5 fix(auth): JWT_SECRET is required 2025-10-02 21:26:19 +01:00
9 Technology Group LTD
2ce6d9cd73 Merge pull request #119 from PatchMon/docs/docker
docs(docker): clarify image tags
2025-10-02 21:09:54 +01:00
tigattack
e97ccc5cbd docs(docker): clarify image tags 2025-10-02 21:01:55 +01:00
9 Technology Group LTD
1f77e459ce 1.2.7 Release
Please see the release notes.
2025-10-02 18:12:09 +01:00
tigattack
9ddc27e50c ci(docker): add QEMU setup 2025-10-02 18:05:30 +01:00
tigattack
26c58f687b Merge pull request #115 from PatchMon/feat/docker_changes 2025-10-02 17:25:57 +01:00
tigattack
c004734a44 fix(docker): update image references to use the correct repository 2025-10-02 15:55:52 +01:00
tigattack
841b97cb5d chore(docker): remove optional env vars from compose 2025-10-02 15:55:52 +01:00
tigattack
8464a3692d docs(docker): restructure env var docs and add missing vars 2025-10-02 15:55:52 +01:00
tigattack
258bc67efc docs(docker): update repo links with new URL 2025-10-02 15:55:52 +01:00
tigattack
b3c1319df4 docs(docker): clarify instructions for version-specific updates
Changes example version to 1.2.3 to hopefully make it clearer that this is JUST an example.
2025-10-02 15:55:52 +01:00
tigattack
f6d21e0ed5 docs(docker): improve secrets instructions, add JWT info 2025-10-02 15:55:52 +01:00
tigattack
b85eddf22a feat(docker): add tags for dev images in compose file 2025-10-02 15:55:52 +01:00
tigattack
01dac49c05 refactor(docker): update PostgreSQL password placeholder in compose files 2025-10-02 15:55:52 +01:00
tigattack
ab97e04cc1 chore(docker): add service name to compose files 2025-10-02 15:55:52 +01:00
tigattack
50b47bdd65 feat(docker): add JWT configs to backend image & compose 2025-10-02 15:55:52 +01:00
tigattack
7a17958ad8 feat(env): validate required env vars on start 2025-10-02 15:55:52 +01:00
tigattack
806f554b96 Merge pull request #114 from PatchMon/ci/docker 2025-10-02 15:55:29 +01:00
Muhammad Ibrahim
373ef8f468 feat: Add interactive dpkg error recovery with automatic retry 2025-10-02 15:19:49 +01:00
Muhammad Ibrahim
513c268b36 fix: Reset install_exit_code per container and detect success via output message 2025-10-02 15:14:50 +01:00
Muhammad Ibrahim
13c4342135 feat: Remove 'proxmox-' prefix from friendly names, use hostname only 2025-10-02 14:39:36 +01:00
Muhammad Ibrahim
bbb97dbfda fix: Remove EXIT from error trap to prevent false failures on successful completion 2025-10-02 14:37:38 +01:00
tigattack
31a95ed946 ci(docker): simplify image name template 2025-10-02 13:56:21 +01:00
tigattack
3eb4130865 ci(docker): fix push condition for build step 2025-10-02 13:56:21 +01:00
tigattack
5a498a5f7a ci(docker): login before buildx setup 2025-10-02 13:56:08 +01:00
Muhammad Ibrahim
e0eb544205 fix: Make all counter increments safe with || true to prevent set -e exit 2025-10-02 13:42:09 +01:00
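
For context, the set -e pitfall this fixes: a bare post-increment evaluates to the counter's old value, so when the counter is 0 the command returns status 1 and kills the script. A minimal reproduction:

```bash
#!/bin/bash
set -e

success_count=0
# Bare post-increment: the arithmetic result is the OLD value (0), which
# is falsy, so the command exits with status 1 and set -e aborts here:
#((success_count++))

# The fix: swallow the status so counting can never kill the run.
((success_count++)) || true
echo "success_count=$success_count"   # prints 1
```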
Muhammad Ibrahim
51982010db fix: Pass API credentials directly to curl instead of via env vars 2025-10-02 13:41:12 +01:00
Muhammad Ibrahim
dc68afcb87 fix: Prevent set -e exit on agent install failure and show output 2025-10-02 13:39:28 +01:00
Muhammad Ibrahim
bec09b9457 fix: Download agent installer to file before executing to prevent stdin pipe hang 2025-10-02 13:37:55 +01:00
Muhammad Ibrahim
55c8f74b73 chore: Bump debug version to 1.0.0-debug.6 2025-10-02 13:16:09 +01:00
Muhammad Ibrahim
16ea1dc743 fix: Make logging functions always return 0 to prevent set -e exit 2025-10-02 13:15:57 +01:00
Muhammad Ibrahim
8c326c8fe2 debug: Add version echo at script start for verification 2025-10-02 13:14:38 +01:00
Muhammad Ibrahim
2abc9b1f8a debug: Add strict error handling and exit trap to diagnose silent exit 2025-10-02 13:12:29 +01:00
Muhammad Ibrahim
e5f3b0ed26 debug: Add detailed logging to diagnose where script hangs 2025-10-02 13:10:51 +01:00
Muhammad Ibrahim
bfc5db11da fix: Close stdin before while loop to prevent hang when piped from curl 2025-10-02 13:09:13 +01:00
Muhammad Ibrahim
a0bea9b6e5 fix: Remove global exec stdin redirect that breaks curl pipe 2025-10-02 13:03:50 +01:00
Muhammad Ibrahim
ebda7331a9 fix: Detach stdin globally to prevent curl pipe hangs in Proxmox script 2025-10-02 12:55:52 +01:00
Muhammad Ibrahim
9963cfa417 fix: Add timeouts and stdin redirection to prevent pct exec hanging 2025-10-02 08:28:07 +01:00
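
A sketch of the pattern: bound each pct exec with a timeout and detach stdin so a prompt inside a container can't hang the loop (the 60-second value is an assumption):

```bash
for ctid in $(pct list | awk 'NR>1 {print $1}'); do
    # < /dev/null prevents the exec'd command from consuming the loop's stdin
    timeout 60 pct exec "$ctid" -- sh -c 'command -v curl' < /dev/null \
        || echo "CT $ctid: timed out or curl missing" >&2
done
```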
Muhammad Ibrahim
4e6a9829cf chore: Add migration file for auto_enrollment_tokens table 2025-10-02 08:13:04 +01:00
Muhammad Ibrahim
b99f4aad4e feat: Add Proxmox LXC auto-enrollment integration
- Add auto_enrollment_tokens table with rate limiting and IP whitelisting
- Create backend API routes for token management and enrollment
- Build frontend UI for token creation and management
- Add one-liner curl command for easy Proxmox deployment
- Include Proxmox LXC discovery and enrollment script
- Support future integrations with /proxmox-lxc endpoint pattern
- Add comprehensive documentation

Security features:
- Hashed token secrets
- Per-day rate limits
- IP whitelist support
- Token expiration
- Separate enrollment vs host API credentials
2025-10-02 07:50:10 +01:00
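
A hypothetical one-liner-style deployment, fetching the script to a file first (see the stdin-pipe fixes above); the URL, endpoint, and variable names are illustrative, not the documented interface:

```bash
curl -fsSL -o /tmp/proxmox_auto-enroll.sh \
    "https://patchmon.example.com/proxmox-lxc/proxmox_auto-enroll.sh"

# Enrollment credentials are separate from host API credentials
PATCHMON_URL="https://patchmon.example.com" \
AUTO_ENROLLMENT_KEY="token-id" \
AUTO_ENROLLMENT_SECRET="token-secret" \
bash /tmp/proxmox_auto-enroll.sh
```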
Muhammad Ibrahim
7a8e9d95a0 Added a global search bar 2025-10-02 07:03:10 +01:00
tigattack
ac22adde67 ci(docker): simplify conditional for workflow_dispatch input handling
Don't skip Docker login, doesn't really match the input option
2025-10-02 00:39:00 +01:00
tigattack
db1f03b0e0 ci(docker): replace GHCR_PAT with GITHUB_TOKEN 2025-10-02 00:38:14 +01:00
9 Technology Group LTD
74cc13b7de Create LICENSE 2025-10-01 23:43:19 +01:00
9 Technology Group LTD
65025b50cf Merge pull request #113 from PatchMon/dev-agent
Removed crontab insertion by installer
2025-10-01 21:12:24 +01:00
9 Technology Group LTD
de76836ba0 Create app_build.yml 2025-10-01 21:09:10 +01:00
Muhammad Ibrahim
fe448d0111 Removed crontab insertion as the initial update by the agent configures crontab anyway 2025-10-01 20:43:37 +01:00
9 Technology Group LTD
1b08be8864 Merge pull request #112 from 9technologygroup/dev-agent
Fixed selinux detection issue
2025-10-01 13:30:35 +01:00
Muhammad Ibrahim
28124f5fba Fixed selinux detection issue 2025-10-01 13:13:39 +01:00
9 Technology Group LTD
f789c1cebe Merge pull request #111 from 9technologygroup/dev-agent
Fixed dnf issues for AlmaLinux / RHEL-derived systems, added bc as a prerequisite
2025-10-01 12:57:17 +01:00
Muhammad Ibrahim
d13469ce33 Fixed dnf issues for AlmaLinux / RHEL-derived systems, added bc as a prerequisite 2025-10-01 12:50:55 +01:00
9 Technology Group LTD
443ec145e1 Merge pull request #110 from 9technologygroup/dev-agent
Made installation script output the install command for jq and curl if there are issues with it
2025-10-01 10:58:48 +01:00
Muhammad Ibrahim
42f882c1c6 Made installation script output the install command for jq and curl if there are any issues with repos, rather than just exiting 2025-10-01 10:52:08 +01:00
9 Technology Group LTD
69acd1726c Merge pull request #109 from 9technologygroup/dev-agent
Made changes to the host details area to add notes
2025-10-01 09:35:57 +01:00
Muhammad Ibrahim
02f9899b23 Added pages for coming soon features 2025-10-01 09:33:34 +01:00
Muhammad Ibrahim
0742c4b05c Fixed TFA setup and login redirect issue 2025-10-01 09:08:27 +01:00
Muhammad Ibrahim
5d8a1e71d6 Made changes to the host details area to add notes
Reconfigured JWT session timeouts
2025-10-01 08:38:40 +01:00
9 Technology Group LTD
84c26054b2 Merge pull request #108 from 9technologygroup/dev-agent
Agent and Settings pages updates
2025-09-30 23:04:40 +01:00
Muhammad Ibrahim
f254b54404 Fix notifications route pointing to wrong component
- Create proper Notifications component with comprehensive layout
- Update routing to use SettingsLayout and Notifications component
- Remove incorrect dependency on SettingsHostGroups for notifications
- Add placeholder content for notification rules and settings
2025-09-30 22:52:25 +01:00
Muhammad Ibrahim
7682d2fffd Remove unused Settings import from App.jsx 2025-09-30 22:51:20 +01:00
Muhammad Ibrahim
d6db557d87 Replace old settings page with new SettingsLayout for alert-channels route
- Create new AlertChannels component with proper layout
- Update routing to use SettingsLayout instead of old Settings component
- Add placeholder content for alert channels functionality
- Remove dependency on old settings server page
2025-09-30 22:51:15 +01:00
Muhammad Ibrahim
af62a466c8 Remove unused settings query from main HostDetail component 2025-09-30 22:49:45 +01:00
Muhammad Ibrahim
434fa86941 Fix unused getCurlFlags function by moving it to CredentialsModal component scope 2025-09-30 22:49:40 +01:00
Muhammad Ibrahim
a61a8681e0 Remove test files and update .gitignore to prevent future test file commits 2025-09-30 22:48:14 +01:00
Muhammad Ibrahim
8eb75fba7d Fixed agent outputs and improved crontab changing logic, added timedatectl functionality 2025-09-30 22:46:00 +01:00
Muhammad Ibrahim
b3d7e49961 Fixed <1GB RAM issue where it threw an integer error 2025-09-30 21:53:30 +01:00
Muhammad Ibrahim
8be25283dc Re-laid out settings page and configured agent and other communication to use curl flags, which can be configured to ignore the SSL cert if self-hosted. 2025-09-30 21:38:13 +01:00
Muhammad Ibrahim
ed0cf79b53 Added settings pages to bring all the settings together from patchmon options, profile page and server settings. 2025-09-30 19:48:28 +01:00
9 Technology Group LTD
678efa9574 Merge pull request #98 from 9technologygroup/dev
Fixed Crontab timing Expression
2025-09-30 09:38:37 +01:00
Muhammad Ibrahim
8ca22dc7ab Fixed crontab syntax issue with timing 2025-09-30 09:31:12 +01:00
Muhammad Ibrahim
3466c0c7fb made the setup script download from the right area 2025-09-30 08:53:38 +01:00
9 Technology Group LTD
3da0625231 Merge pull request #97 from 9technologygroup/dev
Fixed npm installation scripts
2025-09-30 08:48:27 +01:00
Muhammad Ibrahim
21d6a3b763 fixed lefthook error on setup.sh 2025-09-30 08:06:03 +01:00
Muhammad Ibrahim
33b2b4b0fe fixed syntax issue in setup.sh 2025-09-30 08:02:30 +01:00
9 Technology Group LTD
479909ecf3 Merge pull request #96 from 9technologygroup/dev
fixed setup installer file
2025-09-30 08:00:16 +01:00
Muhammad Ibrahim
61ca05526b fixed setup installer file 2025-09-30 07:54:50 +01:00
9 Technology Group LTD
e04680bc33 Agent re-worked
Lots of fixes, such as the agent rework, uninstallation scripts, auth API applied to other endpoints, etc.
2025-09-30 07:35:38 +01:00
9 Technology Group LTD
a765b58868 Merge pull request #91 from 9technologygroup/chore/docker_dev
Docker improvements
2025-09-30 00:07:19 +01:00
tigattack
654943a00c refactor(docker): use relative paths 2025-09-30 00:02:33 +01:00
tigattack
b54900aaed docs(docker): add update info 2025-09-30 00:02:33 +01:00
tigattack
cdaba97232 chore: update Docker setup for development 2025-09-30 00:02:33 +01:00
9 Technology Group LTD
45ec71c387 Merge pull request #90 from 9technologygroup/fix/locks_and_docker
fix: Revert Dockerfile edits and lockfile changes
2025-09-29 23:58:08 +01:00
tigattack
823ae7f30a feat: add workflow_dispatch input for Docker image push 2025-09-29 22:36:27 +01:00
tigattack
8553f717e2 fix: regenerate lockfile 2025-09-29 22:00:42 +01:00
tigattack
841b6e41ff fix: remove frontend pkg lock 2025-09-29 21:53:23 +01:00
tigattack
d626493100 fix: Revert Dockerfile edits and lockfile changes
This reverts commits 8409b71857 and 78eb2b183e
2025-09-29 21:53:13 +01:00
Muhammad Ibrahim
12a82a8522 Merge branch 'dev' of github.com:9technologygroup/patchmon.net into dev 2025-09-29 21:14:56 +01:00
Muhammad Ibrahim
44f90edcd2 Suppress false positive linting warnings for useId variables 2025-09-29 21:08:42 +01:00
Muhammad Ibrahim
1cc5254331 Fix missing useId variables in Settings component
- Restored all missing useId() variables that were accidentally removed
- Fixed ReferenceError: protocolId is not defined
- Settings page should now work correctly
2025-09-29 21:08:32 +01:00
9 Technology Group LTD
5bf6283a1c Merge pull request #88 from 9technologygroup/v1-2-7-agent
Agent modification
2025-09-29 20:55:55 +01:00
Muhammad Ibrahim
e9843f80f8 Fix linting warnings for unused variables 2025-09-29 20:42:29 +01:00
Muhammad Ibrahim
b49ea6b197 Update PatchMon version to 1.2.7
- Updated agent script version to 1.2.7
- Updated all package.json files to version 1.2.7
- Updated backend version references
- Updated setup script version references
- Fixed agent file path issues in API endpoints
- Fixed linting issues (Node.js imports, unused variables, accessibility)
- Created comprehensive version update guide in patchmon-admin/READMEs/
2025-09-29 20:42:14 +01:00
Muhammad Ibrahim
49c02a54dc Changed Discord Link 2025-09-29 17:57:13 +01:00
Muhammad Ibrahim
c7b177d5cb removed 44 file 2025-09-29 17:31:34 +01:00
Muhammad Ibrahim
8409b71857 Changed node_modules app dependency folder 2025-09-29 17:26:15 +01:00
Muhammad Ibrahim
78eb2b183e Docker package-lock.json generation for each backend and frontend folder. 2025-09-29 17:20:13 +01:00
Muhammad Ibrahim
b49d225e32 Modified curl so that it ignores SSL errors, more for those who self host.
Modified installer to not replace existing crontab entries
2025-09-29 17:08:44 +01:00
Muhammad Ibrahim
470948165c Fix renovate.json formatting 2025-09-29 15:41:57 +01:00
Muhammad Ibrahim
20df1eceb1 Resolve merge conflict - use v1-2-7 version 2025-09-29 15:38:07 +01:00
Muhammad Ibrahim
372bb42fc5 Merge branch 'v1-2-7' into dev 2025-09-29 15:34:58 +01:00
Muhammad Ibrahim
4a6b486ba1 package-lock.json rebuild 2025-09-29 15:28:27 +01:00
Muhammad Ibrahim
1f5b33eb73 Package-lock.json fixes 2025-09-29 15:22:35 +01:00
Muhammad Ibrahim
aca8b300dd Regenerated package-lock.json 2025-09-29 15:10:21 +01:00
9 Technology Group LTD
c6459a965f Merge branch 'main' into dev 2025-09-29 14:56:48 +01:00
Muhammad Ibrahim
3b72794307 made the crontab installation safe as opposed to nuking it - spotted by thespad 2025-09-29 12:06:13 +01:00
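
The safe pattern, roughly: merge with the existing crontab instead of overwriting it wholesale (the schedule and marker string are assumptions):

```bash
# grep -v drops any previous patchmon entry so re-runs stay idempotent;
# everything else in the user's crontab is preserved.
( crontab -l 2>/dev/null | grep -v 'patchmon-agent' ; \
  echo "0 * * * * /usr/local/bin/patchmon-agent.sh update" ) | crontab -
```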
9 Technology Group LTD
b5b110fed2 Update README.md
Released Website
2025-09-29 09:24:15 +01:00
tigattack
40bf8747b1 fix(frontend.dockerfile): remove unused line 2025-09-28 18:24:02 +01:00
tigattack
178f871582 chore: remove unnecessary use of npx 2025-09-28 18:23:29 +01:00
tigattack
840664c39e chore: fixup package-lock, remove unused deps 2025-09-28 18:20:06 +01:00
Muhammad Ibrahim
c18696f772 Fixed syntax in npm commands 2025-09-28 16:09:50 +01:00
Muhammad Ibrahim
6adbbca439 Changed Docker flags to RUN npm ci --ignore-scripts && \
npm install --only=optional

So that Vite can install the dependencies it needs for multi-arch builds
2025-09-28 16:04:26 +01:00
Muhammad Ibrahim
edfd82a86d Added --ignore-scripts to npm ci commands so the lefthook script does not run on build 2025-09-28 15:58:01 +01:00
Muhammad Ibrahim
bed52a04b2 Merge remote-tracking branch 'origin/v1-2-7' into v1-2-7 2025-09-28 15:46:02 +01:00
9 Technology Group LTD
7369e23061 Merge pull request #65 from 9technologygroup/renovate/vitejs-plugin-react-5.x
chore(deps): update dependency @vitejs/plugin-react to v5
2025-09-28 15:25:28 +01:00
9 Technology Group LTD
271d2c0df1 Merge pull request #64 from 9technologygroup/renovate/lucide-monorepo
fix(deps): update dependency lucide-react to ^0.544.0
2025-09-28 15:25:13 +01:00
renovate[bot]
518b08895e chore(deps): update dependency @vitejs/plugin-react to v5 2025-09-28 14:19:33 +00:00
renovate[bot]
aba8ec5d01 fix(deps): update dependency lucide-react to ^0.544.0 2025-09-28 14:19:20 +00:00
9 Technology Group LTD
630949b7b9 Merge pull request #62 from 9technologygroup/renovate/configure
chore: Configure Renovate
2025-09-28 15:18:14 +01:00
renovate[bot]
82d0ff315f Add renovate.json 2025-09-27 11:34:14 +00:00
9 Technology Group LTD
df04770113 Merge pull request #60 from tigattack/more_fixes
More fixes
2025-09-27 09:34:12 +01:00
9 Technology Group LTD
038d4c515b Update README.md
Fixed discord link to a redirect URL
2025-09-27 09:32:37 +01:00
tigattack
f99e01a120 refactor(frontend): don't store current host tab in localstorage 2025-09-27 02:26:25 +01:00
tigattack
175042690e refactor(frontend): don't store permissions in localstorage 2025-09-27 02:26:21 +01:00
tigattack
102546e45d fix(frontend): eliminate duplicate API calls during log in/out 2025-09-27 01:50:03 +01:00
tigattack
751a202fec fix(frontend): actually fix login after signup 2025-09-27 01:49:53 +01:00
tigattack
c886b812d6 chore: clarify auto-update feature 2025-09-27 00:52:04 +01:00
tigattack
be3fe52aea fix(api): resolve duplicate key constraint errors in host package updates 2025-09-27 00:52:04 +01:00
9 Technology Group LTD
d85920669d Merge pull request #58 from tigattack/post_lint_fixes
Post-lint fixes
2025-09-26 20:28:39 +01:00
tigattack
c4e056711b fix(frontend): use React Router navigation to open add host modal
Fixes page reload on add host button click.
2025-09-26 20:17:09 +01:00
tigattack
60fa598803 fix(frontend): solve missing imports 2025-09-26 20:17:09 +01:00
tigattack
5c66887732 refactor(frontend): optimise auth process
- Stops frontend trying to make calls that require auth before auth has occurred
- Stops frontend making calls that aren't necessary before auth has occurred
- Implements state machine to better handle auth phases
2025-09-26 20:17:09 +01:00
tigattack
ba087eb23e chore: add types/bcryptjs 2025-09-26 20:17:09 +01:00
tigattack
e3aa28a8d9 fix: login after signup
Also resolves entire user object being returned to the client, including password_hash... ⚠️
2025-09-26 20:17:09 +01:00
tigattack
71d9884a86 fix(frontend): imports are unused 2025-09-26 20:16:41 +01:00
9 Technology Group LTD
2c47999cb4 Update README.md 2025-09-26 17:54:34 +01:00
9 Technology Group LTD
6bf2a21f48 Update README.md
Updated discord link
2025-09-26 12:27:19 +01:00
Muhammad Ibrahim
a76a722364 Fixed inline toggle changes when viewing the list of hosts in table 2025-09-26 09:09:02 +01:00
9 Technology Group LTD
40a9003e6f Merge pull request #52 from tigattack/style/fmt
Lint & format code, add rules, hooks, and workflow
2025-09-26 09:00:07 +01:00
tigattack
e9bac06526 Merge pull request #2 from 9technologygroup/iby-suggestions-for-pr-52 2025-09-26 08:03:22 +01:00
Muhammad Ibrahim
0c0446ad69 Added a few dashboard enhancements with Doughnut charts 2025-09-26 02:10:55 +01:00
Muhammad Ibrahim
dbebb866b9 Removed form return data from being output in the console log when a host is created 2025-09-26 01:09:08 +01:00
Muhammad Ibrahim
eb3f3599f9 Fixed group selection to take effect when selecting the group on the host creation page 2025-09-26 01:06:56 +01:00
Muhammad Ibrahim
527b0ccc3c Fixed auto-update toggle to refresh upon save 2025-09-26 01:05:49 +01:00
tigattack
1ff3da0a21 fix(frontend): add missing useId imports 2025-09-26 01:02:29 +01:00
Muhammad Ibrahim
641272dfb8 Put in more env variables and fallback to localhost if BACKEND_URL and BACKEND_PORT are not found
Fixed React imports (useMemo, useEffect, etc.) in the Hosts and HostDetail files
2025-09-26 00:53:44 +01:00
tigattack
3c01c4bfb2 chore: add lefthook commit hook 2025-09-26 00:24:32 +01:00
tigattack
35eb9303b1 style: fmt 2025-09-26 00:24:32 +01:00
tigattack
469107c149 fix: incorrect cardId 2025-09-26 00:24:32 +01:00
tigattack
22f6befc89 fix(backend): lint errors 2025-09-26 00:24:32 +01:00
tigattack
03802daf13 fix(frontend): bad key 2025-09-26 00:24:32 +01:00
tigattack
17509cbf3c fix(frontend): fix missing icons 2025-09-26 00:24:32 +01:00
tigattack
ffbf5f12e5 fix(frontend): add missing import 2025-09-26 00:24:32 +01:00
tigattack
3bdf3d1843 ci: add code quality workflow 2025-09-26 00:24:32 +01:00
tigattack
ea550259ff fix(frontend): unused imports 2025-09-25 23:54:24 +01:00
tigattack
047fdb4bd1 fix(frontend): unused parameters 2025-09-25 23:54:24 +01:00
tigattack
adc142fd85 fix(frontend): Missing radix parameter 2025-09-25 23:54:24 +01:00
tigattack
42f6971da7 fix(frontend): variable unused 2025-09-25 23:54:24 +01:00
tigattack
0414ea39d0 fix(frontend): Use Date.now() instead of new Date() 2025-09-25 23:54:24 +01:00
tigattack
6357839619 fix(frontend): Change to an optional chain 2025-09-25 23:54:24 +01:00
tigattack
c840a3fdcc fix(frontend): isNaN is unsafe 2025-09-25 23:54:24 +01:00
tigattack
a1bf2df59d fix(frontend): unused vars/params 2025-09-25 23:54:24 +01:00
tigattack
67a5462a25 fix(frontend): imports are unused 2025-09-25 23:54:24 +01:00
tigattack
a32007f56b fix(frontend): Static Elements should not be interactive 2025-09-25 23:54:24 +01:00
tigattack
6e1ec0d031 fix(frontend): elements with button role can be changed to button 2025-09-25 23:54:23 +01:00
tigattack
ce2ba0face fix(frontend): hook does not specify its dependency 2025-09-25 23:54:23 +01:00
tigattack
53f8471d75 fix(frontend): Avoid using the index of an array as key property in an element 2025-09-25 23:54:23 +01:00
tigattack
74f42b5bee fix(frontend): A form label must be associated with an input 2025-09-25 23:54:23 +01:00
tigattack
a84da7c731 fix(frontend): Static Elements should not be interactive 2025-09-25 23:54:23 +01:00
tigattack
83ce7c64fd style(frontend): fmt 2025-09-25 23:54:23 +01:00
tigattack
15902da87c fix(frontend): use useId() for input IDs 2025-09-25 23:54:23 +01:00
tigattack
a11f180d23 fix(frontend): add missing explicit types for button elems 2025-09-25 23:54:23 +01:00
tigattack
35bf858977 fix(frontend): use useId() for input IDs 2025-09-25 23:54:23 +01:00
tigattack
330f80478d chore: add biome 2025-09-25 23:54:23 +01:00
tigattack
b43b20fbe9 style(frontend): fmt 2025-09-25 23:14:35 +01:00
tigattack
591389a91f style(backend): fmt 2025-09-25 23:13:49 +01:00
tigattack
6d70a67a49 docs(docker): link to images in GHCR 2025-09-25 23:13:49 +01:00
tigattack
0cca6607d7 chore(docker): tweak db healthcheck
Make the stack faster to start.
2025-09-25 23:13:49 +01:00
tigattack
5f0ce7f26a chore(docker): unpin alpine version 2025-09-25 23:13:49 +01:00
tigattack
38d0dcb3c4 docs(docker): add volumes info 2025-09-25 23:13:49 +01:00
tigattack
03d6ebb43a chore(compose): move agent_files to docker vol 2025-09-25 23:13:49 +01:00
tigattack
b7ce2a3f54 docs(docker): move password update docs 2025-09-25 23:13:49 +01:00
Muhammad Ibrahim
783f8d73fe Replaced older dependencies with newer ones 2025-09-25 15:40:13 +01:00
9 Technology Group LTD
9f72690f82 Merge pull request #48 from 9technologygroup/docker_tweaks 2025-09-25 08:40:35 +01:00
tigattack
48d2a656e5 docs(docker): link to images in GHCR 2025-09-24 21:38:54 +01:00
tigattack
e9402dbf32 chore(docker): tweak db healthcheck
Make the stack faster to start.
2025-09-24 21:36:51 +01:00
tigattack
dc1ad6882c chore(docker): unpin alpine version 2025-09-24 21:35:36 +01:00
9 Technology Group LTD
22f616e110 Merge pull request #47 from 9technologygroup/docs/docker_backend_agents_vol 2025-09-24 21:21:41 +01:00
tigattack
3af269ee47 docs(docker): add volumes info 2025-09-24 21:06:05 +01:00
tigattack
f4ece11636 chore(compose): move agent_files to docker vol 2025-09-24 21:06:05 +01:00
tigattack
da6bd2c098 docs(docker): move password update docs 2025-09-24 21:00:19 +01:00
Muhammad Ibrahim
a479003ba9 fix: return first_name and last_name in setup-admin and signup responses
- Add first_name and last_name to select clause in setup-admin endpoint
- Add first_name and last_name to select clause in signup endpoint
- Ensures frontend receives the name fields after user creation
- Fixes issue where first/last names don't populate in UI after setup

The data was being saved to database correctly but not returned in the API response, causing frontend to not display the names properly.
2025-09-24 20:17:36 +01:00
Muhammad Ibrahim
78f4eff375 fix: correct package versions and npm flags
Frontend package.json fixes:
- react-router-dom: ^6.31.0 → ^6.30.1 (6.31.0 doesn't exist)
- postcss: ^8.5.1 → ^8.5.6 (use latest stable version)

Setup script fixes:
- Replace deprecated --production flag with --omit=dev
- Resolves npm warning: 'npm WARN config production Use --omit=dev instead'

Fixes npm install errors:
- 'No matching version found for react-router-dom@^6.31.0'
- Deprecated npm configuration warnings
2025-09-24 20:07:36 +01:00
Muhammad Ibrahim
c4376d35c9 setup: add intelligent package manager detection for Debian 13+
- Add detect_package_manager() function that prefers 'apt' over 'apt-get'
- Use PKG_MANAGER variables throughout script for consistency
- Support both modern Debian 13+ (apt) and older systems (apt-get)
- Automatically detect and use the best available package manager
- Better error handling when no package manager is found

Fixes compatibility with:
- Debian 13 (Trixie) and newer
- Ubuntu 22.04+ (prefers apt)
- Older Debian/Ubuntu systems (fallback to apt-get)

All package operations now use the detected package manager:
- System updates and upgrades
- Prerequisites installation
- Node.js, PostgreSQL, nginx, certbot installation
2025-09-24 19:55:42 +01:00
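
A sketch of detect_package_manager() as described, with simplified error handling:

```bash
detect_package_manager() {
    if command -v apt >/dev/null 2>&1; then
        PKG_MANAGER="apt"        # Debian 13+ / Ubuntu 22.04+
    elif command -v apt-get >/dev/null 2>&1; then
        PKG_MANAGER="apt-get"    # older Debian/Ubuntu fallback
    else
        echo "Error: no supported package manager found" >&2
        return 1
    fi
}

detect_package_manager || exit 1
$PKG_MANAGER update              # later operations all reuse $PKG_MANAGER
```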
Muhammad Ibrahim
9f3016be57 deps: update all dependencies to latest non-deprecated versions
Frontend:
- ESLint: 8.53.0 → 9.17.0 (fixes deprecated warnings)
- React/React-DOM: 18.2.0 → 18.3.1
- Axios: 1.6.2 → 1.7.9
- Chart.js: 4.4.0 → 4.4.7
- Date-fns: 2.30.0 → 4.1.0 (major update)
- Express: 4.18.2 → 4.21.2
- HTTP-proxy-middleware: 2.0.6 → 3.0.3
- Lucide-react: 0.294.0 → 0.468.0
- React-router-dom: 6.20.1 → 6.31.0
- Add ESLint v9 flat config (eslint.config.js)
- Add required @eslint/js and globals dependencies

Backend:
- Prisma: 5.7.0 → 6.1.0
- Express: 4.18.2 → 4.21.2
- Dotenv: 16.3.1 → 16.4.7
- Express-rate-limit: 7.1.5 → 7.5.0
- Express-validator: 7.0.1 → 7.2.0
- Helmet: 7.1.0 → 8.0.0
- UUID: 9.0.1 → 11.0.3
- Winston: 3.11.0 → 3.17.0
- Nodemon: 3.0.2 → 3.1.9

Fixes npm warnings:
- rimraf < v4 deprecated
- inflight memory leaks
- glob < v9 deprecated
- @humanwhocodes packages deprecated
- ESLint v8 no longer supported
2025-09-24 19:50:50 +01:00
Muhammad Ibrahim
c3013cccd3 Added min specs of VM / LXC 2025-09-24 19:45:18 +01:00
Muhammad Ibrahim
456184d327 setup: improve sudo handling and add root checks
- Add root permission check at startup
- Install sudo automatically if not present (for Debian systems)
- Add run_as_user() helper function with better error handling
- Provide fallback PostgreSQL setup using 'su' when sudo unavailable
- Replace all sudo -u commands with the new helper function
- Better error messages for missing users/sudo

This ensures compatibility with minimal Debian systems that don't have sudo by default.
2025-09-24 19:25:49 +01:00
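
A sketch of run_as_user() with the su fallback for minimal Debian systems (simplified; printf %q re-quotes the arguments so they survive the su -c string):

```bash
run_as_user() {
    local user="$1"; shift
    if command -v sudo >/dev/null 2>&1; then
        sudo -u "$user" "$@"
    else
        # no sudo on minimal Debian: fall back to su
        su -s /bin/bash -c "$(printf '%q ' "$@")" "$user"
    fi
}

# e.g. the PostgreSQL setup path:
run_as_user postgres psql -c "SELECT 1;"
```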
Muhammad Ibrahim
a9c579bdd0 setup: fix IP address handling for database/service names
- Detect when FQDN starts with digit (IP address)
- Generate 2 random letter prefix for database/service names
- Use prefixed names for DB_NAME, DB_USER, INSTANCE_USER
- Keep original FQDN for nginx configs and URLs
- Prevents PostgreSQL errors with numeric database names

Example: 192.168.1.50 -> xy192_168_1_50 for services
2025-09-24 19:22:32 +01:00
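
The naming rule, sketched (the prefix-generation method is an assumption):

```bash
FQDN="192.168.1.50"   # example input from the commit message
if [[ "$FQDN" =~ ^[0-9] ]]; then
    # two random lowercase letters so DB/service names never start with a digit
    prefix="$(tr -dc 'a-z' < /dev/urandom | head -c 2)"
    DB_NAME="${prefix}${FQDN//./_}"   # e.g. xy192_168_1_50
else
    DB_NAME="${FQDN//./_}"
fi
echo "$DB_NAME"       # original FQDN is still used for nginx configs and URLs
```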
Muhammad Ibrahim
13fe8a0bc5 agents: use apt-get simulation for Ubuntu 18 compatibility; fix installer log path 2025-09-24 15:21:22 +01:00
9 Technology Group LTD
1d8742ccad Update README.md 2025-09-24 14:18:41 +01:00
9 Technology Group LTD
bccfc0876f Update README.md 2025-09-24 10:31:27 +01:00
9 Technology Group LTD
43a42aa931 Add files via upload 2025-09-24 10:30:11 +01:00
9 Technology Group LTD
ad5627362b Merge pull request #33 from 9technologygroup/docs/readme
docs: move header to top, add docker info
2025-09-24 10:15:56 +01:00
tigattack
61db61e1ba docs: move getting started above architecture/comms model 2025-09-24 10:07:43 +01:00
tigattack
8dc702e7fc docs: move header to top, add docker info 2025-09-24 10:04:14 +01:00
9 Technology Group LTD
45f5ccc638 Version 1.2.6 latest release 2025-09-24 09:45:57 +01:00
Muhammad Ibrahim
3fbe1369cf Amended installation command 2025-09-24 09:38:30 +01:00
Muhammad Ibrahim
e62a4fed56 Merge branch 'dev' of github.com:9technologygroup/patchmon.net into dev 2025-09-24 09:20:29 +01:00
Muhammad Ibrahim
be549d4b34 Added README.md file -- finally !
Added self-hosting easy installer script -- finally !
2025-09-24 09:08:20 +01:00
Muhammad Ibrahim
99aa79a6a4 Fixed authentication redirect upon signin using states
Finally fixed dashboard defaults for new user signups
2025-09-24 07:42:15 +01:00
Muhammad Ibrahim
73761d8927 fixed dashboard defaults via server.js config 2025-09-24 07:07:38 +01:00
Muhammad Ibrahim
9889083900 Reverted the removal of schema migration files to make it production-safe, in order to not mess up previous instances 2025-09-24 03:05:22 +01:00
Muhammad Ibrahim
acb30f22bd Removed migration settings from earlier which were messing with the logic of the new dashboard layout system. Now has a single source of truth based on what the defaults should be in server.js 2025-09-24 02:54:44 +01:00
Muhammad Ibrahim
3a0b564a6f Fixed permissions issues
Created default user role
modified server.js to check if roles of admin/user is present
modified server.js to check dashboard cards
set up default dashboard cards to show
2025-09-24 02:33:26 +01:00
9 Technology Group LTD
e536a5b706 Merge pull request #29 from 9technologygroup/ci/build
Docker: Build workflow tweaks
2025-09-24 02:07:22 +01:00
tigattack
0a3e4ad5ee ci(build): add PAT note 2025-09-24 00:44:24 +01:00
tigattack
abcf88b8b9 ci(build): use PAT 2025-09-24 00:02:51 +01:00
tigattack
94ec14f08b ci(build): move permissions to higher scope 2025-09-23 17:55:46 +01:00
tigattack
f25834b4ba ci(build): simplify matrix 2025-09-23 17:41:37 +01:00
tigattack
f85464ad26 ci(build): bump actions 2025-09-23 17:41:16 +01:00
9 Technology Group LTD
db0ba201a4 Merge pull request #24 from tigattack/fix/docker_agents 2025-09-23 10:47:27 +01:00
tigattack
676082a967 chore: add docker/agents bind path to gitignore 2025-09-23 08:07:27 +01:00
tigattack
30bb29c9f4 fix(docker): copy agent files in docker bind mount if empty 2025-09-23 08:04:13 +01:00
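The fix above amounts to seeding an empty bind mount from the image at container start. A hypothetical sketch of the pattern; the paths are assumptions, not the repo's actual layout:

```bash
#!/bin/bash
# Entrypoint sketch: if the bind-mounted agents directory is empty,
# seed it from the copy baked into the image.
AGENTS_DIR="/app/agents"
AGENTS_SEED="/app/agents_dist"

if [ -d "$AGENTS_DIR" ] && [ -z "$(ls -A "$AGENTS_DIR" 2>/dev/null)" ]; then
    cp -r "$AGENTS_SEED"/. "$AGENTS_DIR"/
fi
```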
tigattack
968d9f964b docs(docker): add more build instructions 2025-09-23 08:03:10 +01:00
9 Technology Group LTD
3e413e71e4 Merge pull request #21 from tigattack/ci/build_push
Add Docker build pipeline, rework Docker images, misc other tweaks
2025-09-23 04:18:02 +01:00
tigattack
e25baf0f55 ci(docker): build on PRs to dev 2025-09-23 00:04:49 +01:00
tigattack
2869d4e850 ci: add docker build workflow 2025-09-22 23:42:00 +01:00
tigattack
e459d8b378 docs(docker): add docker README 2025-09-22 23:42:00 +01:00
tigattack
31583716c8 refactor(docker): tweak compose setup and envs 2025-09-22 23:42:00 +01:00
tigattack
e645124356 refactor(docker): rework compose, add dev compose 2025-09-22 23:42:00 +01:00
tigattack
c3aa5534f3 fix: improve proxy config 2025-09-22 23:42:00 +01:00
tigattack
bf2ea908f4 refactor(docker): rework docker
- Move Docker files to own directory (tidier since I added several more files)
- Optimise images and reduce size
  - Uses multi-stage builds
  - Optimises layer efficiency
  - Uses NGINX as base for frontend
- Sets default env vars
- Uses tini for proper signal handling
2025-09-22 23:42:00 +01:00
tigattack
43ce146987 fix(frontend): fix login page icons 2025-09-22 23:42:00 +01:00
tigattack
69a121cdde fix(frontend): don't poll for updates when user not authorised 2025-09-22 23:42:00 +01:00
tigattack
001b234ecc fix(backend): set imported agent versions as current
Fixes agent installation script failing with "No agent version available" error

- Auto-imported agent versions now marked as current/default appropriately
- First imported version becomes both current and default
- Newer versions become current (but not default)
- Added concurrent database updates with Promise.all()
- Fixed agent download endpoint fallback to filesystem when no DB versions exist
2025-09-22 23:41:53 +01:00
tigattack
d300922312 refactor(backend): make /agent/download route more resilient
* Fixes early 404 return
* Fixes filename when agent version undefined
* Correctly returns 500 when error occurs
2025-09-22 22:23:54 +01:00
tigattack
20ff5b5b72 refactor(backend): update DB wait env var names 2025-09-22 22:23:54 +01:00
tigattack
5e6a2d863c fix(backend): update settings log msgs 2025-09-22 22:23:54 +01:00
tigattack
ab46b0138b feat(backend): strip default ports from serverUrl 2025-09-22 22:23:54 +01:00
tigattack
5ca0f086d4 fix(frontend): don't force HTTPS port 2025-09-22 22:23:54 +01:00
tigattack
9cb5cd380b feat(backend): add settings service with env var handling
New features:
* Settings initialised on startup rather than first request
* Settings are cached in memory for performance
* Reduction of code duplication/defensive coding practices
2025-09-22 22:23:54 +01:00
tigattack
517b5cd7cb feat(backend): log health endpoint requests at debug level
Avoids log spam in Docker.
2025-09-22 22:23:54 +01:00
tigattack
5dafe34322 feat(backend): add option to log to console
Enabled by `PM_LOG_TO_CONSOLE=true`
2025-09-22 22:23:54 +01:00
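Usage is just the environment variable named in the commit; for example (the server entrypoint path is an assumption):

```bash
# Enable console logging, e.g. when running the backend in Docker,
# where file logs are awkward to tail.
PM_LOG_TO_CONSOLE=true node backend/src/server.js
```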
tigattack
677d3b4df1 feat(backend): wait for DB on start 2025-09-22 22:23:54 +01:00
tigattack
c3365fedb2 fix: conflate frontend_url and server_url 2025-09-22 22:23:54 +01:00
Muhammad Ibrahim
f23f075e41 Added more dashboard cards
Fixed permissions roles creation bug
On initial deployment, the populated agent version is now set as default and current
Fixed host detail to include package numbers
Added ability to add full name
- Fixed loads of other bugs caused by the camelCase to snake_case migration
2025-09-22 21:31:14 +01:00
Muhammad Ibrahim
9b76d9f81a The agent update setting isn't needed in env.example, as it's all done via the server itself 2025-09-22 02:43:58 +01:00
Muhammad Ibrahim
64d9c14002 Sorted out the issue with the installation file not being found upon deployment. Ensured the jq dependency is installed in version 1.2.5 and above of the installation commands 2025-09-22 02:39:22 +01:00
Muhammad Ibrahim
9a01d27d8b Removed populate-agent-version.js as now agent is being imported if newer than existing upon service start 2025-09-22 02:30:12 +01:00
Muhammad Ibrahim
d72f96b598 Improved Agent version import upon service start 2025-09-22 02:22:25 +01:00
Muhammad Ibrahim
8f8b23ccf1 Fixed agent version display issue 2025-09-22 01:46:35 +01:00
Muhammad Ibrahim
1392976a7b Fixed agent version display issue 2025-09-22 01:45:27 +01:00
Muhammad Ibrahim
797be20c45 Created toggle to enable/disable the user signup flow with a user role
Fixed numbers mismatching in host cards
Fixed issues with the settings file
Fixed layouts on hosts/packages/repos
Added ability to delete multiple hosts at once
Fixed Dark mode styling in areas
Removed console debugging messages
Done some other stuff ...
2025-09-22 01:06:18 +01:00
Muhammad Ibrahim
a268f6b8f1 Fixed isAuthenticated function 2025-09-21 22:55:38 +01:00
Muhammad Ibrahim
a4770e5106 Added detailed logging to track first-time admin setup 2025-09-21 22:52:23 +01:00
Muhammad Ibrahim
523756cef2 Fix AuthContext useEffect dependencies and add comprehensive debug logging for first-time setup flow 2025-09-21 22:48:29 +01:00
Muhammad Ibrahim
697da088d4 Fixed admin count endpoint 2025-09-21 22:42:47 +01:00
Muhammad Ibrahim
739ca6486a Fix React error #301 2025-09-21 22:37:53 +01:00
Muhammad Ibrahim
38d299701d Added security restrictions to admin count endpoint and force admin setup for testing 2025-09-21 22:31:07 +01:00
Muhammad Ibrahim
5d35abe496 Implemented first-time admin registration flow if no admin present 2025-09-21 22:09:37 +01:00
Muhammad Ibrahim
7ff051be3e Implemented first-time admin registration flow if no admin present 2025-09-21 22:08:07 +01:00
Muhammad Ibrahim
2de80f0c06 Updated frontend to snake_case and fixed bugs with some pages that were not showing. Fixed authentication side. 2025-09-21 20:27:47 +01:00
9 Technology Group LTD
875ab31317 Merge pull request #2 from AdamT20054/dev
Adam made this with love (and a bit of hate) :D
Whilst he worked until stupid o'clock to get it completed.
2025-09-21 18:49:16 +01:00
AdamT20054
a96439596d Script actually downloads on deb systems (I only use Debian; someone else will have to test the other distros) 2025-09-21 07:54:43 +01:00
AdamT20054
d2bf201f1e Fix script path in hostRoutes to correct installation script location. **Might break non docker installs?** 2025-09-21 07:35:10 +01:00
AdamT20054
b2d3181ffe This is MY docker config, just using it to test 2025-09-21 07:31:51 +01:00
AdamT20054
5a0229cef4 Should fix host configs generating with default params.
Another issue caused by the wack Prisma migration.
2025-09-21 07:28:12 +01:00
AdamT20054
f73c10f309 fml 2025-09-21 07:18:04 +01:00
AdamT20054
8722bd170f Fix the server URL in the dashboard not updating. 2025-09-21 07:11:58 +01:00
AdamT20054
fd76a9efd2 Refactor authentication and routing code to use consistent naming conventions for database fields.
Do NOT update the schema like that again for the love of god.
2025-09-21 06:59:39 +01:00
AdamT20054
584e5ed52b Refactor database model references to use consistent naming conventions and update related queries 2025-09-21 06:13:05 +01:00
AdamT20054
c5ff4b346a Refactor agent version handling to use consistent naming and add download URL 2025-09-21 05:19:09 +01:00
AdamT20054
cc9f0af1ac Fixed my cp fuckup. 2025-09-21 04:30:54 +01:00
Adam O'neill
d7460068d7 Merge branch 'dev' into dev 2025-09-21 04:18:07 +01:00
AdamT20054
9135fa93b3 Update Dockerfile to copy backend files and include agent version population script 2025-09-21 04:14:36 +01:00
AdamT20054
662a8d665a Add script to populate agent version from Docker container initialization 2025-09-21 04:13:40 +01:00
Muhammad Ibrahim
f3351d577d Moved rate limit settings into env 2025-09-21 03:58:22 +01:00
AdamT20054
e1b8e4458a Add signup functionality to Login component with email support and proxy middleware for API requests 2025-09-21 03:32:41 +01:00
AdamT20054
976ca79f57 Add public signup endpoint for user registration with validation 2025-09-21 03:28:39 +01:00
AdamT20054
01a8bd6c77 Add signup API endpoint for user registration 2025-09-21 03:23:55 +01:00
AdamT20054
d210d6adde Update Dockerfile to install OpenSSL and simplify startup command 2025-09-21 03:23:33 +01:00
AdamT20054
229ba4f7be Add Docker configuration for PostgreSQL, backend, and frontend services 2025-09-21 01:43:34 +01:00
Muhammad Ibrahim
9a3827dced Fix hardcoded 1.2.4 values - make frontend load current version from API on mount and update User-Agent dynamically 2025-09-20 14:49:05 +01:00
Muhammad Ibrahim
d687ec4e45 Make version detection dynamic - read from package.json instead of hardcoded values 2025-09-20 14:44:42 +01:00
183 changed files with 58043 additions and 27512 deletions

.dockerignore (new file, 34 lines)

@@ -0,0 +1,34 @@
# Environment files
**/.env
**/.env.*
**/env.example
# Node modules
**/node_modules
# Logs
**/logs
**/*.log
# Git
**/.git
**/.gitignore
# IDE files
**/.vscode
**/.idea
**/*.swp
**/*.swo
# OS files
**/.DS_Store
**/Thumbs.db
# Build artifacts
**/dist
**/build
**/coverage
# Temporary files
**/tmp
**/temp

.github/workflows/app_build.yml (vendored, new file, 25 lines)

@@ -0,0 +1,25 @@
name: Build on Merge
on:
  push:
    branches:
      - main
    paths-ignore:
      - 'docker/**'
jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - name: Checkout code
        uses: actions/checkout@v5
      - name: Run rebuild script
        run: /root/patchmon/platform/scripts/app_build.sh ${{ github.ref_name }}
  rebuild-pmon:
    runs-on: self-hosted
    needs: deploy
    if: github.ref_name == 'dev'
    steps:
      - name: Rebuild pmon
        run: /root/patchmon/platform/scripts/manage_pmon_auto.sh

.github/workflows/code_quality.yml (vendored, new file, 28 lines)

@@ -0,0 +1,28 @@
name: Code quality
on:
  push:
    paths-ignore:
      - 'docker/**'
  pull_request:
    paths-ignore:
      - 'docker/**'
jobs:
  check:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - name: Checkout
        uses: actions/checkout@v5
        with:
          persist-credentials: false
      - name: Setup Biome
        uses: biomejs/setup-biome@v2
        with:
          version: latest
      - name: Run Biome
        run: biome ci .

.github/workflows/docker.yml (vendored, new file, 76 lines)

@@ -0,0 +1,76 @@
name: Build and Push Docker Images
on:
  push:
    branches:
      - main
    tags:
      - 'v*'
  pull_request:
    branches:
      - main
  workflow_dispatch:
    inputs:
      push:
        description: Push images to registry
        required: false
        type: boolean
        default: false
env:
  REGISTRY: ghcr.io
permissions:
  contents: read
  packages: write
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        image: [backend, frontend]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v5
      - name: Log in to container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Extract metadata (tags, labels)
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ github.repository }}-${{ matrix.image }}
          tags: |
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=edge,branch=main
      - name: Build and push ${{ matrix.image }} image
        uses: docker/build-push-action@v6
        with:
          context: .
          file: docker/${{ matrix.image }}.Dockerfile
          platforms: linux/amd64,linux/arm64
          # Push if:
          # - Event is not workflow_dispatch OR input 'push' is true
          # AND
          # - Event is not pull_request OR the PR is from the same repository (to avoid pushing from forks)
          push: ${{ (github.event_name != 'workflow_dispatch' || inputs.push == 'true') && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name == github.repository) }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha,scope=${{ matrix.image }}
          cache-to: type=gha,mode=max,scope=${{ matrix.image }}

.gitignore (vendored, 12 lines changed)

@@ -71,6 +71,13 @@ jspm_packages/
.cache/
public
# Exception: Allow frontend/public/assets for logo files
!frontend/public/
!frontend/public/assets/
!frontend/public/assets/*.png
!frontend/public/assets/*.svg
!frontend/public/assets/*.jpg
# Storybook build outputs
.out
.storybook-out
@@ -130,6 +137,9 @@ agents/*.log
test-results/
playwright-report/
test-results.xml
test_*.sh
test-*.sh
*.code-workspace
# Package manager lock files (uncomment if you want to ignore them)
# package-lock.json
@@ -140,7 +150,9 @@ test-results.xml
deploy-patchmon.sh
manage-instances.sh
manage-patchmon.sh
manage-patchmon-dev.sh
setup-installer-site.sh
install-server.*
notify-clients-upgrade.sh
debug-agent.sh
docker/compose_dev_*

LICENSE (new file, 674 lines)

@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

README.md (299 lines changed)

@@ -1,3 +1,298 @@
Join my discord for Instructions, support and feedback :
# PatchMon - Linux Patch Monitoring made Simple
https://discord.gg/S7RXUHwg
[![Website](https://img.shields.io/badge/Website-patchmon.net-blue?style=for-the-badge)](https://patchmon.net)
[![Discord](https://img.shields.io/badge/Discord-Join%20Server-blue?style=for-the-badge&logo=discord)](https://patchmon.net/discord)
[![GitHub](https://img.shields.io/badge/GitHub-Repository-black?style=for-the-badge&logo=github)](https://github.com/9technologygroup/patchmon.net)
[![Roadmap](https://img.shields.io/badge/Roadmap-View%20Progress-green?style=for-the-badge&logo=github)](https://github.com/users/9technologygroup/projects/1)
[![Documentation](https://img.shields.io/badge/Documentation-docs.patchmon.net-blue?style=for-the-badge&logo=book)](https://docs.patchmon.net/)
---
## Please STAR this repo :D
## Purpose
PatchMon provides centralized patch management across diverse server environments. Agents communicate outbound-only to the PatchMon server, eliminating inbound ports on monitored hosts while delivering comprehensive visibility and safe automation.
![Dashboard Screenshot](https://raw.githubusercontent.com/PatchMon/PatchMon/main/dashboard.jpeg)
## Features
### Dashboard
- Customisable dashboard with per-user card layout and ordering
### Users & Authentication
- Multi-user accounts (admin and standard users)
- Roles, Permissions & RBAC
### Hosts & Inventory
- Host inventory/groups with key attributes and OS details
- Host grouping (create and manage host groups)
### Packages & Updates
- Package inventory across hosts
- Outdated packages overview and counts
- Repositories per host tracking
### Agent & Data Collection
- Agent version management and script content stored in DB
### Settings & Configuration
- Server URL/protocol/host/port
- Signup toggle and default user role selection
### API & Integrations
- REST API under `/api/v1` with JWT auth
- Proxmox LXC Auto-Enrollment - Automatically discover and enroll LXC containers from Proxmox hosts
### Security
- Rate limiting for general, auth, and agent endpoints
- Outbound-only agent model reduces attack surface
### Deployment & Operations
- Docker installation & one-line self-host installer (Ubuntu/Debian)
- systemd service for backend lifecycle
- nginx vhost for frontend + API proxy; optional Let's Encrypt integration
## Getting Started
### PatchMon Cloud (coming soon)
Managed, zero-maintenance PatchMon hosting. Stay tuned.
### Self-hosted Installation
#### Docker (preferred)
For getting started with Docker, see the [Docker documentation](https://github.com/PatchMon/PatchMon/blob/main/docker/README.md)
#### Native Install (advanced/non-docker)
Run on a clean Ubuntu/Debian server with internet access:
#### Debian:
```bash
apt update -y
apt upgrade -y
apt install curl -y
```
#### Ubuntu:
```bash
apt-get update -y
apt-get upgrade -y
apt install curl -y
```
#### Install Script
```bash
curl -fsSL -o setup.sh https://raw.githubusercontent.com/PatchMon/PatchMon/refs/heads/main/setup.sh && chmod +x setup.sh && bash setup.sh
```
#### Update Script (--update flag)
```bash
curl -fsSL -o setup.sh https://raw.githubusercontent.com/PatchMon/PatchMon/refs/heads/main/setup.sh && chmod +x setup.sh && bash setup.sh --update
```
#### Minimum specs for building
- CPU: 2 vCPU
- RAM: 2 GB
- Disk: 15 GB
During setup you'll be asked:
- Domain/IP: public DNS or local IP (default: `patchmon.internal`)
- SSL/HTTPS: `y` for public deployments with a public IP, `n` for internal networks
- Email: only if SSL is enabled (for Let's Encrypt)
- Git Branch: default is `main` (press Enter)
The script will:
- Install prerequisites (Node.js, PostgreSQL, nginx)
- Clone the repo, install dependencies, build the frontend, run migrations
- Create a systemd service and nginx site vhost config
- Start the service and write a consolidated info file at:
- `/opt/<your-domain>/deployment-info.txt`
- Copy the full installer log from `/var/log/patchmon-install.log` to `/opt/<your-domain>/patchmon-install.log`
After installation:
- Visit `http(s)://<your-domain>` and complete first-time admin setup
- See all useful info in `deployment-info.txt`
## Forcing updates after host package changes
Should you perform a manual package update on your host and wish to see the results reflected in PatchMon sooner than the next scheduled run, you can trigger the process manually by running:
```bash
/usr/local/bin/patchmon-agent.sh update
```
This will send the results immediately to PatchMon.
## Communication Model
- Outbound-only agents: servers initiate communication to PatchMon
- No inbound connections required on monitored servers
- Secure server-side API with JWT authentication and rate limiting
## Architecture
- Backend: Node.js/Express + Prisma + PostgreSQL
- Frontend: Vite + React
- Reverse proxy: nginx
- Database: PostgreSQL
- System service: systemd-managed backend
```mermaid
flowchart LR
A[End Users / Browser<br>Admin UI / Frontend] -- HTTPS --> B[nginx<br>serve FE, proxy API]
B -- HTTP --> C["Backend<br>(Node/Express)<br>/api, auth, Prisma"]
C -- TCP --> D[PostgreSQL<br>Database]
E["Agents on your servers (Outbound Only)"] -- HTTPS --> F["Backend API<br>(/api/v1)"]
```
### Operational
- systemd manages backend service
- certbot/nginx for TLS (public)
- setup.sh bootstraps OS, app, DB, config
## Support
- Discord: [https://patchmon.net/discord](https://patchmon.net/discord)
- Email: support@patchmon.net
## Roadmap
- Roadmap board: https://github.com/orgs/PatchMon/projects/2
## License
- AGPLv3 (More information on this soon)
---
## 🤝 Contributing
We welcome contributions from the community! Here's how you can get involved:
### Development Setup
1. **Fork the Repository**
```bash
# Click the "Fork" button on GitHub, then clone your fork
git clone https://github.com/YOUR_USERNAME/patchmon.net.git
cd patchmon.net
```
2. **Create a Feature Branch**
```bash
git checkout -b feature/your-feature-name
# or
git checkout -b fix/your-bug-fix
```
3. **Install Dependencies and Setup Hooks**
```bash
npm install
npm run prepare
```
4. **Make Your Changes**
- Write clean, well-documented code
- Follow existing code style and patterns
- Add tests for new functionality
- Update documentation as needed
5. **Test Your Changes**
```bash
# Run backend tests
cd backend
npm test
# Run frontend tests
cd ../frontend
npm test
```
6. **Commit and Push**
```bash
git add .
git commit -m "Add: descriptive commit message"
git push origin feature/your-feature-name
```
7. **Create a Pull Request**
- Go to your fork on GitHub
- Click "New Pull Request"
- Provide a clear description of your changes
- Link any related issues
### Contribution Guidelines
- **Code Style**: Follow the existing code patterns and Biome configuration
- **Commits**: Use conventional commit messages (feat:, fix:, docs:, etc.)
- **Testing**: Ensure all tests pass and add tests for new features
- **Documentation**: Update README and code comments as needed
- **Issues**: Check existing issues before creating new ones
---
## 🏢 Enterprise & Custom Solutions
### PatchMon Cloud
- **Fully Managed**: We handle all infrastructure and maintenance
- **Scalable**: Grows with your organization
- **Secure**: Enterprise-grade security and compliance
- **Support**: Dedicated support team
### Custom Integrations
- **API Development**: Custom endpoints for your specific needs
- **Third-Party Integrations**: Connect with your existing tools
- **Custom Dashboards**: Tailored reporting and visualization
- **White-Label Solutions**: Brand PatchMon as your own
### Enterprise Deployment
- **On-Premises**: Deploy in your own data center
- **Air-Gapped**: Support for isolated environments
- **Compliance**: Meet industry-specific requirements
- **Training**: Comprehensive team training and onboarding
*Contact us at support@patchmon.net for enterprise inquiries*
---
---
## 🙏 Acknowledgments
### Special Thanks
- **Jonathan Higson** - For inspiration, ideas, and valuable feedback
- **@Adam20054** - For working on Docker Compose deployment
- **@tigattack** - For working on GitHub CI/CD pipelines
- **Cloud X** and **Crazy Dead** - For moderating our Discord server and keeping the community awesome
- **Beta Testers** - For keeping me awake at night
- **My family** - For understanding my passion
### Contributors
Thank you to all our contributors who help make PatchMon better every day!
## 🔗 Links
- **Website**: [patchmon.net](https://patchmon.net)
- **Discord**: [https://patchmon.net/discord](https://patchmon.net/discord)
- **Roadmap**: [GitHub Projects](https://github.com/users/9technologygroup/projects/1)
- **Documentation**: [https://docs.patchmon.net](https://docs.patchmon.net)
- **Support**: support@patchmon.net
---
<div align="center">
**Made with ❤️ by the PatchMon Team**
[![Discord](https://img.shields.io/badge/Discord-Join%20Server-blue?style=for-the-badge&logo=discord)](https://patchmon.net/discord)
[![GitHub](https://img.shields.io/badge/GitHub-Repository-black?style=for-the-badge&logo=github)](https://github.com/PatchMon/PatchMon)
</div>


@@ -1,16 +1,21 @@
#!/bin/bash
# PatchMon Agent Script v1.2.5
# PatchMon Agent Script v1.2.8
# This script sends package update information to the PatchMon server using API credentials
# Configuration
PATCHMON_SERVER="${PATCHMON_SERVER:-http://localhost:3001}"
API_VERSION="v1"
AGENT_VERSION="1.2.5"
AGENT_VERSION="1.2.8"
CONFIG_FILE="/etc/patchmon/agent.conf"
CREDENTIALS_FILE="/etc/patchmon/credentials"
LOG_FILE="/var/log/patchmon-agent.log"
# This placeholder will be dynamically replaced by the server when serving this
# script based on the "ignore SSL self-signed" setting. If set to -k, curl will
# ignore certificate validation. Otherwise, it will be empty for secure default.
CURL_FLAGS=""
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
@@ -20,7 +25,10 @@ NC='\033[0m' # No Color
# Logging function
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
# Try to write to log file, but don't fail if we can't
if [[ -w "$(dirname "$LOG_FILE")" ]] 2>/dev/null; then
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE" 2>/dev/null
fi
}
# Error handling
@@ -30,24 +38,46 @@ error() {
exit 1
}
# Info logging
# Info logging (cleaner output - only stdout, no duplicate logging)
info() {
echo -e "${BLUE}INFO: $1${NC}"
echo -e "${BLUE} $1${NC}"
log "INFO: $1"
}
# Success logging
# Success logging (cleaner output - only stdout, no duplicate logging)
success() {
echo -e "${GREEN}SUCCESS: $1${NC}"
echo -e "${GREEN} $1${NC}"
log "SUCCESS: $1"
}
# Warning logging
# Warning logging (cleaner output - only stdout, no duplicate logging)
warning() {
echo -e "${YELLOW}WARNING: $1${NC}"
echo -e "${YELLOW}⚠️ $1${NC}"
log "WARNING: $1"
}
# Get or generate machine ID
get_machine_id() {
# Try standard locations for machine-id
if [[ -f /etc/machine-id ]]; then
cat /etc/machine-id
elif [[ -f /var/lib/dbus/machine-id ]]; then
cat /var/lib/dbus/machine-id
else
# Fallback: generate from hardware UUID or hostname+MAC
if command -v dmidecode &> /dev/null; then
local uuid=$(dmidecode -s system-uuid 2>/dev/null | tr -d ' -' | tr '[:upper:]' '[:lower:]')
if [[ -n "$uuid" && "$uuid" != "notpresent" ]]; then
echo "$uuid"
return
fi
fi
# Last resort: hash hostname + primary MAC address
local primary_mac=$(ip link show | grep -oP '(?<=link/ether\s)[0-9a-f:]+' | head -1 | tr -d ':')
echo "$HOSTNAME-$primary_mac" | sha256sum | cut -d' ' -f1 | cut -c1-32
fi
}
# Check if running as root
check_root() {
if [[ $EUID -ne 0 ]]; then
@@ -55,6 +85,31 @@ check_root() {
fi
}
# Verify system datetime and timezone
verify_datetime() {
info "Verifying system datetime and timezone..."
# Get current system time
local system_time=$(date)
local timezone="Unknown"
# Try to get timezone with timeout protection
if command -v timedatectl >/dev/null 2>&1; then
timezone=$(timedatectl show --property=Timezone --value 2>/dev/null || echo "Unknown")
fi
# Log datetime info (non-blocking)
log "System datetime check - time: $system_time, timezone: $timezone" 2>/dev/null || true
# Simple check - just log the info, don't block execution
if [[ "$timezone" == "Unknown" ]] || [[ -z "$timezone" ]]; then
warning "System timezone not configured: $timezone"
log "WARNING: System timezone not configured - timezone: $timezone" 2>/dev/null || true
fi
return 0
}
# Create necessary directories
setup_directories() {
mkdir -p /etc/patchmon
@@ -144,7 +199,7 @@ EOF
test_credentials() {
load_credentials
local response=$(curl -s -X POST \
local response=$(curl $CURL_FLAGS -X POST \
-H "Content-Type: application/json" \
-H "X-API-ID: $API_ID" \
-H "X-API-KEY: $API_KEY" \
@@ -176,9 +231,14 @@ detect_os() {
"opensuse"|"opensuse-leap"|"opensuse-tumbleweed")
OS_TYPE="suse"
;;
"rocky"|"almalinux")
"almalinux")
OS_TYPE="rhel"
;;
"ol")
# Keep Oracle Linux as 'ol' for proper frontend identification
OS_TYPE="ol"
;;
# Rocky Linux keeps its own identity for proper frontend display
esac
elif [[ -f /etc/redhat-release ]]; then
@@ -206,7 +266,7 @@ get_repository_info() {
"ubuntu"|"debian")
get_apt_repositories repos_json first
;;
"centos"|"rhel"|"fedora")
"centos"|"rhel"|"fedora"|"ol"|"rocky")
get_yum_repositories repos_json first
;;
*)
@@ -514,14 +574,118 @@ get_yum_repositories() {
local -n first_ref=$2
# Parse yum/dnf repository configuration
local repo_info=""
if command -v dnf >/dev/null 2>&1; then
local repo_info=$(dnf repolist all --verbose 2>/dev/null | grep -E "^Repo-id|^Repo-baseurl|^Repo-name|^Repo-status")
repo_info=$(dnf repolist all --verbose 2>/dev/null | grep -E "^Repo-id|^Repo-baseurl|^Repo-mirrors|^Repo-name|^Repo-status")
elif command -v yum >/dev/null 2>&1; then
local repo_info=$(yum repolist all -v 2>/dev/null | grep -E "^Repo-id|^Repo-baseurl|^Repo-name|^Repo-status")
repo_info=$(yum repolist all -v 2>/dev/null | grep -E "^Repo-id|^Repo-baseurl|^Repo-mirrors|^Repo-name|^Repo-status")
fi
# This is a simplified implementation - would need more work for full YUM support
# For now, return empty for non-APT systems
if [[ -z "$repo_info" ]]; then
return
fi
# Parse repository information
local current_repo=""
local repo_id=""
local repo_name=""
local repo_url=""
local repo_mirrors=""
local repo_status=""
while IFS= read -r line; do
if [[ "$line" =~ ^Repo-id[[:space:]]+:[[:space:]]+(.+)$ ]]; then
# Process previous repository if we have one
if [[ -n "$current_repo" ]]; then
process_yum_repo repos_ref first_ref "$repo_id" "$repo_name" "$repo_url" "$repo_mirrors" "$repo_status"
fi
# Start new repository
repo_id="${BASH_REMATCH[1]}"
repo_name="$repo_id"
repo_url=""
repo_mirrors=""
repo_status=""
current_repo="$repo_id"
elif [[ "$line" =~ ^Repo-name[[:space:]]+:[[:space:]]+(.+)$ ]]; then
repo_name="${BASH_REMATCH[1]}"
elif [[ "$line" =~ ^Repo-baseurl[[:space:]]+:[[:space:]]+(.+)$ ]]; then
repo_url="${BASH_REMATCH[1]}"
elif [[ "$line" =~ ^Repo-mirrors[[:space:]]+:[[:space:]]+(.+)$ ]]; then
repo_mirrors="${BASH_REMATCH[1]}"
elif [[ "$line" =~ ^Repo-status[[:space:]]+:[[:space:]]+(.+)$ ]]; then
repo_status="${BASH_REMATCH[1]}"
fi
done <<< "$repo_info"
# Process the last repository
if [[ -n "$current_repo" ]]; then
process_yum_repo repos_ref first_ref "$repo_id" "$repo_name" "$repo_url" "$repo_mirrors" "$repo_status"
fi
}
# Process a single YUM repository and add it to the JSON
process_yum_repo() {
local -n _repos_ref=$1
local -n _first_ref=$2
local repo_id="$3"
local repo_name="$4"
local repo_url="$5"
local repo_mirrors="$6"
local repo_status="$7"
# Skip if we don't have essential info
if [[ -z "$repo_id" ]]; then
return
fi
# Determine if repository is enabled
local is_enabled=false
if [[ "$repo_status" == "enabled" ]]; then
is_enabled=true
fi
# Use baseurl if available, otherwise use mirrors URL
local final_url=""
if [[ -n "$repo_url" ]]; then
# Extract first URL if multiple are listed
final_url=$(echo "$repo_url" | head -n 1 | awk '{print $1}')
elif [[ -n "$repo_mirrors" ]]; then
final_url="$repo_mirrors"
fi
# Skip if we don't have any URL
if [[ -z "$final_url" ]]; then
return
fi
# Determine if repository uses HTTPS
local is_secure=false
if [[ "$final_url" =~ ^https:// ]]; then
is_secure=true
fi
# Generate repository name if not provided
if [[ -z "$repo_name" ]]; then
repo_name="$repo_id"
fi
# Clean up repository name and URL - escape quotes and backslashes
repo_name=$(echo "$repo_name" | sed 's/\\/\\\\/g' | sed 's/"/\\"/g')
final_url=$(echo "$final_url" | sed 's/\\/\\\\/g' | sed 's/"/\\"/g')
# Add to JSON
if [[ "$_first_ref" == true ]]; then
_first_ref=false
else
_repos_ref+=","
fi
_repos_ref+="{\"name\":\"$repo_name\",\"url\":\"$final_url\",\"distribution\":\"$OS_VERSION\",\"components\":\"main\",\"repoType\":\"rpm\",\"isEnabled\":$is_enabled,\"isSecure\":$is_secure}"
}
# Get package information based on OS
@@ -533,11 +697,11 @@ get_package_info() {
"ubuntu"|"debian")
get_apt_packages packages_json first
;;
"centos"|"rhel"|"fedora")
"centos"|"rhel"|"fedora"|"ol"|"rocky")
get_yum_packages packages_json first
;;
*)
error "Unsupported OS type: $OS_TYPE"
warning "Unsupported OS type: $OS_TYPE - returning empty package list"
;;
esac
@@ -550,41 +714,69 @@ get_apt_packages() {
local -n packages_ref=$1
local -n first_ref=$2
# Update package lists
apt-get update -qq
# Update package lists with retry logic for lock conflicts
local retry_count=0
local max_retries=3
local retry_delay=5
# Get upgradable packages
local upgradable=$(apt list --upgradable 2>/dev/null | grep -v "WARNING")
while [[ $retry_count -lt $max_retries ]]; do
if apt-get update -qq 2>/dev/null; then
break
else
retry_count=$((retry_count + 1))
if [[ $retry_count -lt $max_retries ]]; then
warning "APT lock detected, retrying in ${retry_delay} seconds... (attempt $retry_count/$max_retries)"
sleep $retry_delay
else
warning "APT lock persists after $max_retries attempts, continuing without update..."
fi
fi
done
# Determine upgradable packages using apt-get simulation (compatible with Ubuntu 18.04)
# Example line format:
# Inst bash [4.4.18-2ubuntu1] (4.4.18-2ubuntu1.2 Ubuntu:18.04/bionic-updates [amd64])
local upgradable_sim=$(apt-get -s -o Debug::NoLocking=1 upgrade 2>/dev/null | grep "^Inst ")
while IFS= read -r line; do
if [[ "$line" =~ ^([^/]+)/([^[:space:]]+)[[:space:]]+([^[:space:]]+)[[:space:]]+.*[[:space:]]([^[:space:]]+)[[:space:]]*(\[.*\])? ]]; then
# Extract package name, current version (in brackets), and available version (first token inside parentheses)
if [[ "$line" =~ ^Inst[[:space:]]+([^[:space:]]+)[[:space:]]+\[([^\]]+)\][[:space:]]+\(([^[:space:]]+) ]]; then
local package_name="${BASH_REMATCH[1]}"
local current_version="${BASH_REMATCH[4]}"
local current_version="${BASH_REMATCH[2]}"
local available_version="${BASH_REMATCH[3]}"
local is_security_update=false
# Check if it's a security update
if echo "$line" | grep -q "security"; then
# Mark as security update if the line references a security pocket
if echo "$line" | grep -qiE "(-|/)security"; then
is_security_update=true
fi
# Escape JSON special characters in package data
package_name=$(echo "$package_name" | sed 's/"/\\"/g' | sed 's/\\/\\\\/g')
current_version=$(echo "$current_version" | sed 's/"/\\"/g' | sed 's/\\/\\\\/g')
available_version=$(echo "$available_version" | sed 's/"/\\"/g' | sed 's/\\/\\\\/g')
if [[ "$first_ref" == true ]]; then
first_ref=false
else
packages_ref+=","
packages_ref+=","
fi
packages_ref+="{\"name\":\"$package_name\",\"currentVersion\":\"$current_version\",\"availableVersion\":\"$available_version\",\"needsUpdate\":true,\"isSecurityUpdate\":$is_security_update}"
fi
done <<< "$upgradable"
done <<< "$upgradable_sim"
# Get installed packages that are up to date
local installed=$(dpkg-query -W -f='${Package} ${Version}\n' | head -100)
local installed=$(dpkg-query -W -f='${Package} ${Version}\n')
while IFS=' ' read -r package_name version; do
if [[ -n "$package_name" && -n "$version" ]]; then
# Check if this package is not in the upgrade list
if ! echo "$upgradable" | grep -q "^$package_name/"; then
if ! echo "$upgradable_sim" | grep -q "^Inst $package_name "; then
# Escape JSON special characters in package data
package_name=$(echo "$package_name" | sed 's/"/\\"/g' | sed 's/\\/\\\\/g')
version=$(echo "$version" | sed 's/"/\\"/g' | sed 's/\\/\\\\/g')
if [[ "$first_ref" == true ]]; then
first_ref=false
else
@@ -608,16 +800,31 @@ get_yum_packages() {
fi
# Get upgradable packages
local upgradable=$($package_manager check-update 2>/dev/null | grep -v "^$" | grep -v "^Loaded" | grep -v "^Last metadata" | tail -n +2)
local upgradable=$($package_manager check-update 2>/dev/null | grep -v "^$" | grep -v "^Loaded" | grep -v "^Last metadata" | grep -v "^Security" | tail -n +2)
while IFS= read -r line; do
# Skip empty lines and lines with special characters
[[ -z "$line" ]] && continue
[[ "$line" =~ ^[[:space:]]*$ ]] && continue
if [[ "$line" =~ ^([^[:space:]]+)[[:space:]]+([^[:space:]]+)[[:space:]]+([^[:space:]]+) ]]; then
local package_name="${BASH_REMATCH[1]}"
local available_version="${BASH_REMATCH[2]}"
local repo="${BASH_REMATCH[3]}"
# Sanitize package name and versions (remove any control characters)
package_name=$(echo "$package_name" | tr -d '[:cntrl:]' | sed 's/[^a-zA-Z0-9._+-]//g')
available_version=$(echo "$available_version" | tr -d '[:cntrl:]' | sed 's/[^a-zA-Z0-9._+-]//g')
repo=$(echo "$repo" | tr -d '[:cntrl:]')
# Skip if package name is empty after sanitization
[[ -z "$package_name" ]] && continue
# Get current version
local current_version=$($package_manager list installed "$package_name" 2>/dev/null | grep "^$package_name" | awk '{print $2}')
local current_version=$($package_manager list installed "$package_name" 2>/dev/null | grep "^$package_name" | awk '{print $2}' | tr -d '[:cntrl:]' | sed 's/[^a-zA-Z0-9._+-]//g')
# Skip if we couldn't get current version
[[ -z "$current_version" ]] && current_version="unknown"
local is_security_update=false
if echo "$repo" | grep -q "security"; then
@@ -635,13 +842,25 @@ get_yum_packages() {
done <<< "$upgradable"
# Get some installed packages that are up to date
local installed=$($package_manager list installed 2>/dev/null | grep -v "^Loaded" | grep -v "^Installed" | head -100)
local installed=$($package_manager list installed 2>/dev/null | grep -v "^Loaded" | grep -v "^Installed")
while IFS= read -r line; do
# Skip empty lines
[[ -z "$line" ]] && continue
[[ "$line" =~ ^[[:space:]]*$ ]] && continue
if [[ "$line" =~ ^([^[:space:]]+)[[:space:]]+([^[:space:]]+) ]]; then
local package_name="${BASH_REMATCH[1]}"
local version="${BASH_REMATCH[2]}"
# Sanitize package name and version
package_name=$(echo "$package_name" | tr -d '[:cntrl:]' | sed 's/[^a-zA-Z0-9._+-]//g')
version=$(echo "$version" | tr -d '[:cntrl:]' | sed 's/[^a-zA-Z0-9._+-]//g')
# Skip if package name is empty after sanitization
[[ -z "$package_name" ]] && continue
[[ -z "$version" ]] && version="unknown"
# Check if this package is not in the upgrade list
if ! echo "$upgradable" | grep -q "^$package_name "; then
if [[ "$first_ref" == true ]]; then
@@ -675,11 +894,21 @@ get_hardware_info() {
# Memory Information
if command -v free >/dev/null 2>&1; then
ram_installed=$(free -g | grep "^Mem:" | awk '{print $2}')
swap_size=$(free -g | grep "^Swap:" | awk '{print $2}')
# Use free -m to get MB, then convert to GB with decimal precision
ram_installed=$(free -m | grep "^Mem:" | awk '{printf "%.2f", $2/1024}')
swap_size=$(free -m | grep "^Swap:" | awk '{printf "%.2f", $2/1024}')
elif [[ -f /proc/meminfo ]]; then
ram_installed=$(grep "MemTotal" /proc/meminfo | awk '{print int($2/1024/1024)}')
swap_size=$(grep "SwapTotal" /proc/meminfo | awk '{print int($2/1024/1024)}')
# Convert KB to GB with decimal precision
ram_installed=$(grep "MemTotal" /proc/meminfo | awk '{printf "%.2f", $2/1048576}')
swap_size=$(grep "SwapTotal" /proc/meminfo | awk '{printf "%.2f", $2/1048576}')
fi
# Ensure minimum value of 0.01GB to prevent 0 values
if (( $(echo "$ram_installed < 0.01" | bc -l) )); then
ram_installed="0.01"
fi
if (( $(echo "$swap_size < 0" | bc -l) )); then
swap_size="0"
fi
# Disk Information
@@ -737,8 +966,16 @@ get_system_info() {
# SELinux Status
if command -v getenforce >/dev/null 2>&1; then
selinux_status=$(getenforce 2>/dev/null | tr '[:upper:]' '[:lower:]')
# Map "enforcing" to "enabled" for server validation
if [[ "$selinux_status" == "enforcing" ]]; then
selinux_status="enabled"
fi
elif [[ -f /etc/selinux/config ]]; then
selinux_status=$(grep "^SELINUX=" /etc/selinux/config | cut -d'=' -f2 | tr '[:upper:]' '[:lower:]')
# Map "enforcing" to "enabled" for server validation
if [[ "$selinux_status" == "enforcing" ]]; then
selinux_status="enabled"
fi
else
selinux_status="disabled"
fi
@@ -768,27 +1005,43 @@ get_system_info() {
send_update() {
load_credentials
info "Collecting package information..."
local packages_json=$(get_package_info)
# Track execution start time
local start_time=$(date +%s.%N)
info "Collecting repository information..."
local repositories_json=$(get_repository_info)
info "Collecting hardware information..."
local hardware_json=$(get_hardware_info)
info "Collecting network information..."
local network_json=$(get_network_info)
# Verify datetime before proceeding
if ! verify_datetime; then
warning "Datetime verification failed, but continuing with update..."
fi
info "Collecting system information..."
local packages_json=$(get_package_info)
local repositories_json=$(get_repository_info)
local hardware_json=$(get_hardware_info)
local network_json=$(get_network_info)
local system_json=$(get_system_info)
# Validate JSON before sending
if ! echo "$packages_json" | jq empty 2>/dev/null; then
error "Invalid packages JSON generated: $packages_json"
fi
if ! echo "$repositories_json" | jq empty 2>/dev/null; then
error "Invalid repositories JSON generated: $repositories_json"
fi
info "Sending update to PatchMon server..."
# Merge all JSON objects into one
local merged_json=$(echo "$hardware_json $network_json $system_json" | jq -s '.[0] * .[1] * .[2]')
# Get machine ID
local machine_id=$(get_machine_id)
local payload=$(cat <<EOF
# Calculate execution time (in seconds with decimals)
local end_time=$(date +%s.%N)
local execution_time=$(echo "$end_time - $start_time" | bc)
# Create the base payload and merge with system info
local base_payload=$(cat <<EOF
{
"packages": $packages_json,
"repositories": $repositories_json,
@@ -798,72 +1051,72 @@ send_update() {
"ip": "$IP_ADDRESS",
"architecture": "$ARCHITECTURE",
"agentVersion": "$AGENT_VERSION",
$merged_json
"machineId": "$machine_id",
"executionTime": $execution_time
}
EOF
)
local response=$(curl -s -X POST \
# Merge the base payload with the system information
local payload=$(echo "$base_payload $merged_json" | jq -s '.[0] * .[1]')
# Write payload to temporary file to avoid "Argument list too long" error
local temp_payload_file=$(mktemp)
echo "$payload" > "$temp_payload_file"
# Debug: Show payload size
local payload_size=$(wc -c < "$temp_payload_file")
echo -e "${BLUE} 📊 Payload size: $payload_size bytes${NC}"
local response=$(curl $CURL_FLAGS -X POST \
-H "Content-Type: application/json" \
-H "X-API-ID: $API_ID" \
-H "X-API-KEY: $API_KEY" \
-d "$payload" \
"$PATCHMON_SERVER/api/$API_VERSION/hosts/update")
-d @"$temp_payload_file" \
"$PATCHMON_SERVER/api/$API_VERSION/hosts/update" 2>&1)
if [[ $? -eq 0 ]]; then
local curl_exit_code=$?
# Clean up temporary file
rm -f "$temp_payload_file"
if [[ $curl_exit_code -eq 0 ]]; then
if echo "$response" | grep -q "success"; then
success "Update sent successfully"
echo "$response" | grep -o '"packagesProcessed":[0-9]*' | cut -d':' -f2 | xargs -I {} info "Processed {} packages"
local packages_count=$(echo "$response" | grep -o '"packagesProcessed":[0-9]*' | cut -d':' -f2)
success "Update sent successfully (${packages_count} packages processed)"
# Check for auto-update instructions (look specifically in autoUpdate section)
if echo "$response" | grep -q '"autoUpdate":{'; then
local auto_update_section=$(echo "$response" | grep -o '"autoUpdate":{[^}]*}')
local should_update=$(echo "$auto_update_section" | grep -o '"shouldUpdate":true' | cut -d':' -f2)
if [[ "$should_update" == "true" ]]; then
local latest_version=$(echo "$auto_update_section" | grep -o '"latestVersion":"[^"]*' | cut -d'"' -f4)
local current_version=$(echo "$auto_update_section" | grep -o '"currentVersion":"[^"]*' | cut -d'"' -f4)
local update_message=$(echo "$auto_update_section" | grep -o '"message":"[^"]*' | cut -d'"' -f4)
info "Auto-update detected: $update_message"
info "Current version: $current_version, Latest version: $latest_version"
# Automatically run update-agent command
info "Automatically updating agent to latest version..."
# Check if auto-update is enabled and check for agent updates locally
if check_auto_update_enabled; then
info "Checking for agent updates..."
if check_agent_update_needed; then
info "Agent update available, updating..."
if "$0" update-agent; then
success "Agent auto-update completed successfully"
success "Agent updated successfully"
else
warning "Agent auto-update failed, but data was sent successfully"
warning "Agent update failed, but data was sent successfully"
fi
else
info "Agent is up to date"
fi
fi
# Check for crontab update instructions (look specifically in crontabUpdate section)
if echo "$response" | grep -q '"crontabUpdate":{'; then
local crontab_update_section=$(echo "$response" | grep -o '"crontabUpdate":{[^}]*}')
local should_update_crontab=$(echo "$crontab_update_section" | grep -o '"shouldUpdate":true' | cut -d':' -f2)
if [[ "$should_update_crontab" == "true" ]]; then
local crontab_message=$(echo "$crontab_update_section" | grep -o '"message":"[^"]*' | cut -d'"' -f4)
local crontab_command=$(echo "$crontab_update_section" | grep -o '"command":"[^"]*' | cut -d'"' -f4)
if [[ -n "$crontab_message" ]]; then
info "Crontab update detected: $crontab_message"
fi
if [[ "$crontab_command" == "update-crontab" ]]; then
info "Automatically updating crontab with new interval..."
if "$0" update-crontab; then
success "Crontab updated successfully"
else
warning "Crontab update failed, but data was sent successfully"
fi
fi
fi
# Automatically check if crontab needs updating based on server settings
info "Checking crontab configuration..."
"$0" update-crontab
local crontab_exit_code=$?
if [[ $crontab_exit_code -eq 0 ]]; then
success "Crontab updated successfully"
elif [[ $crontab_exit_code -eq 2 ]]; then
# Already up to date - no additional message needed
true
else
warning "Crontab update failed, but data was sent successfully"
fi
else
error "Update failed: $response"
fi
else
error "Failed to send update"
error "Failed to send update (curl exit code: $curl_exit_code): $response"
fi
}
@@ -871,7 +1124,7 @@ EOF
ping_server() {
load_credentials
local response=$(curl -s -X POST \
local response=$(curl $CURL_FLAGS -X POST \
-H "Content-Type: application/json" \
-H "X-API-ID: $API_ID" \
-H "X-API-KEY: $API_KEY" \
@@ -884,7 +1137,7 @@ ping_server() {
info "Connected as host: $hostname"
fi
# Check for crontab update trigger
# Check for crontab update instructions
local should_update_crontab=$(echo "$response" | grep -o '"shouldUpdate":true' | cut -d':' -f2)
if [[ "$should_update_crontab" == "true" ]]; then
local message=$(echo "$response" | grep -o '"message":"[^"]*' | cut -d'"' -f4)
@@ -895,11 +1148,16 @@ ping_server() {
fi
if [[ "$command" == "update-crontab" ]]; then
info "Automatically updating crontab with new interval..."
if "$0" update-crontab; then
info "Updating crontab with new interval..."
"$0" update-crontab
local crontab_exit_code=$?
if [[ $crontab_exit_code -eq 0 ]]; then
success "Crontab updated successfully"
elif [[ $crontab_exit_code -eq 2 ]]; then
# Already up to date - no additional message needed
true
else
warning "Crontab update failed, but ping was successful"
warning "Crontab update failed, but data was sent successfully"
fi
fi
fi
@@ -914,7 +1172,7 @@ check_version() {
info "Checking for agent updates..."
local response=$(curl -s -X GET "$PATCHMON_SERVER/api/$API_VERSION/hosts/agent/version")
local response=$(curl $CURL_FLAGS -H "X-API-ID: $API_ID" -H "X-API-KEY: $API_KEY" -X GET "$PATCHMON_SERVER/api/$API_VERSION/hosts/agent/version")
if [[ $? -eq 0 ]]; then
local current_version=$(echo "$response" | grep -o '"currentVersion":"[^"]*' | cut -d'"' -f4)
@@ -943,59 +1201,147 @@ check_version() {
fi
}
# Check if auto-update is enabled (both globally and for this host)
check_auto_update_enabled() {
# Get settings from server using API credentials
local response=$(curl $CURL_FLAGS -H "X-API-ID: $API_ID" -H "X-API-KEY: $API_KEY" -X GET "$PATCHMON_SERVER/api/$API_VERSION/hosts/settings" 2>/dev/null)
if [[ $? -ne 0 ]]; then
return 1
fi
# Check if both global and host auto-update are enabled
local global_auto_update=$(echo "$response" | grep -o '"auto_update":true' | cut -d':' -f2)
local host_auto_update=$(echo "$response" | grep -o '"host_auto_update":true' | cut -d':' -f2)
if [[ "$global_auto_update" == "true" && "$host_auto_update" == "true" ]]; then
return 0
else
return 1
fi
}
# Check if agent update is needed (internal function for auto-update)
check_agent_update_needed() {
# Get current agent timestamp
local current_timestamp=0
if [[ -f "$0" ]]; then
current_timestamp=$(stat -c %Y "$0" 2>/dev/null || stat -f %m "$0" 2>/dev/null || echo "0")
fi
# Get server agent info using API credentials
local response=$(curl $CURL_FLAGS -H "X-API-ID: $API_ID" -H "X-API-KEY: $API_KEY" -X GET "$PATCHMON_SERVER/api/$API_VERSION/hosts/agent/timestamp" 2>/dev/null)
if [[ $? -eq 0 ]]; then
local server_version=$(echo "$response" | grep -o '"version":"[^"]*' | cut -d'"' -f4)
local server_timestamp=$(echo "$response" | grep -o '"timestamp":[0-9]*' | cut -d':' -f2)
local server_exists=$(echo "$response" | grep -o '"exists":true' | cut -d':' -f2)
if [[ "$server_exists" != "true" ]]; then
return 1
fi
# Check if update is needed
if [[ "$server_version" != "$AGENT_VERSION" ]]; then
return 0 # Update needed due to version mismatch
elif [[ "$server_timestamp" -gt "$current_timestamp" ]]; then
return 0 # Update needed due to newer timestamp
else
return 1 # No update needed
fi
else
return 1 # Failed to check
fi
}
# Check for agent updates based on version and timestamp (interactive command)
check_agent_update() {
load_credentials
info "Checking for agent updates..."
# Get current agent timestamp
local current_timestamp=0
if [[ -f "$0" ]]; then
current_timestamp=$(stat -c %Y "$0" 2>/dev/null || stat -f %m "$0" 2>/dev/null || echo "0")
fi
# Get server agent info using API credentials
local response=$(curl $CURL_FLAGS -H "X-API-ID: $API_ID" -H "X-API-KEY: $API_KEY" -X GET "$PATCHMON_SERVER/api/$API_VERSION/hosts/agent/timestamp")
if [[ $? -eq 0 ]]; then
local server_version=$(echo "$response" | grep -o '"version":"[^"]*' | cut -d'"' -f4)
local server_timestamp=$(echo "$response" | grep -o '"timestamp":[0-9]*' | cut -d':' -f2)
local server_exists=$(echo "$response" | grep -o '"exists":true' | cut -d':' -f2)
if [[ "$server_exists" != "true" ]]; then
warning "No agent script found on server"
return 1
fi
info "Current agent version: $AGENT_VERSION (timestamp: $current_timestamp)"
info "Server agent version: $server_version (timestamp: $server_timestamp)"
# Check if update is needed
if [[ "$server_version" != "$AGENT_VERSION" ]]; then
info "Version mismatch detected - update needed"
return 0
elif [[ "$server_timestamp" -gt "$current_timestamp" ]]; then
info "Server script is newer - update needed"
return 0
else
info "Agent is up to date"
return 1
fi
else
error "Failed to check agent timestamp from server"
return 1
fi
}
# Update agent script
update_agent() {
load_credentials
info "Updating agent script..."
local response=$(curl -s -X GET "$PATCHMON_SERVER/api/$API_VERSION/hosts/agent/version")
local download_url="$PATCHMON_SERVER/api/$API_VERSION/hosts/agent/download"
if [[ $? -eq 0 ]]; then
local download_url=$(echo "$response" | grep -o '"downloadUrl":"[^"]*' | cut -d'"' -f4)
if [[ -z "$download_url" ]]; then
download_url="$PATCHMON_SERVER/api/$API_VERSION/hosts/agent/download"
elif [[ "$download_url" =~ ^/ ]]; then
# If download_url is relative, prepend the server URL
download_url="$PATCHMON_SERVER$download_url"
fi
info "Downloading latest agent from: $download_url"
# Create backup of current script
cp "$0" "$0.backup.$(date +%Y%m%d_%H%M%S)"
# Download new version
if curl -s -o "/tmp/patchmon-agent-new.sh" "$download_url"; then
# Verify the downloaded script is valid
if bash -n "/tmp/patchmon-agent-new.sh" 2>/dev/null; then
# Replace current script
mv "/tmp/patchmon-agent-new.sh" "$0"
chmod +x "$0"
success "Agent updated successfully"
info "Backup saved as: $0.backup.$(date +%Y%m%d_%H%M%S)"
# Get the new version number
local new_version=$(grep '^AGENT_VERSION=' "$0" | cut -d'"' -f2)
info "Updated to version: $new_version"
# Automatically run update to send new information to PatchMon
info "Sending updated information to PatchMon..."
if "$0" update; then
success "Successfully sent updated information to PatchMon"
else
warning "Failed to send updated information to PatchMon (this is not critical)"
fi
info "Downloading latest agent from: $download_url"
# Clean up old backups (keep only last 3)
ls -t "$0.backup."* 2>/dev/null | tail -n +4 | xargs -r rm -f
# Create backup of current script
local backup_file="$0.backup.$(date +%Y%m%d_%H%M%S)"
cp "$0" "$backup_file"
# Download new version using API credentials
if curl $CURL_FLAGS -H "X-API-ID: $API_ID" -H "X-API-KEY: $API_KEY" -o "/tmp/patchmon-agent-new.sh" "$download_url"; then
# Verify the downloaded script is valid
if bash -n "/tmp/patchmon-agent-new.sh" 2>/dev/null; then
# Replace current script
mv "/tmp/patchmon-agent-new.sh" "$0"
chmod +x "$0"
success "Agent updated successfully"
info "Backup saved as: $backup_file"
# Get the new version number
local new_version=$(grep '^AGENT_VERSION=' "$0" | cut -d'"' -f2)
info "Updated to version: $new_version"
# Automatically run update to send new information to PatchMon
info "Sending updated information to PatchMon..."
if "$0" update; then
success "Successfully sent updated information to PatchMon"
else
error "Downloaded script is invalid"
rm -f "/tmp/patchmon-agent-new.sh"
warning "Failed to send updated information to PatchMon (this is not critical)"
fi
else
error "Failed to download new agent script"
error "Downloaded script is invalid"
rm -f "/tmp/patchmon-agent-new.sh"
fi
else
error "Failed to get update information"
error "Failed to download new agent script"
fi
}
@@ -1003,32 +1349,59 @@ update_agent() {
update_crontab() {
load_credentials
info "Updating crontab with current policy..."
local response=$(curl -s -X GET "$PATCHMON_SERVER/api/$API_VERSION/settings/update-interval")
local response=$(curl $CURL_FLAGS -H "X-API-ID: $API_ID" -H "X-API-KEY: $API_KEY" -X GET "$PATCHMON_SERVER/api/$API_VERSION/settings/update-interval")
if [[ $? -eq 0 ]]; then
local update_interval=$(echo "$response" | grep -o '"updateInterval":[0-9]*' | cut -d':' -f2)
# Fallback if not found
if [[ -z "$update_interval" ]]; then
update_interval=60
fi
# Normalize interval: 5-59 valid, otherwise snap to hour presets
if [[ $update_interval -lt 5 ]]; then
update_interval=5
elif [[ $update_interval -gt 1440 ]]; then
update_interval=1440
fi
if [[ -n "$update_interval" ]]; then
# Generate the expected crontab entry
local expected_crontab=""
if [[ $update_interval -eq 60 ]]; then
# Hourly updates
expected_crontab="0 * * * * /usr/local/bin/patchmon-agent.sh update >/dev/null 2>&1"
else
# Custom interval updates
if [[ $update_interval -lt 60 ]]; then
# Every N minutes (5-59)
expected_crontab="*/$update_interval * * * * /usr/local/bin/patchmon-agent.sh update >/dev/null 2>&1"
else
# Hour-based schedules
if [[ $update_interval -eq 60 ]]; then
# Hourly updates starting at current minute to spread load
local current_minute=$(date +%M)
expected_crontab="$current_minute * * * * /usr/local/bin/patchmon-agent.sh update >/dev/null 2>&1"
else
# For 120, 180, 360, 720, 1440 -> every H hours at minute 0
local hours=$((update_interval / 60))
expected_crontab="0 */$hours * * * /usr/local/bin/patchmon-agent.sh update >/dev/null 2>&1"
fi
fi
# Get current crontab
local current_crontab=$(crontab -l 2>/dev/null | grep "patchmon-agent.sh update" | head -1)
# Get current crontab (without patchmon entries)
local current_crontab_without_patchmon=$(crontab -l 2>/dev/null | grep -v "/usr/local/bin/patchmon-agent.sh update" || true)
local current_patchmon_entry=$(crontab -l 2>/dev/null | grep "/usr/local/bin/patchmon-agent.sh update" | head -1)
# Check if crontab needs updating
if [[ "$current_crontab" == "$expected_crontab" ]]; then
if [[ "$current_patchmon_entry" == "$expected_crontab" ]]; then
info "Crontab is already up to date (interval: $update_interval minutes)"
return 0
return 2 # Special return code for "already up to date"
fi
info "Setting update interval to $update_interval minutes"
echo "$expected_crontab" | crontab -
success "Crontab updated successfully"
# Combine existing cron (without patchmon entries) + new patchmon entry
{
if [[ -n "$current_crontab_without_patchmon" ]]; then
echo "$current_crontab_without_patchmon"
fi
echo "$expected_crontab"
} | crontab -
success "Crontab updated successfully (duplicates removed)"
else
error "Could not determine update interval from server"
fi
@@ -1140,7 +1513,7 @@ main() {
check_root
setup_directories
load_config
configure_credentials "$2" "$3"
configure_credentials "$2" "$3" "$4"
;;
"test")
check_root
@@ -1171,6 +1544,11 @@ main() {
load_config
check_version
;;
"check-agent-update")
setup_directories
load_config
check_agent_update
;;
"update-agent")
check_root
setup_directories
@@ -1188,22 +1566,23 @@ main() {
;;
*)
echo "PatchMon Agent v$AGENT_VERSION - API Credential Based"
echo "Usage: $0 {configure|test|update|ping|config|check-version|update-agent|update-crontab|diagnostics}"
echo "Usage: $0 {configure|test|update|ping|config|check-version|check-agent-update|update-agent|update-crontab|diagnostics}"
echo ""
echo "Commands:"
echo " configure <API_ID> <API_KEY> - Configure API credentials for this host"
echo " configure <API_ID> <API_KEY> [SERVER_URL] - Configure API credentials for this host"
echo " test - Test API credentials connectivity"
echo " update - Send package update information to server"
echo " ping - Test connectivity to server"
echo " config - Show current configuration"
echo " check-version - Check for agent updates"
echo " check-agent-update - Check for agent updates using timestamp comparison"
echo " update-agent - Update agent to latest version"
echo " update-crontab - Update crontab with current policy"
echo " diagnostics - Show detailed system diagnostics"
echo ""
echo "Setup Process:"
echo " 1. Contact your PatchMon administrator to create a host entry"
echo " 2. Run: $0 configure <API_ID> <API_KEY> (provided by admin)"
echo " 2. Run: $0 configure <API_ID> <API_KEY> [SERVER_URL] (provided by admin)"
echo " 3. Run: $0 test (to verify connection)"
echo " 4. Run: $0 update (to send initial package data)"
echo ""
@@ -1216,4 +1595,4 @@ main() {
}
# Run main function
main "$@"
main "$@"

BIN
agents/patchmon-agent-linux-386 Executable file

Binary file not shown.

BIN
agents/patchmon-agent-linux-amd64 Executable file

Binary file not shown.

BIN
agents/patchmon-agent-linux-arm Executable file

Binary file not shown.

BIN
agents/patchmon-agent-linux-arm64 Executable file

Binary file not shown.

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

496
agents/patchmon-docker-agent.sh Executable file
View File

@@ -0,0 +1,496 @@
#!/bin/bash
# PatchMon Docker Agent Script v1.3.0
# This script collects Docker container and image information and sends it to PatchMon
# Configuration
PATCHMON_SERVER="${PATCHMON_SERVER:-http://localhost:3001}"
API_VERSION="v1"
AGENT_VERSION="1.3.0"
CONFIG_FILE="/etc/patchmon/agent.conf"
CREDENTIALS_FILE="/etc/patchmon/credentials"
LOG_FILE="/var/log/patchmon-docker-agent.log"
# Curl flags placeholder (replaced by server based on SSL settings)
CURL_FLAGS=""
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Logging function
log() {
if [[ -w "$(dirname "$LOG_FILE")" ]] 2>/dev/null; then
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE" 2>/dev/null
fi
}
# Error handling
error() {
echo -e "${RED}ERROR: $1${NC}" >&2
log "ERROR: $1"
exit 1
}
# Info logging
info() {
echo -e "${BLUE} $1${NC}" >&2
log "INFO: $1"
}
# Success logging
success() {
echo -e "${GREEN}$1${NC}" >&2
log "SUCCESS: $1"
}
# Warning logging
warning() {
echo -e "${YELLOW}⚠️ $1${NC}" >&2
log "WARNING: $1"
}
# Check if Docker is installed and running
check_docker() {
if ! command -v docker &> /dev/null; then
error "Docker is not installed on this system"
fi
if ! docker info &> /dev/null; then
error "Docker daemon is not running or you don't have permission to access it. Try running with sudo."
fi
}
# Load credentials
load_credentials() {
if [[ ! -f "$CREDENTIALS_FILE" ]]; then
error "Credentials file not found at $CREDENTIALS_FILE. Please configure the main PatchMon agent first."
fi
source "$CREDENTIALS_FILE"
if [[ -z "$API_ID" ]] || [[ -z "$API_KEY" ]]; then
error "API credentials not found in $CREDENTIALS_FILE"
fi
# Use PATCHMON_URL from credentials if available, otherwise use default
if [[ -n "$PATCHMON_URL" ]]; then
PATCHMON_SERVER="$PATCHMON_URL"
fi
}
# Load configuration
load_config() {
if [[ -f "$CONFIG_FILE" ]]; then
source "$CONFIG_FILE"
if [[ -n "$SERVER_URL" ]]; then
PATCHMON_SERVER="$SERVER_URL"
fi
fi
}
# Collect Docker containers
collect_containers() {
info "Collecting Docker container information..."
local containers_json="["
local first=true
# Get all containers (running and stopped)
while IFS='|' read -r container_id name image status state created started ports; do
if [[ -z "$container_id" ]]; then
continue
fi
# Parse image name and tag
local image_name="${image%%:*}"
local image_tag="${image##*:}"
if [[ "$image_tag" == "$image_name" ]]; then
image_tag="latest"
fi
# Determine image source based on registry
local image_source="docker-hub"
if [[ "$image_name" == ghcr.io/* ]]; then
image_source="github"
elif [[ "$image_name" == registry.gitlab.com/* ]]; then
image_source="gitlab"
elif [[ "$image_name" == *"/"*"/"* ]]; then
image_source="private"
fi
# Get repository name (without registry prefix for common registries)
local image_repository="$image_name"
image_repository="${image_repository#ghcr.io/}"
image_repository="${image_repository#registry.gitlab.com/}"
# Get image ID
local full_image_id=$(docker inspect --format='{{.Image}}' "$container_id" 2>/dev/null || echo "unknown")
full_image_id="${full_image_id#sha256:}"
# Normalize status (extract just the status keyword)
local normalized_status="unknown"
if [[ "$status" =~ ^Up ]]; then
normalized_status="running"
elif [[ "$status" =~ ^Exited ]]; then
normalized_status="exited"
elif [[ "$status" =~ ^Created ]]; then
normalized_status="created"
elif [[ "$status" =~ ^Restarting ]]; then
normalized_status="restarting"
elif [[ "$status" =~ ^Paused ]]; then
normalized_status="paused"
elif [[ "$status" =~ ^Dead ]]; then
normalized_status="dead"
fi
# Parse ports
local ports_json="null"
if [[ -n "$ports" && "$ports" != "null" ]]; then
# Convert Docker port format to JSON
ports_json=$(echo "$ports" | jq -R -s -c 'split(",") | map(select(length > 0)) | map(split("->") | {(.[0]): .[1]}) | add // {}')
fi
# Convert dates to ISO 8601 format
# If date conversion fails, use null instead of invalid date string
local created_iso=$(date -d "$created" -Iseconds 2>/dev/null || echo "null")
local started_iso="null"
if [[ -n "$started" && "$started" != "null" ]]; then
started_iso=$(date -d "$started" -Iseconds 2>/dev/null || echo "null")
fi
# Add comma for JSON array
if [[ "$first" == false ]]; then
containers_json+=","
fi
first=false
# Build JSON object for this container
containers_json+="{\"container_id\":\"$container_id\","
containers_json+="\"name\":\"$name\","
containers_json+="\"image_name\":\"$image_name\","
containers_json+="\"image_tag\":\"$image_tag\","
containers_json+="\"image_repository\":\"$image_repository\","
containers_json+="\"image_source\":\"$image_source\","
containers_json+="\"image_id\":\"$full_image_id\","
containers_json+="\"status\":\"$normalized_status\","
containers_json+="\"state\":\"$state\","
containers_json+="\"ports\":$ports_json"
# Only add created_at if we have a valid date
if [[ "$created_iso" != "null" ]]; then
containers_json+=",\"created_at\":\"$created_iso\""
fi
# Only add started_at if we have a valid date
if [[ "$started_iso" != "null" ]]; then
containers_json+=",\"started_at\":\"$started_iso\""
fi
containers_json+="}"
done < <(docker ps -a --format '{{.ID}}|{{.Names}}|{{.Image}}|{{.Status}}|{{.State}}|{{.CreatedAt}}|{{.RunningFor}}|{{.Ports}}' 2>/dev/null)
containers_json+="]"
echo "$containers_json"
}
# Collect Docker images
collect_images() {
info "Collecting Docker image information..."
local images_json="["
local first=true
while IFS='|' read -r repository tag image_id created size digest; do
if [[ -z "$repository" || "$repository" == "<none>" ]]; then
continue
fi
# Clean up tag
if [[ -z "$tag" || "$tag" == "<none>" ]]; then
tag="latest"
fi
# Clean image ID
image_id="${image_id#sha256:}"
# Determine source
local source="docker-hub"
if [[ "$repository" == ghcr.io/* ]]; then
source="github"
elif [[ "$repository" == registry.gitlab.com/* ]]; then
source="gitlab"
elif [[ "$repository" == *"/"*"/"* ]]; then
source="private"
fi
# Convert size to bytes (approximate)
local size_bytes=0
if [[ "$size" =~ ([0-9.]+)([KMGT]?B) ]]; then
local num="${BASH_REMATCH[1]}"
local unit="${BASH_REMATCH[2]}"
case "$unit" in
KB) size_bytes=$(echo "$num * 1024" | bc | cut -d. -f1) ;;
MB) size_bytes=$(echo "$num * 1024 * 1024" | bc | cut -d. -f1) ;;
GB) size_bytes=$(echo "$num * 1024 * 1024 * 1024" | bc | cut -d. -f1) ;;
TB) size_bytes=$(echo "$num * 1024 * 1024 * 1024 * 1024" | bc | cut -d. -f1) ;;
B) size_bytes=$(echo "$num" | cut -d. -f1) ;;
esac
fi
# Convert created date to ISO 8601
# If date conversion fails, use null instead of invalid date string
local created_iso=$(date -d "$created" -Iseconds 2>/dev/null || echo "null")
# Add comma for JSON array
if [[ "$first" == false ]]; then
images_json+=","
fi
first=false
# Build JSON object for this image
images_json+="{\"repository\":\"$repository\","
images_json+="\"tag\":\"$tag\","
images_json+="\"image_id\":\"$image_id\","
images_json+="\"source\":\"$source\","
images_json+="\"size_bytes\":$size_bytes"
# Only add created_at if we have a valid date
if [[ "$created_iso" != "null" ]]; then
images_json+=",\"created_at\":\"$created_iso\""
fi
# Only add digest if present
if [[ -n "$digest" && "$digest" != "<none>" ]]; then
images_json+=",\"digest\":\"$digest\""
fi
images_json+="}"
done < <(docker images --format '{{.Repository}}|{{.Tag}}|{{.ID}}|{{.CreatedAt}}|{{.Size}}|{{.Digest}}' --no-trunc 2>/dev/null)
images_json+="]"
echo "$images_json"
}
# Check for image updates
check_image_updates() {
info "Checking for image updates..."
local updates_json="["
local first=true
local update_count=0
# Get all images
while IFS='|' read -r repository tag image_id digest; do
if [[ -z "$repository" || "$repository" == "<none>" || "$tag" == "<none>" ]]; then
continue
fi
# Skip checking 'latest' tag as it's always considered current by name
# We'll still check digest though
local full_image="${repository}:${tag}"
# Try to get remote digest from registry
# Use docker manifest inspect to avoid pulling the image
local remote_digest=$(docker manifest inspect "$full_image" 2>/dev/null | jq -r '.config.digest // .manifests[0].digest // empty' 2>/dev/null)
if [[ -z "$remote_digest" ]]; then
# If manifest inspect fails, try buildx imagetools inspect (works for more registries)
remote_digest=$(docker buildx imagetools inspect "$full_image" 2>/dev/null | grep -oP 'Digest:\s*\K\S+' | head -1)
fi
# Clean up digests for comparison
local local_digest="${digest#sha256:}"
remote_digest="${remote_digest#sha256:}"
# If we got a remote digest and it's different from local, there's an update
if [[ -n "$remote_digest" && -n "$local_digest" && "$remote_digest" != "$local_digest" ]]; then
if [[ "$first" == false ]]; then
updates_json+=","
fi
first=false
# Build update JSON object
updates_json+="{\"repository\":\"$repository\","
updates_json+="\"current_tag\":\"$tag\","
updates_json+="\"available_tag\":\"$tag\","
updates_json+="\"current_digest\":\"$local_digest\","
updates_json+="\"available_digest\":\"$remote_digest\","
updates_json+="\"image_id\":\"${image_id#sha256:}\""
updates_json+="}"
((update_count++))
fi
done < <(docker images --format '{{.Repository}}|{{.Tag}}|{{.ID}}|{{.Digest}}' --no-trunc 2>/dev/null)
updates_json+="]"
info "Found $update_count image update(s) available"
echo "$updates_json"
}
# Send Docker data to server
send_docker_data() {
load_credentials
info "Collecting Docker data..."
local containers=$(collect_containers)
local images=$(collect_images)
local updates=$(check_image_updates)
# Count collected items
local container_count=$(echo "$containers" | jq '. | length' 2>/dev/null || echo "0")
local image_count=$(echo "$images" | jq '. | length' 2>/dev/null || echo "0")
local update_count=$(echo "$updates" | jq '. | length' 2>/dev/null || echo "0")
info "Found $container_count containers, $image_count images, and $update_count update(s) available"
# Build payload
local payload="{\"apiId\":\"$API_ID\",\"apiKey\":\"$API_KEY\",\"containers\":$containers,\"images\":$images,\"updates\":$updates}"
# Send to server
info "Sending Docker data to PatchMon server..."
local response=$(curl $CURL_FLAGS -s -w "\n%{http_code}" -X POST \
-H "Content-Type: application/json" \
-d "$payload" \
"${PATCHMON_SERVER}/api/${API_VERSION}/docker/collect" 2>&1)
local http_code=$(echo "$response" | tail -n1)
local response_body=$(echo "$response" | head -n-1)
if [[ "$http_code" == "200" ]]; then
success "Docker data sent successfully!"
log "Docker data sent: $container_count containers, $image_count images"
return 0
else
error "Failed to send Docker data. HTTP Status: $http_code\nResponse: $response_body"
fi
}
# Test Docker data collection without sending
test_collection() {
check_docker
info "Testing Docker data collection (dry run)..."
echo ""
local containers=$(collect_containers)
local images=$(collect_images)
local updates=$(check_image_updates)
local container_count=$(echo "$containers" | jq '. | length' 2>/dev/null || echo "0")
local image_count=$(echo "$images" | jq '. | length' 2>/dev/null || echo "0")
local update_count=$(echo "$updates" | jq '. | length' 2>/dev/null || echo "0")
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo -e "${GREEN}Docker Data Collection Results${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo -e "Containers found: ${GREEN}$container_count${NC}"
echo -e "Images found: ${GREEN}$image_count${NC}"
echo -e "Updates available: ${YELLOW}$update_count${NC}"
echo ""
if command -v jq &> /dev/null; then
echo "━━━ Containers ━━━"
echo "$containers" | jq -r '.[] | "\(.name) (\(.status)) - \(.image_name):\(.image_tag)"' | head -10
if [[ $container_count -gt 10 ]]; then
echo "... and $((container_count - 10)) more"
fi
echo ""
echo "━━━ Images ━━━"
echo "$images" | jq -r '.[] | "\(.repository):\(.tag) (\(.size_bytes / 1024 / 1024 | floor)MB)"' | head -10
if [[ $image_count -gt 10 ]]; then
echo "... and $((image_count - 10)) more"
fi
if [[ $update_count -gt 0 ]]; then
echo ""
echo "━━━ Available Updates ━━━"
echo "$updates" | jq -r '.[] | "\(.repository):\(.current_tag) → \(.available_tag)"'
fi
fi
echo ""
success "Test collection completed successfully!"
}
# Show help
show_help() {
cat << EOF
PatchMon Docker Agent v${AGENT_VERSION}
This agent collects Docker container and image information and sends it to PatchMon.
USAGE:
$0 <command>
COMMANDS:
collect Collect and send Docker data to PatchMon server
test Test Docker data collection without sending (dry run)
help Show this help message
REQUIREMENTS:
- Docker must be installed and running
- Main PatchMon agent must be configured first
- Credentials file must exist at $CREDENTIALS_FILE
EXAMPLES:
# Test collection (dry run)
sudo $0 test
# Collect and send Docker data
sudo $0 collect
SCHEDULING:
To run this agent automatically, add a cron job:
# Run every 5 minutes
*/5 * * * * /usr/local/bin/patchmon-docker-agent.sh collect
# Run every hour
0 * * * * /usr/local/bin/patchmon-docker-agent.sh collect
FILES:
Config: $CONFIG_FILE
Credentials: $CREDENTIALS_FILE
Log: $LOG_FILE
EOF
}
# Main function
main() {
case "$1" in
"collect")
check_docker
load_config
send_docker_data
;;
"test")
check_docker
load_config
test_collection
;;
"help"|"--help"|"-h"|"")
show_help
;;
*)
error "Unknown command: $1\n\nRun '$0 help' for usage information."
;;
esac
}
# Run main function
main "$@"

View File

@@ -1,10 +1,15 @@
#!/bin/bash
# PatchMon Agent Installation Script
# Usage: curl -sSL {PATCHMON_URL}/api/v1/hosts/install | bash -s -- {PATCHMON_URL} {API_ID} {API_KEY}
# Usage: curl -s {PATCHMON_URL}/api/v1/hosts/install -H "X-API-ID: {API_ID}" -H "X-API-KEY: {API_KEY}" | bash
set -e
# This placeholder will be dynamically replaced by the server when serving this
# script based on the "ignore SSL self-signed" setting. If set to -k, curl will
# ignore certificate validation. Otherwise, it will be empty for secure default.
# CURL_FLAGS is now set via environment variables by the backend
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
@@ -35,122 +40,523 @@ if [[ $EUID -ne 0 ]]; then
error "This script must be run as root (use sudo)"
fi
# Default server URL (will be replaced by backend with configured URL)
PATCHMON_URL="http://localhost:3001"
# Parse arguments
if [[ $# -ne 3 ]]; then
echo "Usage: curl -sSL {PATCHMON_URL}/api/v1/hosts/install | bash -s -- {PATCHMON_URL} {API_ID} {API_KEY}"
# Verify system datetime and timezone
verify_datetime() {
info "🕐 Verifying system datetime and timezone..."
# Get current system time
local system_time=$(date)
local timezone=$(timedatectl show --property=Timezone --value 2>/dev/null || echo "Unknown")
# Display current datetime info
echo ""
echo "Example:"
echo "curl -sSL http://patchmon.example.com/api/v1/hosts/install | bash -s -- http://patchmon.example.com patchmon_1a2b3c4d abcd1234567890abcdef1234567890abcdef1234567890abcdef1234567890"
echo -e "${BLUE}📅 Current System Date/Time:${NC}"
echo " • Date/Time: $system_time"
echo " • Timezone: $timezone"
echo ""
echo "Contact your PatchMon administrator to get your API credentials."
exit 1
# Check if we can read from stdin (interactive terminal)
if [[ -t 0 ]]; then
# Interactive terminal - ask user
read -p "Does this date/time look correct to you? (y/N): " -r response
if [[ "$response" =~ ^[Yy]$ ]]; then
success "✅ Date/time verification passed"
echo ""
return 0
else
echo ""
echo -e "${RED}❌ Date/time verification failed${NC}"
echo ""
echo -e "${YELLOW}💡 Please fix the date/time and re-run the installation script:${NC}"
echo " sudo timedatectl set-time 'YYYY-MM-DD HH:MM:SS'"
echo " sudo timedatectl set-timezone 'America/New_York' # or your timezone"
echo " sudo timedatectl list-timezones # to see available timezones"
echo ""
echo -e "${BLUE} After fixing the date/time, re-run this installation script.${NC}"
error "Installation cancelled - please fix date/time and re-run"
fi
else
# Non-interactive (piped from curl) - show warning and continue
echo -e "${YELLOW}⚠️ Non-interactive installation detected${NC}"
echo ""
echo "Please verify the date/time shown above is correct."
echo "If the date/time is incorrect, it may cause issues with:"
echo " • Logging timestamps"
echo " • Scheduled updates"
echo " • Data synchronization"
echo ""
echo -e "${GREEN}✅ Continuing with installation...${NC}"
success "✅ Date/time verification completed (assumed correct)"
echo ""
fi
}
# Run datetime verification
verify_datetime
# Clean up old files (keep only last 3 of each type)
cleanup_old_files() {
# Clean up old credential backups
ls -t /etc/patchmon/credentials.yml.backup.* 2>/dev/null | tail -n +4 | xargs -r rm -f
# Clean up old config backups
ls -t /etc/patchmon/config.yml.backup.* 2>/dev/null | tail -n +4 | xargs -r rm -f
# Clean up old agent backups
ls -t /usr/local/bin/patchmon-agent.backup.* 2>/dev/null | tail -n +4 | xargs -r rm -f
# Clean up old log files
ls -t /etc/patchmon/logs/patchmon-agent.log.old.* 2>/dev/null | tail -n +4 | xargs -r rm -f
# Clean up old shell script backups (if any exist)
ls -t /usr/local/bin/patchmon-agent.sh.backup.* 2>/dev/null | tail -n +4 | xargs -r rm -f
# Clean up old credentials backups (if any exist)
ls -t /etc/patchmon/credentials.backup.* 2>/dev/null | tail -n +4 | xargs -r rm -f
}
# Run cleanup at start
cleanup_old_files
# Generate or retrieve machine ID
get_machine_id() {
# Try multiple sources for machine ID
if [[ -f /etc/machine-id ]]; then
cat /etc/machine-id
elif [[ -f /var/lib/dbus/machine-id ]]; then
cat /var/lib/dbus/machine-id
else
# Fallback: generate from hardware info (less ideal but works)
echo "patchmon-$(cat /sys/class/dmi/id/product_uuid 2>/dev/null || cat /proc/sys/kernel/random/uuid)"
fi
}
# Parse arguments from environment (passed via HTTP headers)
if [[ -z "$PATCHMON_URL" ]] || [[ -z "$API_ID" ]] || [[ -z "$API_KEY" ]]; then
error "Missing required parameters. This script should be called via the PatchMon web interface."
fi
PATCHMON_URL="$1"
API_ID="$2"
API_KEY="$3"
# Validate inputs
if [[ ! "$PATCHMON_URL" =~ ^https?:// ]]; then
error "Invalid URL format. Must start with http:// or https://"
# Parse architecture parameter (default to amd64)
ARCHITECTURE="${ARCHITECTURE:-amd64}"
if [[ "$ARCHITECTURE" != "amd64" && "$ARCHITECTURE" != "386" && "$ARCHITECTURE" != "arm64" ]]; then
error "Invalid architecture '$ARCHITECTURE'. Must be one of: amd64, 386, arm64"
fi
if [[ ! "$API_ID" =~ ^patchmon_[a-f0-9]{16}$ ]]; then
error "Invalid API ID format. API ID should be in format: patchmon_xxxxxxxxxxxxxxxx"
# Check if --force flag is set (for bypassing broken packages)
FORCE_INSTALL="${FORCE_INSTALL:-false}"
if [[ "$*" == *"--force"* ]] || [[ "$FORCE_INSTALL" == "true" ]]; then
FORCE_INSTALL="true"
warning "⚠️ Force mode enabled - will bypass broken packages"
fi
if [[ ! "$API_KEY" =~ ^[a-f0-9]{64}$ ]]; then
error "Invalid API Key format. API Key should be 64 hexadecimal characters."
# Get unique machine ID for this host
MACHINE_ID=$(get_machine_id)
export MACHINE_ID
info "🚀 Starting PatchMon Agent Installation..."
info "📋 Server: $PATCHMON_URL"
info "🔑 API ID: ${API_ID:0:16}..."
info "🆔 Machine ID: ${MACHINE_ID:0:16}..."
info "🏗️ Architecture: $ARCHITECTURE"
# Display diagnostic information
echo ""
echo -e "${BLUE}🔧 Installation Diagnostics:${NC}"
echo " • URL: $PATCHMON_URL"
echo " • CURL FLAGS: $CURL_FLAGS"
echo " • API ID: ${API_ID:0:16}..."
echo " • API Key: ${API_KEY:0:16}..."
echo " • Architecture: $ARCHITECTURE"
echo ""
# Install required dependencies
info "📦 Installing required dependencies..."
echo ""
# Function to check if a command exists
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Function to install packages with error handling
install_apt_packages() {
local packages=("$@")
local missing_packages=()
# Check which packages are missing
for pkg in "${packages[@]}"; do
if ! command_exists "$pkg"; then
missing_packages+=("$pkg")
fi
done
if [ ${#missing_packages[@]} -eq 0 ]; then
success "All required packages are already installed"
return 0
fi
info "Need to install: ${missing_packages[*]}"
# Build apt-get command based on force mode
local apt_cmd="apt-get install ${missing_packages[*]} -y"
if [[ "$FORCE_INSTALL" == "true" ]]; then
info "Using force mode - bypassing broken packages..."
apt_cmd="$apt_cmd -o APT::Get::Fix-Broken=false -o DPkg::Options::=\"--force-confold\" -o DPkg::Options::=\"--force-confdef\""
fi
# Try to install packages
if eval "$apt_cmd" 2>&1 | tee /tmp/patchmon_apt_install.log; then
success "Packages installed successfully"
return 0
else
warning "Package installation encountered issues, checking if required tools are available..."
# Verify critical dependencies are actually available
local all_ok=true
for pkg in "${packages[@]}"; do
if ! command_exists "$pkg"; then
if [[ "$FORCE_INSTALL" == "true" ]]; then
error "Critical dependency '$pkg' is not available even with --force. Please install manually."
else
error "Critical dependency '$pkg' is not available. Try again with --force flag or install manually: apt-get install $pkg"
fi
all_ok=false
fi
done
if $all_ok; then
success "All required tools are available despite installation warnings"
return 0
else
return 1
fi
fi
}
# Detect package manager and install jq and curl
if command -v apt-get >/dev/null 2>&1; then
# Debian/Ubuntu
info "Detected apt-get (Debian/Ubuntu)"
echo ""
# Check for broken packages
if dpkg -l | grep -q "^iH\|^iF" 2>/dev/null; then
if [[ "$FORCE_INSTALL" == "true" ]]; then
warning "Detected broken packages on system - force mode will work around them"
else
warning "⚠️ Broken packages detected on system"
warning "If installation fails, retry with: curl -s {URL}/api/v1/hosts/install --force -H ..."
fi
fi
info "Updating package lists..."
apt-get update || true
echo ""
info "Installing jq, curl, and bc..."
install_apt_packages jq curl bc
elif command -v yum >/dev/null 2>&1; then
# CentOS/RHEL 7
info "Detected yum (CentOS/RHEL 7)"
echo ""
info "Installing jq, curl, and bc..."
yum install -y jq curl bc
elif command -v dnf >/dev/null 2>&1; then
# CentOS/RHEL 8+/Fedora
info "Detected dnf (CentOS/RHEL 8+/Fedora)"
echo ""
info "Installing jq, curl, and bc..."
dnf install -y jq curl bc
elif command -v zypper >/dev/null 2>&1; then
# openSUSE
info "Detected zypper (openSUSE)"
echo ""
info "Installing jq, curl, and bc..."
zypper install -y jq curl bc
elif command -v pacman >/dev/null 2>&1; then
# Arch Linux
info "Detected pacman (Arch Linux)"
echo ""
info "Installing jq, curl, and bc..."
pacman -S --noconfirm jq curl bc
elif command -v apk >/dev/null 2>&1; then
# Alpine Linux
info "Detected apk (Alpine Linux)"
echo ""
info "Installing jq, curl, and bc..."
apk add --no-cache jq curl bc
else
warning "Could not detect package manager. Please ensure 'jq', 'curl', and 'bc' are installed manually."
fi
info "🚀 Installing PatchMon Agent..."
info " Server: $PATCHMON_URL"
info " API ID: $API_ID"
echo ""
success "Dependencies installation completed"
echo ""
# Create patchmon directory
info "📁 Creating configuration directory..."
mkdir -p /etc/patchmon
# Step 1: Handle existing configuration directory
info "📁 Setting up configuration directory..."
# Download the agent script
info "📥 Downloading PatchMon agent script..."
curl -sSL "$PATCHMON_URL/api/v1/hosts/agent/download" -o /usr/local/bin/patchmon-agent.sh
chmod +x /usr/local/bin/patchmon-agent.sh
# Check if configuration directory already exists
if [[ -d "/etc/patchmon" ]]; then
warning "⚠️ Configuration directory already exists at /etc/patchmon"
warning "⚠️ Preserving existing configuration files"
# List existing files for user awareness
info "📋 Existing files in /etc/patchmon:"
ls -la /etc/patchmon/ 2>/dev/null | grep -v "^total" | while read -r line; do
echo " $line"
done
else
info "📁 Creating new configuration directory..."
mkdir -p /etc/patchmon
fi
# Get the agent version from the downloaded script
AGENT_VERSION=$(grep '^AGENT_VERSION=' /usr/local/bin/patchmon-agent.sh | cut -d'"' -f2)
# Step 2: Create configuration files
info "🔐 Creating configuration files..."
# Check if config file already exists
if [[ -f "/etc/patchmon/config.yml" ]]; then
warning "⚠️ Config file already exists at /etc/patchmon/config.yml"
warning "⚠️ Moving existing file out of the way for fresh installation"
# Clean up old config backups (keep only last 3)
ls -t /etc/patchmon/config.yml.backup.* 2>/dev/null | tail -n +4 | xargs -r rm -f
# Move existing file out of the way
mv /etc/patchmon/config.yml /etc/patchmon/config.yml.backup.$(date +%Y%m%d_%H%M%S)
info "📋 Moved existing config to: /etc/patchmon/config.yml.backup.$(date +%Y%m%d_%H%M%S)"
fi
# Check if credentials file already exists
if [[ -f "/etc/patchmon/credentials.yml" ]]; then
warning "⚠️ Credentials file already exists at /etc/patchmon/credentials.yml"
warning "⚠️ Moving existing file out of the way for fresh installation"
# Clean up old credential backups (keep only last 3)
ls -t /etc/patchmon/credentials.yml.backup.* 2>/dev/null | tail -n +4 | xargs -r rm -f
# Move existing file out of the way
mv /etc/patchmon/credentials.yml /etc/patchmon/credentials.yml.backup.$(date +%Y%m%d_%H%M%S)
info "📋 Moved existing credentials to: /etc/patchmon/credentials.yml.backup.$(date +%Y%m%d_%H%M%S)"
fi
# Clean up old credentials file if it exists (from previous installations)
if [[ -f "/etc/patchmon/credentials" ]]; then
warning "⚠️ Found old credentials file, removing it..."
rm -f /etc/patchmon/credentials
info "📋 Removed old credentials file"
fi
# Create main config file
cat > /etc/patchmon/config.yml << EOF
# PatchMon Agent Configuration
# Generated on $(date)
patchmon_server: "$PATCHMON_URL"
api_version: "v1"
credentials_file: "/etc/patchmon/credentials.yml"
log_file: "/etc/patchmon/logs/patchmon-agent.log"
log_level: "info"
EOF
# Create credentials file
cat > /etc/patchmon/credentials.yml << EOF
# PatchMon API Credentials
# Generated on $(date)
api_id: "$API_ID"
api_key: "$API_KEY"
EOF
chmod 600 /etc/patchmon/config.yml
chmod 600 /etc/patchmon/credentials.yml
# Step 3: Download the PatchMon agent binary using API credentials
info "📥 Downloading PatchMon agent binary..."
# Determine the binary filename based on architecture
BINARY_NAME="patchmon-agent-linux-${ARCHITECTURE}"
# Check if agent binary already exists
if [[ -f "/usr/local/bin/patchmon-agent" ]]; then
warning "⚠️ Agent binary already exists at /usr/local/bin/patchmon-agent"
warning "⚠️ Moving existing file out of the way for fresh installation"
# Clean up old agent backups (keep only last 3)
ls -t /usr/local/bin/patchmon-agent.backup.* 2>/dev/null | tail -n +4 | xargs -r rm -f
# Move existing file out of the way
mv /usr/local/bin/patchmon-agent /usr/local/bin/patchmon-agent.backup.$(date +%Y%m%d_%H%M%S)
info "📋 Moved existing agent to: /usr/local/bin/patchmon-agent.backup.$(date +%Y%m%d_%H%M%S)"
fi
# Clean up old shell script if it exists (from previous installations)
if [[ -f "/usr/local/bin/patchmon-agent.sh" ]]; then
warning "⚠️ Found old shell script agent, removing it..."
rm -f /usr/local/bin/patchmon-agent.sh
info "📋 Removed old shell script agent"
fi
# Download the binary
curl $CURL_FLAGS \
-H "X-API-ID: $API_ID" \
-H "X-API-KEY: $API_KEY" \
"$PATCHMON_URL/api/v1/hosts/agent/download?arch=$ARCHITECTURE&force=binary" \
-o /usr/local/bin/patchmon-agent
chmod +x /usr/local/bin/patchmon-agent
# Get the agent version from the binary
AGENT_VERSION=$(/usr/local/bin/patchmon-agent version 2>/dev/null || echo "Unknown")
info "📋 Agent version: $AGENT_VERSION"
# Get expected agent version from server
EXPECTED_VERSION=$(curl -s "$PATCHMON_URL/api/v1/hosts/agent/version" | grep -o '"currentVersion":"[^"]*' | cut -d'"' -f4 2>/dev/null || echo "Unknown")
if [[ "$EXPECTED_VERSION" != "Unknown" ]]; then
info "📋 Expected version: $EXPECTED_VERSION"
if [[ "$AGENT_VERSION" != "$EXPECTED_VERSION" ]]; then
warning "⚠️ Agent version mismatch! Installed: $AGENT_VERSION, Expected: $EXPECTED_VERSION"
# Handle existing log files and create log directory
info "📁 Setting up log directory..."
# Create log directory if it doesn't exist
mkdir -p /etc/patchmon/logs
# Handle existing log files
if [[ -f "/etc/patchmon/logs/patchmon-agent.log" ]]; then
warning "⚠️ Existing log file found at /etc/patchmon/logs/patchmon-agent.log"
warning "⚠️ Rotating log file for fresh start"
# Rotate the log file
mv /etc/patchmon/logs/patchmon-agent.log /etc/patchmon/logs/patchmon-agent.log.old.$(date +%Y%m%d_%H%M%S)
info "📋 Log file rotated to: /etc/patchmon/logs/patchmon-agent.log.old.$(date +%Y%m%d_%H%M%S)"
fi
# Step 4: Test the configuration
# Check if this machine is already enrolled
info "🔍 Checking if machine is already enrolled..."
existing_check=$(curl $CURL_FLAGS -s -X POST \
-H "X-API-ID: $API_ID" \
-H "X-API-KEY: $API_KEY" \
-H "Content-Type: application/json" \
-d "{\"machine_id\": \"$MACHINE_ID\"}" \
"$PATCHMON_URL/api/v1/hosts/check-machine-id" \
-w "\n%{http_code}" 2>&1)
http_code=$(echo "$existing_check" | tail -n 1)
response_body=$(echo "$existing_check" | sed '$d')
if [[ "$http_code" == "200" ]]; then
already_enrolled=$(echo "$response_body" | jq -r '.exists' 2>/dev/null || echo "false")
if [[ "$already_enrolled" == "true" ]]; then
warning "⚠️ This machine is already enrolled in PatchMon"
info "Machine ID: $MACHINE_ID"
info "Existing host: $(echo "$response_body" | jq -r '.host.friendly_name' 2>/dev/null)"
info ""
info "The agent will be reinstalled/updated with existing credentials."
echo ""
else
success "✅ Machine not yet enrolled - proceeding with installation"
fi
fi
# Get update interval policy from server
UPDATE_INTERVAL=$(curl -s "$PATCHMON_URL/api/v1/settings/update-interval" | grep -o '"updateInterval":[0-9]*' | cut -d':' -f2 2>/dev/null || echo "60")
info "📋 Update interval: $UPDATE_INTERVAL minutes"
info "🧪 Testing API credentials and connectivity..."
if /usr/local/bin/patchmon-agent ping; then
success "✅ TEST: API credentials are valid and server is reachable"
else
error "❌ Failed to validate API credentials or reach server"
fi
# Create credentials file
info "🔐 Setting up API credentials..."
cat > /etc/patchmon/credentials << EOF
# PatchMon API Credentials
# Generated on $(date)
PATCHMON_URL="$PATCHMON_URL"
API_ID="$API_ID"
API_KEY="$API_KEY"
# Step 5: Send initial data and setup systemd service
info "📊 Sending initial package data to server..."
if /usr/local/bin/patchmon-agent report; then
success "✅ UPDATE: Initial package data sent successfully"
else
warning "⚠️ Failed to send initial data. You can retry later with: /usr/local/bin/patchmon-agent report"
fi
# Step 6: Setup systemd service for WebSocket connection
info "🔧 Setting up systemd service..."
# Stop and disable existing service if it exists
if systemctl is-active --quiet patchmon-agent.service 2>/dev/null; then
warning "⚠️ Stopping existing PatchMon agent service..."
systemctl stop patchmon-agent.service
fi
if systemctl is-enabled --quiet patchmon-agent.service 2>/dev/null; then
warning "⚠️ Disabling existing PatchMon agent service..."
systemctl disable patchmon-agent.service
fi
# Create systemd service file
cat > /etc/systemd/system/patchmon-agent.service << EOF
[Unit]
Description=PatchMon Agent Service
After=network.target
Wants=network.target
[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/patchmon-agent serve
Restart=always
RestartSec=10
WorkingDirectory=/etc/patchmon
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=patchmon-agent
[Install]
WantedBy=multi-user.target
EOF
chmod 600 /etc/patchmon/credentials
# Clean up old crontab entries if they exist (from previous installations)
if crontab -l 2>/dev/null | grep -q "patchmon-agent"; then
warning "⚠️ Found old crontab entries, removing them..."
crontab -l 2>/dev/null | grep -v "patchmon-agent" | crontab -
info "📋 Removed old crontab entries"
fi
# Test the configuration
info "🧪 Testing configuration..."
if /usr/local/bin/patchmon-agent.sh test; then
success "Configuration test passed!"
# Reload systemd and enable/start the service
systemctl daemon-reload
systemctl enable patchmon-agent.service
systemctl start patchmon-agent.service
# Check if service started successfully
if systemctl is-active --quiet patchmon-agent.service; then
success "✅ PatchMon Agent service started successfully"
info "🔗 WebSocket connection established"
else
error "Configuration test failed. Please check your credentials."
warning "⚠️ Service may have failed to start. Check status with: systemctl status patchmon-agent"
fi
# Send initial update
info "📊 Sending initial package data..."
if /usr/local/bin/patchmon-agent.sh update; then
success "Initial package data sent successfully!"
else
warning "Initial package data failed, but agent is configured. You can run 'patchmon-agent.sh update' manually."
fi
# Setup crontab for automatic updates
info "⏰ Setting up automatic updates every $UPDATE_INTERVAL minutes..."
if [[ $UPDATE_INTERVAL -eq 60 ]]; then
# Hourly updates
echo "0 * * * * /usr/local/bin/patchmon-agent.sh update >/dev/null 2>&1" | crontab -
else
# Custom interval updates
echo "*/$UPDATE_INTERVAL * * * * /usr/local/bin/patchmon-agent.sh update >/dev/null 2>&1" | crontab -
fi
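# Worked example (sketch): with UPDATE_INTERVAL=30 the entry written above is
#   */30 * * * * /usr/local/bin/patchmon-agent.sh update >/dev/null 2>&1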
success "🎉 PatchMon Agent installation complete!"
# Installation complete
success "🎉 PatchMon Agent installation completed successfully!"
echo ""
echo "📋 Installation Summary:"
echo " • Agent installed: /usr/local/bin/patchmon-agent.sh"
echo " • Agent version: $AGENT_VERSION"
if [[ "$EXPECTED_VERSION" != "Unknown" ]]; then
echo "Expected version: $EXPECTED_VERSION"
fi
echo " • Config directory: /etc/patchmon/"
echo " • Credentials file: /etc/patchmon/credentials"
echo "Automatic updates: Every $UPDATE_INTERVAL minutes via crontab"
echo " • View logs: tail -f /var/log/patchmon-agent.sh"
echo ""
echo "🔧 Manual commands:"
echo " • Test connection: patchmon-agent.sh test"
echo " • Send update: patchmon-agent.sh update"
echo " • Check status: patchmon-agent.sh ping"
echo ""
success "Your host is now connected to PatchMon!"
echo -e "${GREEN}📋 Installation Summary:${NC}"
echo " • Configuration directory: /etc/patchmon"
echo " • Agent binary installed: /usr/local/bin/patchmon-agent"
echo " • Architecture: $ARCHITECTURE"
echo " • Dependencies installed: jq, curl, bc"
echo " • Systemd service configured and running"
echo " • API credentials configured and tested"
echo " • WebSocket connection established"
echo " • Logs directory: /etc/patchmon/logs"
# Check for moved files and show them
MOVED_FILES=$(ls /etc/patchmon/credentials.yml.backup.* /etc/patchmon/config.yml.backup.* /usr/local/bin/patchmon-agent.backup.* /etc/patchmon/logs/patchmon-agent.log.old.* /usr/local/bin/patchmon-agent.sh.backup.* /etc/patchmon/credentials.backup.* 2>/dev/null || true)
if [[ -n "$MOVED_FILES" ]]; then
echo ""
echo -e "${YELLOW}📋 Files Moved for Fresh Installation:${NC}"
echo "$MOVED_FILES" | while read -r moved_file; do
echo "$moved_file"
done
echo ""
echo -e "${BLUE}💡 Note: Old files are automatically cleaned up (keeping last 3)${NC}"
fi
echo ""
echo -e "${BLUE}🔧 Management Commands:${NC}"
echo " • Test connection: /usr/local/bin/patchmon-agent ping"
echo " • Manual report: /usr/local/bin/patchmon-agent report"
echo " • Check status: /usr/local/bin/patchmon-agent diagnostics"
echo " • Service status: systemctl status patchmon-agent"
echo " • Service logs: journalctl -u patchmon-agent -f"
echo " • Restart service: systemctl restart patchmon-agent"
echo ""
success "✅ Your system is now being monitored by PatchMon!"

agents/patchmon_remove.sh Executable file

@@ -0,0 +1,222 @@
#!/bin/bash
# PatchMon Agent Removal Script
# Usage: curl -s {PATCHMON_URL}/api/v1/hosts/remove | bash
# This script completely removes PatchMon from the system
set -e
# This placeholder will be dynamically replaced by the server when serving this
# script based on the "ignore SSL self-signed" setting for any curl calls in
# the future (left for consistency with the install script).
CURL_FLAGS=""
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Functions
error() {
echo -e "${RED}❌ ERROR: $1${NC}" >&2
exit 1
}
info() {
echo -e "${BLUE} $1${NC}"
}
success() {
echo -e "${GREEN}$1${NC}"
}
warning() {
echo -e "${YELLOW}⚠️ $1${NC}"
}
# Check if running as root
if [[ $EUID -ne 0 ]]; then
error "This script must be run as root (use sudo)"
fi
info "🗑️ Starting PatchMon Agent Removal..."
echo ""
# Step 1: Stop any running PatchMon processes
info "🛑 Stopping PatchMon processes..."
if pgrep -f "patchmon-agent.sh" >/dev/null; then
warning "Found running PatchMon processes, stopping them..."
pkill -f "patchmon-agent.sh" || true
sleep 2
success "PatchMon processes stopped"
else
info "No running PatchMon processes found"
fi
# Step 2: Remove crontab entries
info "📅 Removing PatchMon crontab entries..."
if crontab -l 2>/dev/null | grep -q "patchmon-agent.sh"; then
warning "Found PatchMon crontab entries, removing them..."
crontab -l 2>/dev/null | grep -v "patchmon-agent.sh" | crontab -
success "Crontab entries removed"
else
info "No PatchMon crontab entries found"
fi
# Step 3: Remove agent script
info "📄 Removing agent script..."
if [[ -f "/usr/local/bin/patchmon-agent.sh" ]]; then
warning "Removing agent script: /usr/local/bin/patchmon-agent.sh"
rm -f /usr/local/bin/patchmon-agent.sh
success "Agent script removed"
else
info "Agent script not found"
fi
# Step 4: Remove configuration directory and files
info "📁 Removing configuration files..."
if [[ -d "/etc/patchmon" ]]; then
warning "Removing configuration directory: /etc/patchmon"
# Show what's being removed
info "📋 Files in /etc/patchmon:"
ls -la /etc/patchmon/ 2>/dev/null | grep -v "^total" | while read -r line; do
echo " $line"
done
# Remove the directory
rm -rf /etc/patchmon
success "Configuration directory removed"
else
info "Configuration directory not found"
fi
# Step 5: Remove log files
info "📝 Removing log files..."
if [[ -f "/var/log/patchmon-agent.log" ]]; then
warning "Removing log file: /var/log/patchmon-agent.log"
rm -f /var/log/patchmon-agent.log
success "Log file removed"
else
info "Log file not found"
fi
# Step 6: Clean up backup files (optional)
info "🧹 Cleaning up backup files..."
BACKUP_COUNT=0
# Count credential backups
CRED_BACKUPS=$(ls /etc/patchmon/credentials.backup.* 2>/dev/null | wc -l || echo "0")
if [[ $CRED_BACKUPS -gt 0 ]]; then
BACKUP_COUNT=$((BACKUP_COUNT + CRED_BACKUPS))
fi
# Count agent backups
AGENT_BACKUPS=$(ls /usr/local/bin/patchmon-agent.sh.backup.* 2>/dev/null | wc -l || echo "0")
if [[ $AGENT_BACKUPS -gt 0 ]]; then
BACKUP_COUNT=$((BACKUP_COUNT + AGENT_BACKUPS))
fi
# Count log backups
LOG_BACKUPS=$(ls /var/log/patchmon-agent.log.old.* 2>/dev/null | wc -l || echo "0")
if [[ $LOG_BACKUPS -gt 0 ]]; then
BACKUP_COUNT=$((BACKUP_COUNT + LOG_BACKUPS))
fi
if [[ $BACKUP_COUNT -gt 0 ]]; then
warning "Found $BACKUP_COUNT backup files"
echo ""
echo -e "${YELLOW}📋 Backup files found:${NC}"
# Show credential backups
if [[ $CRED_BACKUPS -gt 0 ]]; then
echo " Credential backups:"
ls /etc/patchmon/credentials.backup.* 2>/dev/null | while read -r file; do
echo "$file"
done
fi
# Show agent backups
if [[ $AGENT_BACKUPS -gt 0 ]]; then
echo " Agent script backups:"
ls /usr/local/bin/patchmon-agent.sh.backup.* 2>/dev/null | while read -r file; do
echo "$file"
done
fi
# Show log backups
if [[ $LOG_BACKUPS -gt 0 ]]; then
echo " Log file backups:"
ls /var/log/patchmon-agent.log.old.* 2>/dev/null | while read -r file; do
echo "$file"
done
fi
echo ""
echo -e "${BLUE}💡 Note: Backup files are preserved for safety${NC}"
echo -e "${BLUE}💡 You can remove them manually if not needed${NC}"
else
info "No backup files found"
fi
# Step 7: Remove dependencies (optional)
info "📦 Checking for PatchMon-specific dependencies..."
if command -v jq >/dev/null 2>&1; then
warning "jq is installed (used by PatchMon)"
echo -e "${BLUE}💡 Note: jq may be used by other applications${NC}"
echo -e "${BLUE}💡 Consider keeping it unless you're sure it's not needed${NC}"
else
info "jq not found"
fi
if command -v curl >/dev/null 2>&1; then
warning "curl is installed (used by PatchMon)"
echo -e "${BLUE}💡 Note: curl is commonly used by many applications${NC}"
echo -e "${BLUE}💡 Consider keeping it unless you're sure it's not needed${NC}"
else
info "curl not found"
fi
# Step 8: Final verification
info "🔍 Verifying removal..."
REMAINING_FILES=0
if [[ -f "/usr/local/bin/patchmon-agent.sh" ]]; then
REMAINING_FILES=$((REMAINING_FILES + 1))
fi
if [[ -d "/etc/patchmon" ]]; then
REMAINING_FILES=$((REMAINING_FILES + 1))
fi
if [[ -f "/var/log/patchmon-agent.log" ]]; then
REMAINING_FILES=$((REMAINING_FILES + 1))
fi
if crontab -l 2>/dev/null | grep -q "patchmon-agent.sh"; then
REMAINING_FILES=$((REMAINING_FILES + 1))
fi
if [[ $REMAINING_FILES -eq 0 ]]; then
success "✅ PatchMon has been completely removed from the system!"
else
warning "⚠️ Some PatchMon files may still remain ($REMAINING_FILES items)"
echo -e "${BLUE}💡 You may need to remove them manually${NC}"
fi
echo ""
echo -e "${GREEN}📋 Removal Summary:${NC}"
echo " • Agent script: Removed"
echo " • Configuration files: Removed"
echo " • Log files: Removed"
echo " • Crontab entries: Removed"
echo " • Running processes: Stopped"
echo " • Backup files: Preserved (if any)"
echo ""
echo -e "${BLUE}🔧 Manual cleanup (if needed):${NC}"
echo " • Remove backup files: rm /etc/patchmon/credentials.backup.* /usr/local/bin/patchmon-agent.sh.backup.* /var/log/patchmon-agent.log.old.*"
echo " • Remove dependencies: apt remove jq curl (if not needed by other apps)"
echo ""
success "🎉 PatchMon removal completed!"

agents/proxmox_auto_enroll.sh Executable file

@@ -0,0 +1,465 @@
#!/bin/bash
set -eo pipefail # Exit on error, pipe failures (removed -u as we handle unset vars explicitly)
# Trap to catch errors only (not normal exits)
trap 'echo "[ERROR] Script failed at line $LINENO with exit code $?"' ERR
SCRIPT_VERSION="2.0.0"
echo "[DEBUG] Script Version: $SCRIPT_VERSION ($(date +%Y-%m-%d\ %H:%M:%S))"
# =============================================================================
# PatchMon Proxmox LXC Auto-Enrollment Script
# =============================================================================
# This script discovers LXC containers on a Proxmox host and automatically
# enrolls them into PatchMon for patch management.
#
# Usage:
# 1. Set environment variables or edit configuration below
# 2. Run: bash proxmox_auto_enroll.sh
#
# Requirements:
# - Must run on Proxmox host (requires 'pct' command)
# - Auto-enrollment token from PatchMon
# - Network access to PatchMon server
# =============================================================================
# ===== CONFIGURATION =====
PATCHMON_URL="${PATCHMON_URL:-https://patchmon.example.com}"
AUTO_ENROLLMENT_KEY="${AUTO_ENROLLMENT_KEY:-}"
AUTO_ENROLLMENT_SECRET="${AUTO_ENROLLMENT_SECRET:-}"
CURL_FLAGS="${CURL_FLAGS:--s}"
DRY_RUN="${DRY_RUN:-false}"
HOST_PREFIX="${HOST_PREFIX:-}"
SKIP_STOPPED="${SKIP_STOPPED:-true}"
PARALLEL_INSTALL="${PARALLEL_INSTALL:-false}"
MAX_PARALLEL="${MAX_PARALLEL:-5}"
FORCE_INSTALL="${FORCE_INSTALL:-false}"
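# Example invocation (a sketch - the key and secret values are placeholders):
#   AUTO_ENROLLMENT_KEY="your-key" AUTO_ENROLLMENT_SECRET="your-secret" \
#   PATCHMON_URL="https://patchmon.example.com" DRY_RUN=true \
#   bash proxmox_auto_enroll.sh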
# ===== COLOR OUTPUT =====
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# ===== LOGGING FUNCTIONS =====
info() { echo -e "${GREEN}[INFO]${NC} $1"; return 0; }
warn() { echo -e "${YELLOW}[WARN]${NC} $1"; return 0; }
error() { echo -e "${RED}[ERROR]${NC} $1"; exit 1; }
success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; return 0; }
debug() { [[ "${DEBUG:-false}" == "true" ]] && echo -e "${BLUE}[DEBUG]${NC} $1" || true; return 0; }
# ===== BANNER =====
cat << "EOF"
╔═══════════════════════════════════════════════════════════════╗
║                                                               ║
║    ____       _       _     __  __                            ║
║   |  _ \ __ _| |_ ___| |__ |  \/  | ___  _ __                 ║
║   | |_) / _` | __/ __| '_ \| |\/| |/ _ \| '_ \                ║
║   |  __/ (_| | || (__| | | | |  | | (_) | | | |               ║
║   |_|   \__,_|\__\___|_| |_|_|  |_|\___/|_| |_|               ║
║                                                               ║
║             Proxmox LXC Auto-Enrollment Script                ║
║                                                               ║
╚═══════════════════════════════════════════════════════════════╝
EOF
echo ""
# ===== VALIDATION =====
info "Validating configuration..."
if [[ -z "$AUTO_ENROLLMENT_KEY" ]] || [[ -z "$AUTO_ENROLLMENT_SECRET" ]]; then
error "AUTO_ENROLLMENT_KEY and AUTO_ENROLLMENT_SECRET must be set"
fi
if [[ -z "$PATCHMON_URL" ]]; then
error "PATCHMON_URL must be set"
fi
# Check if running on Proxmox
if ! command -v pct &> /dev/null; then
error "This script must run on a Proxmox host (pct command not found)"
fi
# Check for required commands
for cmd in curl jq; do
if ! command -v $cmd &> /dev/null; then
error "Required command '$cmd' not found. Please install it first."
fi
done
info "Configuration validated successfully"
info "PatchMon Server: $PATCHMON_URL"
info "Dry Run Mode: $DRY_RUN"
info "Skip Stopped Containers: $SKIP_STOPPED"
echo ""
# ===== DISCOVER LXC CONTAINERS =====
info "Discovering LXC containers..."
lxc_list=$(pct list | tail -n +2) # Skip header
if [[ -z "$lxc_list" ]]; then
warn "No LXC containers found on this Proxmox host"
exit 0
fi
# Count containers
total_containers=$(echo "$lxc_list" | wc -l)
info "Found $total_containers LXC container(s)"
echo ""
info "Initializing statistics..."
# ===== STATISTICS =====
enrolled_count=0
skipped_count=0
failed_count=0
# Track containers with dpkg errors for later recovery
declare -A dpkg_error_containers
# Track all failed containers for summary
declare -A failed_containers
info "Statistics initialized"
# ===== PROCESS CONTAINERS =====
info "Starting container processing loop..."
while IFS= read -r line; do
info "[DEBUG] Read line from lxc_list"
vmid=$(echo "$line" | awk '{print $1}')
status=$(echo "$line" | awk '{print $2}')
name=$(echo "$line" | awk '{print $3}')
info "Processing LXC $vmid: $name (status: $status)"
# Skip stopped containers if configured
if [[ "$status" != "running" ]] && [[ "$SKIP_STOPPED" == "true" ]]; then
warn " Skipping $name - container not running"
((skipped_count++)) || true
echo ""
continue
fi
# Check if container is stopped
if [[ "$status" != "running" ]]; then
warn " Container $name is stopped - cannot gather info or install agent"
((skipped_count++)) || true
echo ""
continue
fi
# Get container details
debug " Gathering container information..."
hostname=$(timeout 5 pct exec "$vmid" -- hostname 2>/dev/null </dev/null || echo "$name")
ip_address=$(timeout 5 pct exec "$vmid" -- hostname -I 2>/dev/null </dev/null | awk '{print $1}' || echo "unknown")
os_info=$(timeout 5 pct exec "$vmid" -- cat /etc/os-release 2>/dev/null </dev/null | grep "^PRETTY_NAME=" | cut -d'"' -f2 || echo "unknown")
# Detect container architecture
debug " Detecting container architecture..."
arch_raw=$(timeout 5 pct exec "$vmid" -- uname -m 2>/dev/null </dev/null || echo "unknown")
# Map architecture to supported values
case "$arch_raw" in
"x86_64")
architecture="amd64"
;;
"i386"|"i686")
architecture="386"
;;
"aarch64"|"arm64")
architecture="arm64"
;;
"armv7l"|"armv6l"|"arm")
architecture="arm"
;;
*)
warn " ⚠ Unknown architecture '$arch_raw', defaulting to amd64"
architecture="amd64"
;;
esac
debug " Detected architecture: $arch_raw -> $architecture"
# Get machine ID from container
machine_id=$(timeout 5 pct exec "$vmid" -- bash -c "cat /etc/machine-id 2>/dev/null || cat /var/lib/dbus/machine-id 2>/dev/null || echo 'proxmox-lxc-$vmid-'$(cat /proc/sys/kernel/random/uuid)" </dev/null 2>/dev/null || echo "proxmox-lxc-$vmid-unknown")
friendly_name="${HOST_PREFIX}${hostname}"
info " Hostname: $hostname"
info " IP Address: $ip_address"
info " OS: $os_info"
info " Architecture: $architecture ($arch_raw)"
info " Machine ID: ${machine_id:0:16}..."
if [[ "$DRY_RUN" == "true" ]]; then
info " [DRY RUN] Would enroll: $friendly_name"
((enrolled_count++)) || true
echo ""
continue
fi
# Call PatchMon auto-enrollment API
info " Enrolling $friendly_name in PatchMon..."
response=$(curl $CURL_FLAGS -X POST \
-H "X-Auto-Enrollment-Key: $AUTO_ENROLLMENT_KEY" \
-H "X-Auto-Enrollment-Secret: $AUTO_ENROLLMENT_SECRET" \
-H "Content-Type: application/json" \
-d "{
\"friendly_name\": \"$friendly_name\",
\"machine_id\": \"$machine_id\",
\"metadata\": {
\"vmid\": \"$vmid\",
\"proxmox_node\": \"$(hostname)\",
\"ip_address\": \"$ip_address\",
\"os_info\": \"$os_info\"
}
}" \
"$PATCHMON_URL/api/v1/auto-enrollment/enroll" \
-w "\n%{http_code}" 2>&1)
http_code=$(echo "$response" | tail -n 1)
body=$(echo "$response" | sed '$d')
if [[ "$http_code" == "201" ]]; then
api_id=$(echo "$body" | jq -r '.host.api_id' 2>/dev/null || echo "")
api_key=$(echo "$body" | jq -r '.host.api_key' 2>/dev/null || echo "")
if [[ -z "$api_id" ]] || [[ -z "$api_key" ]]; then
error " Failed to parse API credentials from response"
fi
info " ✓ Host enrolled successfully: $api_id"
# Ensure curl is installed in the container
info " Checking for curl in container..."
curl_check=$(timeout 10 pct exec "$vmid" -- bash -c "command -v curl >/dev/null 2>&1 && echo 'installed' || echo 'missing'" 2>/dev/null </dev/null || echo "error")
if [[ "$curl_check" == "missing" ]]; then
info " Installing curl in container..."
# Detect package manager and install curl
curl_install_output=$(timeout 60 pct exec "$vmid" -- bash -c "
if command -v apt-get >/dev/null 2>&1; then
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq && apt-get install -y -qq curl
elif command -v yum >/dev/null 2>&1; then
yum install -y -q curl
elif command -v dnf >/dev/null 2>&1; then
dnf install -y -q curl
elif command -v apk >/dev/null 2>&1; then
apk add --no-cache curl
else
echo 'ERROR: No supported package manager found'
exit 1
fi
" 2>&1 </dev/null) || true
if [[ "$curl_install_output" == *"ERROR: No supported package manager"* ]]; then
warn " ✗ Could not install curl - no supported package manager found"
failed_containers["$vmid"]="$friendly_name|No package manager for curl|$curl_install_output"
((failed_count++)) || true
echo ""
sleep 1
continue
else
info " ✓ curl installed successfully"
fi
else
info " ✓ curl already installed"
fi
# Install PatchMon agent in container
info " Installing PatchMon agent..."
# Build install URL with force flag and architecture if enabled
install_url="$PATCHMON_URL/api/v1/hosts/install?arch=$architecture"
if [[ "$FORCE_INSTALL" == "true" ]]; then
install_url="$install_url&force=true"
info " Using force mode - will bypass broken packages"
fi
info " Using architecture: $architecture"
# Reset exit code for this container
install_exit_code=0
# Download and execute in separate steps to avoid stdin issues with piping
install_output=$(timeout 180 pct exec "$vmid" -- bash -c "
cd /tmp
curl $CURL_FLAGS \
-H \"X-API-ID: $api_id\" \
-H \"X-API-KEY: $api_key\" \
-o patchmon-install.sh \
'$install_url' && \
bash patchmon-install.sh && \
rm -f patchmon-install.sh
" 2>&1 </dev/null) || install_exit_code=$?
# Check both exit code AND success message in output for reliability
if [[ $install_exit_code -eq 0 ]] || [[ "$install_output" == *"PatchMon Agent installation completed successfully"* ]]; then
info " ✓ Agent installed successfully in $friendly_name"
((enrolled_count++)) || true
elif [[ $install_exit_code -eq 124 ]]; then
warn " ⏱ Agent installation timed out (>180s) in $friendly_name"
info " Install output: $install_output"
# Store failure details
failed_containers["$vmid"]="$friendly_name|Timeout (>180s)|$install_output"
((failed_count++)) || true
else
# Check if it's a dpkg error
if [[ "$install_output" == *"dpkg was interrupted"* ]] || [[ "$install_output" == *"dpkg --configure -a"* ]]; then
warn " ⚠ Failed due to dpkg error in $friendly_name (can be fixed)"
dpkg_error_containers["$vmid"]="$friendly_name:$api_id:$api_key"
# Store failure details
failed_containers["$vmid"]="$friendly_name|dpkg error|$install_output"
else
warn " ✗ Failed to install agent in $friendly_name (exit: $install_exit_code)"
# Store failure details
failed_containers["$vmid"]="$friendly_name|Exit code $install_exit_code|$install_output"
fi
info " Install output: $install_output"
((failed_count++)) || true
fi
elif [[ "$http_code" == "409" ]]; then
warn " ⊘ Host $friendly_name already enrolled - skipping"
((skipped_count++)) || true
elif [[ "$http_code" == "429" ]]; then
error " ✗ Rate limit exceeded - maximum hosts per day reached"
failed_containers["$vmid"]="$friendly_name|Rate limit exceeded|$body"
((failed_count++)) || true
else
error " ✗ Failed to enroll $friendly_name - HTTP $http_code"
debug " Response: $body"
failed_containers["$vmid"]="$friendly_name|HTTP $http_code enrollment failed|$body"
((failed_count++)) || true
fi
echo ""
sleep 1 # Rate limiting between containers
done <<< "$lxc_list"
# ===== SUMMARY =====
echo ""
echo "╔═══════════════════════════════════════════════════════════════╗"
echo "║ ENROLLMENT SUMMARY ║"
echo "╚═══════════════════════════════════════════════════════════════╝"
echo ""
info "Total Containers Found: $total_containers"
info "Successfully Enrolled: $enrolled_count"
info "Skipped: $skipped_count"
info "Failed: $failed_count"
echo ""
# ===== FAILURE DETAILS =====
if [[ ${#failed_containers[@]} -gt 0 ]]; then
echo "╔═══════════════════════════════════════════════════════════════╗"
echo "║ FAILURE DETAILS ║"
echo "╚═══════════════════════════════════════════════════════════════╝"
echo ""
for vmid in "${!failed_containers[@]}"; do
IFS='|' read -r name reason output <<< "${failed_containers[$vmid]}"
warn "Container $vmid: $name"
info " Reason: $reason"
info " Last 5 lines of output:"
# Get last 5 lines of output
last_5_lines=$(echo "$output" | tail -n 5)
# Display each line with proper indentation
while IFS= read -r line; do
echo " $line"
done <<< "$last_5_lines"
echo ""
done
fi
if [[ "$DRY_RUN" == "true" ]]; then
warn "This was a DRY RUN - no actual changes were made"
warn "Set DRY_RUN=false to perform actual enrollment"
fi
# ===== DPKG ERROR RECOVERY =====
if [[ ${#dpkg_error_containers[@]} -gt 0 ]]; then
echo ""
echo "╔═══════════════════════════════════════════════════════════════╗"
echo "║ DPKG ERROR RECOVERY AVAILABLE ║"
echo "╚═══════════════════════════════════════════════════════════════╝"
echo ""
warn "Detected ${#dpkg_error_containers[@]} container(s) with dpkg errors:"
for vmid in "${!dpkg_error_containers[@]}"; do
IFS=':' read -r name api_id api_key arch <<< "${dpkg_error_containers[$vmid]}"
info " • Container $vmid: $name"
done
echo ""
# Ask user if they want to fix dpkg errors
read -p "Would you like to fix dpkg errors and retry installation? (y/N): " -n 1 -r
echo ""
if [[ $REPLY =~ ^[Yy]$ ]]; then
echo ""
info "Starting dpkg recovery process..."
echo ""
recovered_count=0
for vmid in "${!dpkg_error_containers[@]}"; do
IFS=':' read -r name api_id api_key arch <<< "${dpkg_error_containers[$vmid]}"
info "Fixing dpkg in container $vmid ($name)..."
# Run dpkg --configure -a
dpkg_exit=0
dpkg_output=$(timeout 60 pct exec "$vmid" -- dpkg --configure -a 2>&1 </dev/null) || dpkg_exit=$?
if [[ $dpkg_exit -eq 0 ]]; then
info " ✓ dpkg fixed successfully"
# Retry agent installation
info " Retrying agent installation..."
install_exit_code=0
install_output=$(timeout 180 pct exec "$vmid" -- bash -c "
cd /tmp
curl $CURL_FLAGS \
-H \"X-API-ID: $api_id\" \
-H \"X-API-KEY: $api_key\" \
-o patchmon-install.sh \
'$PATCHMON_URL/api/v1/hosts/install?arch=$arch' && \
bash patchmon-install.sh && \
rm -f patchmon-install.sh
" 2>&1 </dev/null) || install_exit_code=$?
if [[ $install_exit_code -eq 0 ]] || [[ "$install_output" == *"PatchMon Agent installation completed successfully"* ]]; then
info " ✓ Agent installed successfully in $name"
((recovered_count++)) || true
((enrolled_count++)) || true
((failed_count--)) || true
else
warn " ✗ Agent installation still failed (exit: $install_exit_code)"
fi
else
warn " ✗ Failed to fix dpkg in $name"
info " dpkg output: $dpkg_output"
fi
echo ""
done
echo ""
info "Recovery complete: $recovered_count container(s) recovered"
echo ""
fi
fi
if [[ $failed_count -gt 0 ]]; then
warn "Some containers failed to enroll. Check the logs above for details."
exit 1
fi
info "Auto-enrollment complete! ✓"
exit 0


@@ -1,67 +0,0 @@
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();
async function checkAgentVersion() {
try {
// Check current agent version in database
const agentVersion = await prisma.agentVersion.findFirst({
where: { version: '1.2.5' }
});
if (agentVersion) {
console.log('✅ Agent version 1.2.5 found in database');
console.log('Version:', agentVersion.version);
console.log('Is Default:', agentVersion.isDefault);
console.log('Script Content Length:', agentVersion.scriptContent?.length || 0);
console.log('Created At:', agentVersion.createdAt);
console.log('Updated At:', agentVersion.updatedAt);
// Check if script content contains the current version
if (agentVersion.scriptContent && agentVersion.scriptContent.includes('AGENT_VERSION="1.2.5"')) {
console.log('✅ Script content contains correct version 1.2.5');
} else {
console.log('❌ Script content does not contain version 1.2.5');
}
// Check if script content contains system info functions
if (agentVersion.scriptContent && agentVersion.scriptContent.includes('get_hardware_info()')) {
console.log('✅ Script content contains hardware info function');
} else {
console.log('❌ Script content missing hardware info function');
}
if (agentVersion.scriptContent && agentVersion.scriptContent.includes('get_network_info()')) {
console.log('✅ Script content contains network info function');
} else {
console.log('❌ Script content missing network info function');
}
if (agentVersion.scriptContent && agentVersion.scriptContent.includes('get_system_info()')) {
console.log('✅ Script content contains system info function');
} else {
console.log('❌ Script content missing system info function');
}
} else {
console.log('❌ Agent version 1.2.5 not found in database');
}
// List all agent versions
console.log('\n=== All Agent Versions ===');
const allVersions = await prisma.agentVersion.findMany({
orderBy: { createdAt: 'desc' }
});
allVersions.forEach(version => {
console.log(`Version: ${version.version}, Default: ${version.isDefault}, Length: ${version.scriptContent?.length || 0}`);
});
} catch (error) {
console.error('❌ Error checking agent version:', error);
} finally {
await prisma.$disconnect();
}
}
checkAgentVersion();


@@ -1,17 +1,49 @@
# Database Configuration
-DATABASE_URL="postgresql://patchmon_user:p@tchm0n_p@55@localhost:5432/patchmon_db"
+DATABASE_URL="postgresql://patchmon_user:your-password-here@localhost:5432/patchmon_db"
+PM_DB_CONN_MAX_ATTEMPTS=30
+PM_DB_CONN_WAIT_INTERVAL=2
# JWT Configuration
JWT_SECRET=your-secure-random-secret-key-change-this-in-production
JWT_EXPIRES_IN=1h
JWT_REFRESH_EXPIRES_IN=7d
# Server Configuration
PORT=3001
-NODE_ENV=development
+NODE_ENV=production
# API Configuration
API_VERSION=v1
# CORS Configuration
CORS_ORIGIN=http://localhost:3000
-# Rate Limiting
+# Session Configuration
+SESSION_INACTIVITY_TIMEOUT_MINUTES=30
+# User Configuration
+DEFAULT_USER_ROLE=user
+# Rate Limiting (times in milliseconds)
RATE_LIMIT_WINDOW_MS=900000
-RATE_LIMIT_MAX=100
+RATE_LIMIT_MAX=5000
+AUTH_RATE_LIMIT_WINDOW_MS=600000
+AUTH_RATE_LIMIT_MAX=500
+AGENT_RATE_LIMIT_WINDOW_MS=60000
+AGENT_RATE_LIMIT_MAX=1000
+# Redis Configuration
+REDIS_HOST=localhost
+REDIS_PORT=6379
+REDIS_USER=your-redis-username-here
+REDIS_PASSWORD=your-redis-password-here
+REDIS_DB=0
# Logging
-LOG_LEVEL=info
+LOG_LEVEL=info
+ENABLE_LOGGING=true
+# TFA Configuration (optional - used if TFA is enabled)
+TFA_REMEMBER_ME_EXPIRES_IN=30d
+TFA_MAX_REMEMBER_SESSIONS=5
+TFA_SUSPICIOUS_ACTIVITY_THRESHOLD=3
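# A sketch for generating strong values for JWT_SECRET and REDIS_PASSWORD
# (illustrative, not the project's documented procedure):
#   openssl rand -hex 32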


@@ -1,38 +1,47 @@
{
-"name": "patchmon-backend",
-"version": "1.2.5",
-"description": "Backend API for Linux Patch Monitoring System",
-"main": "src/server.js",
-"scripts": {
-"dev": "nodemon src/server.js",
-"start": "node src/server.js",
-"build": "echo 'No build step needed for Node.js'",
-"db:generate": "prisma generate",
-"db:migrate": "prisma migrate dev",
-"db:push": "prisma db push",
-"db:studio": "prisma studio"
-},
-"dependencies": {
-"@prisma/client": "^5.7.0",
-"bcryptjs": "^2.4.3",
-"cors": "^2.8.5",
-"dotenv": "^16.3.1",
-"express": "^4.18.2",
-"express-rate-limit": "^7.1.5",
-"express-validator": "^7.0.1",
-"helmet": "^7.1.0",
-"jsonwebtoken": "^9.0.2",
-"moment": "^2.30.1",
-"qrcode": "^1.5.4",
-"speakeasy": "^2.0.0",
-"uuid": "^9.0.1",
-"winston": "^3.11.0"
-},
-"devDependencies": {
-"nodemon": "^3.0.2",
-"prisma": "^5.7.0"
-},
-"engines": {
-"node": ">=18.0.0"
-}
+"name": "patchmon-backend",
+"version": "1.3.0",
+"description": "Backend API for Linux Patch Monitoring System",
+"license": "AGPL-3.0",
+"main": "src/server.js",
+"scripts": {
+"dev": "nodemon src/server.js",
+"start": "node src/server.js",
+"build": "echo 'No build step needed for Node.js'",
+"db:generate": "prisma generate",
+"db:migrate": "prisma migrate dev",
+"db:push": "prisma db push",
+"db:studio": "prisma studio"
+},
+"dependencies": {
+"@bull-board/api": "^6.13.1",
+"@bull-board/express": "^6.13.1",
+"@prisma/client": "^6.1.0",
+"axios": "^1.7.9",
+"bcryptjs": "^2.4.3",
+"bullmq": "^5.61.0",
+"cookie-parser": "^1.4.7",
+"cors": "^2.8.5",
+"dotenv": "^16.4.7",
+"express": "^4.21.2",
+"express-rate-limit": "^7.5.0",
+"express-validator": "^7.2.0",
+"helmet": "^8.0.0",
+"ioredis": "^5.8.1",
+"jsonwebtoken": "^9.0.2",
+"moment": "^2.30.1",
+"qrcode": "^1.5.4",
+"speakeasy": "^2.0.0",
+"uuid": "^11.0.3",
+"winston": "^3.17.0",
+"ws": "^8.18.0"
+},
+"devDependencies": {
+"@types/bcryptjs": "^2.4.6",
+"nodemon": "^3.1.9",
+"prisma": "^6.1.0"
+},
+"engines": {
+"node": ">=18.0.0"
+}
}
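Typical local workflow using the scripts defined above (a sketch; assumes Node.js >= 18 and a configured DATABASE_URL):

npm install
npm run db:generate
npm run db:migrate
npm start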


@@ -0,0 +1,2 @@
-- AlterTable
ALTER TABLE "settings" DROP COLUMN "frontend_url";


@@ -0,0 +1,2 @@
-- AlterTable
ALTER TABLE "settings" ADD COLUMN "signup_enabled" BOOLEAN NOT NULL DEFAULT false;


@@ -0,0 +1,3 @@
-- AlterTable
ALTER TABLE "users" ADD COLUMN "first_name" TEXT,
ADD COLUMN "last_name" TEXT;


@@ -0,0 +1,2 @@
-- AlterTable
ALTER TABLE "settings" ADD COLUMN "default_user_role" TEXT NOT NULL DEFAULT 'user';


@@ -0,0 +1,65 @@
-- Initialize default dashboard preferences for all existing users
-- This migration ensures that all users have proper role-based dashboard preferences
-- Function to create default dashboard preferences for a user
CREATE OR REPLACE FUNCTION init_user_dashboard_preferences(user_id TEXT, user_role TEXT)
RETURNS VOID AS $$
DECLARE
pref_record RECORD;
BEGIN
-- Delete any existing preferences for this user
DELETE FROM dashboard_preferences WHERE dashboard_preferences.user_id = init_user_dashboard_preferences.user_id;
-- Insert role-based preferences
IF user_role = 'admin' THEN
-- Admin gets full access to all cards (iby's preferred layout)
INSERT INTO dashboard_preferences (id, user_id, card_id, enabled, "order", created_at, updated_at)
VALUES
(gen_random_uuid(), user_id, 'totalHosts', true, 0, NOW(), NOW()),
(gen_random_uuid(), user_id, 'hostsNeedingUpdates', true, 1, NOW(), NOW()),
(gen_random_uuid(), user_id, 'totalOutdatedPackages', true, 2, NOW(), NOW()),
(gen_random_uuid(), user_id, 'securityUpdates', true, 3, NOW(), NOW()),
(gen_random_uuid(), user_id, 'totalHostGroups', true, 4, NOW(), NOW()),
(gen_random_uuid(), user_id, 'upToDateHosts', true, 5, NOW(), NOW()),
(gen_random_uuid(), user_id, 'totalRepos', true, 6, NOW(), NOW()),
(gen_random_uuid(), user_id, 'totalUsers', true, 7, NOW(), NOW()),
(gen_random_uuid(), user_id, 'osDistribution', true, 8, NOW(), NOW()),
(gen_random_uuid(), user_id, 'osDistributionBar', true, 9, NOW(), NOW()),
(gen_random_uuid(), user_id, 'recentCollection', true, 10, NOW(), NOW()),
(gen_random_uuid(), user_id, 'updateStatus', true, 11, NOW(), NOW()),
(gen_random_uuid(), user_id, 'packagePriority', true, 12, NOW(), NOW()),
(gen_random_uuid(), user_id, 'recentUsers', true, 13, NOW(), NOW()),
(gen_random_uuid(), user_id, 'quickStats', true, 14, NOW(), NOW());
ELSE
-- Regular users get comprehensive layout but without user management cards
INSERT INTO dashboard_preferences (id, user_id, card_id, enabled, "order", created_at, updated_at)
VALUES
(gen_random_uuid(), user_id, 'totalHosts', true, 0, NOW(), NOW()),
(gen_random_uuid(), user_id, 'hostsNeedingUpdates', true, 1, NOW(), NOW()),
(gen_random_uuid(), user_id, 'totalOutdatedPackages', true, 2, NOW(), NOW()),
(gen_random_uuid(), user_id, 'securityUpdates', true, 3, NOW(), NOW()),
(gen_random_uuid(), user_id, 'totalHostGroups', true, 4, NOW(), NOW()),
(gen_random_uuid(), user_id, 'upToDateHosts', true, 5, NOW(), NOW()),
(gen_random_uuid(), user_id, 'totalRepos', true, 6, NOW(), NOW()),
(gen_random_uuid(), user_id, 'osDistribution', true, 7, NOW(), NOW()),
(gen_random_uuid(), user_id, 'osDistributionBar', true, 8, NOW(), NOW()),
(gen_random_uuid(), user_id, 'recentCollection', true, 9, NOW(), NOW()),
(gen_random_uuid(), user_id, 'updateStatus', true, 10, NOW(), NOW()),
(gen_random_uuid(), user_id, 'packagePriority', true, 11, NOW(), NOW()),
(gen_random_uuid(), user_id, 'quickStats', true, 12, NOW(), NOW());
END IF;
END;
$$ LANGUAGE plpgsql;
-- Apply default preferences to all existing users
DO $$
DECLARE
user_record RECORD;
BEGIN
FOR user_record IN SELECT id, role FROM users LOOP
PERFORM init_user_dashboard_preferences(user_record.id, user_record.role);
END LOOP;
END $$;
-- Drop the temporary function
DROP FUNCTION init_user_dashboard_preferences(TEXT, TEXT);
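-- Note: Prisma applies this file automatically during "npm run db:migrate"; for a
-- standalone test run, something like the following works (a sketch - the file
-- path is illustrative):
--   psql "$DATABASE_URL" -f migration.sql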


@@ -0,0 +1,12 @@
-- Remove dashboard preferences population
-- This migration clears all existing dashboard preferences so they can be recreated
-- with the correct default order by server.js initialization
-- Clear all existing dashboard preferences
-- This ensures users get the correct default order from server.js
DELETE FROM dashboard_preferences;
-- Recreate indexes for better performance
CREATE INDEX IF NOT EXISTS "dashboard_preferences_user_id_idx" ON "dashboard_preferences"("user_id");
CREATE INDEX IF NOT EXISTS "dashboard_preferences_card_id_idx" ON "dashboard_preferences"("card_id");
CREATE INDEX IF NOT EXISTS "dashboard_preferences_user_card_idx" ON "dashboard_preferences"("user_id", "card_id");


@@ -0,0 +1,10 @@
-- Fix dashboard preferences unique constraint
-- This migration fixes the unique constraint on dashboard_preferences table
-- Drop existing indexes if they exist
DROP INDEX IF EXISTS "dashboard_preferences_card_id_key";
DROP INDEX IF EXISTS "dashboard_preferences_user_id_card_id_key";
DROP INDEX IF EXISTS "dashboard_preferences_user_id_key";
-- Add the correct unique constraint
ALTER TABLE "dashboard_preferences" ADD CONSTRAINT "dashboard_preferences_user_id_card_id_key" UNIQUE ("user_id", "card_id");


@@ -0,0 +1,2 @@
-- DropTable
DROP TABLE "agent_versions";


@@ -0,0 +1,4 @@
-- Add ignore_ssl_self_signed column to settings table
-- This allows users to configure whether curl commands should ignore SSL certificate validation
ALTER TABLE "settings" ADD COLUMN "ignore_ssl_self_signed" BOOLEAN NOT NULL DEFAULT false;


@@ -0,0 +1,3 @@
-- AlterTable
ALTER TABLE "hosts" ADD COLUMN "notes" TEXT;


@@ -0,0 +1,37 @@
-- CreateTable
CREATE TABLE "auto_enrollment_tokens" (
"id" TEXT NOT NULL,
"token_name" TEXT NOT NULL,
"token_key" TEXT NOT NULL,
"token_secret" TEXT NOT NULL,
"created_by_user_id" TEXT,
"is_active" BOOLEAN NOT NULL DEFAULT true,
"allowed_ip_ranges" TEXT[],
"max_hosts_per_day" INTEGER NOT NULL DEFAULT 100,
"hosts_created_today" INTEGER NOT NULL DEFAULT 0,
"last_reset_date" DATE NOT NULL DEFAULT CURRENT_TIMESTAMP,
"default_host_group_id" TEXT,
"created_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" TIMESTAMP(3) NOT NULL,
"last_used_at" TIMESTAMP(3),
"expires_at" TIMESTAMP(3),
"metadata" JSONB,
CONSTRAINT "auto_enrollment_tokens_pkey" PRIMARY KEY ("id")
);
-- CreateIndex
CREATE UNIQUE INDEX "auto_enrollment_tokens_token_key_key" ON "auto_enrollment_tokens"("token_key");
-- CreateIndex
CREATE INDEX "auto_enrollment_tokens_token_key_idx" ON "auto_enrollment_tokens"("token_key");
-- CreateIndex
CREATE INDEX "auto_enrollment_tokens_is_active_idx" ON "auto_enrollment_tokens"("is_active");
-- AddForeignKey
ALTER TABLE "auto_enrollment_tokens" ADD CONSTRAINT "auto_enrollment_tokens_created_by_user_id_fkey" FOREIGN KEY ("created_by_user_id") REFERENCES "users"("id") ON DELETE SET NULL ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "auto_enrollment_tokens" ADD CONSTRAINT "auto_enrollment_tokens_default_host_group_id_fkey" FOREIGN KEY ("default_host_group_id") REFERENCES "host_groups"("id") ON DELETE SET NULL ON UPDATE CASCADE;


@@ -0,0 +1,20 @@
-- Add machine_id column as nullable first
ALTER TABLE "hosts" ADD COLUMN "machine_id" TEXT;
-- Generate machine_ids for existing hosts using their API ID as a fallback
UPDATE "hosts" SET "machine_id" = 'migrated-' || "api_id" WHERE "machine_id" IS NULL;
-- Remove the unique constraint from friendly_name
ALTER TABLE "hosts" DROP CONSTRAINT IF EXISTS "hosts_friendly_name_key";
-- Also drop the unique index if it exists (constraint and index can exist separately)
DROP INDEX IF EXISTS "hosts_friendly_name_key";
-- Now make machine_id NOT NULL and add unique constraint
ALTER TABLE "hosts" ALTER COLUMN "machine_id" SET NOT NULL;
ALTER TABLE "hosts" ADD CONSTRAINT "hosts_machine_id_key" UNIQUE ("machine_id");
-- Create indexes for better query performance
CREATE INDEX "hosts_machine_id_idx" ON "hosts"("machine_id");
CREATE INDEX "hosts_friendly_name_idx" ON "hosts"("friendly_name");


@@ -0,0 +1,4 @@
-- AddLogoFieldsToSettings
ALTER TABLE "settings" ADD COLUMN "logo_dark" VARCHAR(255) DEFAULT '/assets/logo_dark.png';
ALTER TABLE "settings" ADD COLUMN "logo_light" VARCHAR(255) DEFAULT '/assets/logo_light.png';
ALTER TABLE "settings" ADD COLUMN "favicon" VARCHAR(255) DEFAULT '/assets/logo_square.svg';


@@ -0,0 +1,64 @@
-- Reconcile user_sessions migration from 1.2.7 to 1.2.8+
-- This migration handles the case where 1.2.7 had 'add_user_sessions' without timestamp
-- and 1.2.8+ renamed it to '20251005000000_add_user_sessions' with timestamp
DO $$
DECLARE
table_exists boolean := false;
migration_exists boolean := false;
BEGIN
-- Check if user_sessions table exists
SELECT EXISTS (
SELECT 1 FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name = 'user_sessions'
) INTO table_exists;
-- Check if the migration record already exists
SELECT EXISTS (
SELECT 1 FROM _prisma_migrations
WHERE migration_name = '20251005000000_add_user_sessions'
) INTO migration_exists;
-- If table exists but no migration record, create one
IF table_exists AND NOT migration_exists THEN
RAISE NOTICE 'Table exists but no migration record found - creating migration record for 1.2.7 upgrade';
-- Insert a successful migration record for the existing table
INSERT INTO _prisma_migrations (
id,
checksum,
finished_at,
migration_name,
logs,
rolled_back_at,
started_at,
applied_steps_count
) VALUES (
gen_random_uuid()::text,
'', -- Empty checksum since we're reconciling
NOW(),
'20251005000000_add_user_sessions',
'Reconciled from 1.2.7 - table already exists',
NULL,
NOW(),
1
);
RAISE NOTICE 'Migration record created for existing table';
ELSIF table_exists AND migration_exists THEN
RAISE NOTICE 'Table exists and migration record exists - no action needed';
ELSE
RAISE NOTICE 'Table does not exist - migration will proceed normally';
END IF;
-- Additional check: If we have any old migration names, update them
IF EXISTS (SELECT 1 FROM _prisma_migrations WHERE migration_name = 'add_user_sessions') THEN
RAISE NOTICE 'Found old migration name - updating to new format';
UPDATE _prisma_migrations
SET migration_name = '20251005000000_add_user_sessions'
WHERE migration_name = 'add_user_sessions';
RAISE NOTICE 'Old migration name updated';
END IF;
END $$;
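-- To verify the reconciliation took effect, the migration table can be
-- inspected directly (a sketch; table and column names come from the DO block above):
--   SELECT migration_name, finished_at FROM _prisma_migrations
--   WHERE migration_name LIKE '%user_sessions%';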


@@ -0,0 +1,96 @@
-- Reconcile user_sessions migration from 1.2.7 to 1.2.8+
-- This migration handles the case where 1.2.7 had 'add_user_sessions' without timestamp
-- and 1.2.8+ renamed it to '20251005000000_add_user_sessions' with timestamp
DO $$
DECLARE
old_migration_exists boolean := false;
table_exists boolean := false;
failed_migration_exists boolean := false;
BEGIN
-- Check if the old migration name exists
SELECT EXISTS (
SELECT 1 FROM _prisma_migrations
WHERE migration_name = 'add_user_sessions'
) INTO old_migration_exists;
-- Check if user_sessions table exists
SELECT EXISTS (
SELECT 1 FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name = 'user_sessions'
) INTO table_exists;
-- Check if there's a failed migration attempt
SELECT EXISTS (
SELECT 1 FROM _prisma_migrations
WHERE migration_name = '20251005000000_add_user_sessions'
AND finished_at IS NULL
) INTO failed_migration_exists;
-- Scenario 1: Old migration exists, table exists, no failed migration
-- This means 1.2.7 was installed and we need to update the migration name
IF old_migration_exists AND table_exists AND NOT failed_migration_exists THEN
RAISE NOTICE 'Found 1.2.7 migration "add_user_sessions" - updating to timestamped version';
-- Update the old migration name to the new timestamped version
UPDATE _prisma_migrations
SET migration_name = '20251005000000_add_user_sessions'
WHERE migration_name = 'add_user_sessions';
RAISE NOTICE 'Migration name updated: add_user_sessions -> 20251005000000_add_user_sessions';
END IF;
-- Scenario 2: Failed migration exists (upgrade attempt gone wrong)
IF failed_migration_exists THEN
RAISE NOTICE 'Found failed migration attempt - cleaning up';
-- If table exists, it means the migration partially succeeded
IF table_exists THEN
RAISE NOTICE 'Table exists - marking migration as applied';
-- Delete the failed migration record
DELETE FROM _prisma_migrations
WHERE migration_name = '20251005000000_add_user_sessions'
AND finished_at IS NULL;
-- Insert a successful migration record
INSERT INTO _prisma_migrations (
id,
checksum,
finished_at,
migration_name,
logs,
rolled_back_at,
started_at,
applied_steps_count
) VALUES (
gen_random_uuid()::text,
'', -- Empty checksum since we're reconciling
NOW(),
'20251005000000_add_user_sessions',
NULL,
NULL,
NOW(),
1
);
RAISE NOTICE 'Migration marked as successfully applied';
ELSE
RAISE NOTICE 'Table does not exist - removing failed migration to allow retry';
-- Just delete the failed migration to allow it to retry
DELETE FROM _prisma_migrations
WHERE migration_name = '20251005000000_add_user_sessions'
AND finished_at IS NULL;
RAISE NOTICE 'Failed migration removed - will retry on next migration run';
END IF;
END IF;
-- Scenario 3: Everything is clean (fresh install or already reconciled)
IF NOT old_migration_exists AND NOT failed_migration_exists THEN
RAISE NOTICE 'No migration reconciliation needed';
END IF;
END $$;


@@ -0,0 +1,106 @@
-- CreateTable (with existence check for 1.2.7 compatibility)
DO $$
BEGIN
-- Check if table already exists (from 1.2.7 installation)
IF NOT EXISTS (
SELECT 1 FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name = 'user_sessions'
) THEN
-- Table doesn't exist, create it
CREATE TABLE "user_sessions" (
"id" TEXT NOT NULL,
"user_id" TEXT NOT NULL,
"refresh_token" TEXT NOT NULL,
"access_token_hash" TEXT,
"ip_address" TEXT,
"user_agent" TEXT,
"last_activity" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"expires_at" TIMESTAMP(3) NOT NULL,
"created_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"is_revoked" BOOLEAN NOT NULL DEFAULT false,
CONSTRAINT "user_sessions_pkey" PRIMARY KEY ("id")
);
RAISE NOTICE 'Created user_sessions table';
ELSE
RAISE NOTICE 'user_sessions table already exists, skipping creation';
END IF;
END $$;
-- CreateIndex (with existence check)
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_indexes
WHERE tablename = 'user_sessions'
AND indexname = 'user_sessions_refresh_token_key'
) THEN
CREATE UNIQUE INDEX "user_sessions_refresh_token_key" ON "user_sessions"("refresh_token");
RAISE NOTICE 'Created user_sessions_refresh_token_key index';
ELSE
RAISE NOTICE 'user_sessions_refresh_token_key index already exists, skipping';
END IF;
END $$;
-- CreateIndex (with existence check)
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_indexes
WHERE tablename = 'user_sessions'
AND indexname = 'user_sessions_user_id_idx'
) THEN
CREATE INDEX "user_sessions_user_id_idx" ON "user_sessions"("user_id");
RAISE NOTICE 'Created user_sessions_user_id_idx index';
ELSE
RAISE NOTICE 'user_sessions_user_id_idx index already exists, skipping';
END IF;
END $$;
-- CreateIndex (with existence check)
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_indexes
WHERE tablename = 'user_sessions'
AND indexname = 'user_sessions_refresh_token_idx'
) THEN
CREATE INDEX "user_sessions_refresh_token_idx" ON "user_sessions"("refresh_token");
RAISE NOTICE 'Created user_sessions_refresh_token_idx index';
ELSE
RAISE NOTICE 'user_sessions_refresh_token_idx index already exists, skipping';
END IF;
END $$;
-- CreateIndex (with existence check)
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_indexes
WHERE tablename = 'user_sessions'
AND indexname = 'user_sessions_expires_at_idx'
) THEN
CREATE INDEX "user_sessions_expires_at_idx" ON "user_sessions"("expires_at");
RAISE NOTICE 'Created user_sessions_expires_at_idx index';
ELSE
RAISE NOTICE 'user_sessions_expires_at_idx index already exists, skipping';
END IF;
END $$;
-- AddForeignKey (with existence check)
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.table_constraints
WHERE table_name = 'user_sessions'
AND constraint_name = 'user_sessions_user_id_fkey'
) THEN
ALTER TABLE "user_sessions" ADD CONSTRAINT "user_sessions_user_id_fkey" FOREIGN KEY ("user_id") REFERENCES "users"("id") ON DELETE CASCADE ON UPDATE CASCADE;
RAISE NOTICE 'Created user_sessions_user_id_fkey foreign key';
ELSE
RAISE NOTICE 'user_sessions_user_id_fkey foreign key already exists, skipping';
END IF;
END $$;


@@ -0,0 +1,6 @@
-- Add TFA remember me fields to user_sessions table
ALTER TABLE "user_sessions" ADD COLUMN "tfa_remember_me" BOOLEAN NOT NULL DEFAULT false;
ALTER TABLE "user_sessions" ADD COLUMN "tfa_bypass_until" TIMESTAMP(3);
-- Create index for TFA bypass until field for efficient querying
CREATE INDEX "user_sessions_tfa_bypass_until_idx" ON "user_sessions"("tfa_bypass_until");


@@ -0,0 +1,7 @@
-- Add security fields to user_sessions table for production-ready remember me
ALTER TABLE "user_sessions" ADD COLUMN "device_fingerprint" TEXT;
ALTER TABLE "user_sessions" ADD COLUMN "login_count" INTEGER NOT NULL DEFAULT 1;
ALTER TABLE "user_sessions" ADD COLUMN "last_login_ip" TEXT;
-- Create index for device fingerprint for efficient querying
CREATE INDEX "user_sessions_device_fingerprint_idx" ON "user_sessions"("device_fingerprint");


@@ -0,0 +1,3 @@
-- AlterTable
ALTER TABLE "update_history" ADD COLUMN "total_packages" INTEGER;


@@ -0,0 +1,4 @@
-- AlterTable
ALTER TABLE "update_history" ADD COLUMN "payload_size_kb" DOUBLE PRECISION;
ALTER TABLE "update_history" ADD COLUMN "execution_time" DOUBLE PRECISION;


@@ -0,0 +1,30 @@
-- Add indexes to host_packages table for performance optimization
-- These indexes will dramatically speed up queries filtering by host_id, package_id, needs_update, and is_security_update
-- Index for queries filtering by host_id (very common - used when viewing packages for a specific host)
CREATE INDEX IF NOT EXISTS "host_packages_host_id_idx" ON "host_packages"("host_id");
-- Index for queries filtering by package_id (used when finding hosts for a specific package)
CREATE INDEX IF NOT EXISTS "host_packages_package_id_idx" ON "host_packages"("package_id");
-- Index for queries filtering by needs_update (used when finding outdated packages)
CREATE INDEX IF NOT EXISTS "host_packages_needs_update_idx" ON "host_packages"("needs_update");
-- Index for queries filtering by is_security_update (used when finding security updates)
CREATE INDEX IF NOT EXISTS "host_packages_is_security_update_idx" ON "host_packages"("is_security_update");
-- Composite index for the most common query pattern: host_id + needs_update
-- This is optimal for "show me outdated packages for this host"
CREATE INDEX IF NOT EXISTS "host_packages_host_id_needs_update_idx" ON "host_packages"("host_id", "needs_update");
-- Composite index for host_id + needs_update + is_security_update
-- This is optimal for "show me security updates for this host"
CREATE INDEX IF NOT EXISTS "host_packages_host_id_needs_update_security_idx" ON "host_packages"("host_id", "needs_update", "is_security_update");
-- Index for queries filtering by package_id + needs_update
-- This is optimal for "show me hosts where this package needs updates"
CREATE INDEX IF NOT EXISTS "host_packages_package_id_needs_update_idx" ON "host_packages"("package_id", "needs_update");
-- Index on last_checked for cleanup/maintenance queries
CREATE INDEX IF NOT EXISTS "host_packages_last_checked_idx" ON "host_packages"("last_checked");

View File

@@ -0,0 +1,94 @@
-- CreateTable
CREATE TABLE "docker_images" (
"id" TEXT NOT NULL,
"repository" TEXT NOT NULL,
"tag" TEXT NOT NULL DEFAULT 'latest',
"image_id" TEXT NOT NULL,
"digest" TEXT,
"size_bytes" BIGINT,
"source" TEXT NOT NULL DEFAULT 'docker-hub',
"created_at" TIMESTAMP(3) NOT NULL,
"last_pulled" TIMESTAMP(3),
"last_checked" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" TIMESTAMP(3) NOT NULL,
CONSTRAINT "docker_images_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "docker_containers" (
"id" TEXT NOT NULL,
"host_id" TEXT NOT NULL,
"container_id" TEXT NOT NULL,
"name" TEXT NOT NULL,
"image_id" TEXT,
"image_name" TEXT NOT NULL,
"image_tag" TEXT NOT NULL DEFAULT 'latest',
"status" TEXT NOT NULL,
"state" TEXT,
"ports" JSONB,
"created_at" TIMESTAMP(3) NOT NULL,
"started_at" TIMESTAMP(3),
"updated_at" TIMESTAMP(3) NOT NULL,
"last_checked" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "docker_containers_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "docker_image_updates" (
"id" TEXT NOT NULL,
"image_id" TEXT NOT NULL,
"current_tag" TEXT NOT NULL,
"available_tag" TEXT NOT NULL,
"is_security_update" BOOLEAN NOT NULL DEFAULT false,
"severity" TEXT,
"changelog_url" TEXT,
"created_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" TIMESTAMP(3) NOT NULL,
CONSTRAINT "docker_image_updates_pkey" PRIMARY KEY ("id")
);
-- CreateIndex
CREATE INDEX "docker_images_repository_idx" ON "docker_images"("repository");
-- CreateIndex
CREATE INDEX "docker_images_source_idx" ON "docker_images"("source");
-- CreateIndex
CREATE INDEX "docker_images_repository_tag_idx" ON "docker_images"("repository", "tag");
-- CreateIndex
CREATE UNIQUE INDEX "docker_images_repository_tag_image_id_key" ON "docker_images"("repository", "tag", "image_id");
-- CreateIndex
CREATE INDEX "docker_containers_host_id_idx" ON "docker_containers"("host_id");
-- CreateIndex
CREATE INDEX "docker_containers_image_id_idx" ON "docker_containers"("image_id");
-- CreateIndex
CREATE INDEX "docker_containers_status_idx" ON "docker_containers"("status");
-- CreateIndex
CREATE INDEX "docker_containers_name_idx" ON "docker_containers"("name");
-- CreateIndex
CREATE UNIQUE INDEX "docker_containers_host_id_container_id_key" ON "docker_containers"("host_id", "container_id");
-- CreateIndex
CREATE INDEX "docker_image_updates_image_id_idx" ON "docker_image_updates"("image_id");
-- CreateIndex
CREATE INDEX "docker_image_updates_is_security_update_idx" ON "docker_image_updates"("is_security_update");
-- CreateIndex
CREATE UNIQUE INDEX "docker_image_updates_image_id_available_tag_key" ON "docker_image_updates"("image_id", "available_tag");
-- AddForeignKey
ALTER TABLE "docker_containers" ADD CONSTRAINT "docker_containers_image_id_fkey" FOREIGN KEY ("image_id") REFERENCES "docker_images"("id") ON DELETE SET NULL ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "docker_image_updates" ADD CONSTRAINT "docker_image_updates_image_id_fkey" FOREIGN KEY ("image_id") REFERENCES "docker_images"("id") ON DELETE CASCADE ON UPDATE CASCADE;

View File

@@ -0,0 +1,40 @@
-- CreateTable
CREATE TABLE "job_history" (
"id" TEXT NOT NULL,
"job_id" TEXT NOT NULL,
"queue_name" TEXT NOT NULL,
"job_name" TEXT NOT NULL,
"host_id" TEXT,
"api_id" TEXT,
"status" TEXT NOT NULL,
"attempt_number" INTEGER NOT NULL DEFAULT 1,
"error_message" TEXT,
"output" JSONB,
"created_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" TIMESTAMP(3) NOT NULL,
"completed_at" TIMESTAMP(3),
CONSTRAINT "job_history_pkey" PRIMARY KEY ("id")
);
-- CreateIndex
CREATE INDEX "job_history_job_id_idx" ON "job_history"("job_id");
-- CreateIndex
CREATE INDEX "job_history_queue_name_idx" ON "job_history"("queue_name");
-- CreateIndex
CREATE INDEX "job_history_host_id_idx" ON "job_history"("host_id");
-- CreateIndex
CREATE INDEX "job_history_api_id_idx" ON "job_history"("api_id");
-- CreateIndex
CREATE INDEX "job_history_status_idx" ON "job_history"("status");
-- CreateIndex
CREATE INDEX "job_history_created_at_idx" ON "job_history"("created_at");
-- AddForeignKey
ALTER TABLE "job_history" ADD CONSTRAINT "job_history_host_id_fkey" FOREIGN KEY ("host_id") REFERENCES "hosts"("id") ON DELETE SET NULL ON UPDATE CASCADE;

View File

@@ -0,0 +1,43 @@
-- CreateTable
CREATE TABLE "host_group_memberships" (
"id" TEXT NOT NULL,
"host_id" TEXT NOT NULL,
"host_group_id" TEXT NOT NULL,
"created_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "host_group_memberships_pkey" PRIMARY KEY ("id")
);
-- CreateIndex
CREATE UNIQUE INDEX "host_group_memberships_host_id_host_group_id_key" ON "host_group_memberships"("host_id", "host_group_id");
-- CreateIndex
CREATE INDEX "host_group_memberships_host_id_idx" ON "host_group_memberships"("host_id");
-- CreateIndex
CREATE INDEX "host_group_memberships_host_group_id_idx" ON "host_group_memberships"("host_group_id");
-- Migrate existing data from hosts.host_group_id to host_group_memberships
INSERT INTO "host_group_memberships" ("id", "host_id", "host_group_id", "created_at")
SELECT
gen_random_uuid()::text as "id",
"id" as "host_id",
"host_group_id" as "host_group_id",
CURRENT_TIMESTAMP as "created_at"
FROM "hosts"
WHERE "host_group_id" IS NOT NULL;
-- AddForeignKey
ALTER TABLE "host_group_memberships" ADD CONSTRAINT "host_group_memberships_host_id_fkey" FOREIGN KEY ("host_id") REFERENCES "hosts"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "host_group_memberships" ADD CONSTRAINT "host_group_memberships_host_group_id_fkey" FOREIGN KEY ("host_group_id") REFERENCES "host_groups"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- DropForeignKey
ALTER TABLE "hosts" DROP CONSTRAINT IF EXISTS "hosts_host_group_id_fkey";
-- DropIndex
DROP INDEX IF EXISTS "hosts_host_group_id_idx";
-- AlterTable
ALTER TABLE "hosts" DROP COLUMN "host_group_id";

View File

@@ -1,6 +1,3 @@
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
generator client {
provider = "prisma-client-js"
}
@@ -10,239 +7,357 @@ datasource db {
url = env("DATABASE_URL")
}
model User {
id String @id @default(cuid())
username String @unique
email String @unique
passwordHash String @map("password_hash")
role String @default("admin") // admin, user
isActive Boolean @default(true) @map("is_active")
lastLogin DateTime? @map("last_login")
createdAt DateTime @map("created_at") @default(now())
updatedAt DateTime @map("updated_at") @updatedAt
// Two-Factor Authentication
tfaEnabled Boolean @default(false) @map("tfa_enabled")
tfaSecret String? @map("tfa_secret")
tfaBackupCodes String? @map("tfa_backup_codes") // JSON array of backup codes
// Relationships
dashboardPreferences DashboardPreferences[]
@@map("users")
model dashboard_preferences {
id String @id
user_id String
card_id String
enabled Boolean @default(true)
order Int @default(0)
created_at DateTime @default(now())
updated_at DateTime
users users @relation(fields: [user_id], references: [id], onDelete: Cascade)
@@unique([user_id, card_id])
}
model RolePermissions {
id String @id @default(cuid())
role String @unique // admin, user, custom roles
canViewDashboard Boolean @default(true) @map("can_view_dashboard")
canViewHosts Boolean @default(true) @map("can_view_hosts")
canManageHosts Boolean @default(false) @map("can_manage_hosts")
canViewPackages Boolean @default(true) @map("can_view_packages")
canManagePackages Boolean @default(false) @map("can_manage_packages")
canViewUsers Boolean @default(false) @map("can_view_users")
canManageUsers Boolean @default(false) @map("can_manage_users")
canViewReports Boolean @default(true) @map("can_view_reports")
canExportData Boolean @default(false) @map("can_export_data")
canManageSettings Boolean @default(false) @map("can_manage_settings")
createdAt DateTime @map("created_at") @default(now())
updatedAt DateTime @map("updated_at") @updatedAt
@@map("role_permissions")
model host_groups {
id String @id
name String @unique
description String?
color String? @default("#3B82F6")
created_at DateTime @default(now())
updated_at DateTime
host_group_memberships host_group_memberships[]
auto_enrollment_tokens auto_enrollment_tokens[]
}
model HostGroup {
id String @id @default(cuid())
name String @unique
description String?
color String? @default("#3B82F6") // Hex color for UI display
createdAt DateTime @map("created_at") @default(now())
updatedAt DateTime @map("updated_at") @updatedAt
// Relationships
hosts Host[]
@@map("host_groups")
model host_group_memberships {
id String @id
host_id String
host_group_id String
created_at DateTime @default(now())
hosts hosts @relation(fields: [host_id], references: [id], onDelete: Cascade)
host_groups host_groups @relation(fields: [host_group_id], references: [id], onDelete: Cascade)
@@unique([host_id, host_group_id])
@@index([host_id])
@@index([host_group_id])
}
model Host {
id String @id @default(cuid())
friendlyName String @unique @map("friendly_name")
hostname String? // Actual system hostname from agent
ip String?
osType String @map("os_type")
osVersion String @map("os_version")
architecture String?
lastUpdate DateTime @map("last_update") @default(now())
status String @default("active") // active, inactive, error
apiId String @unique @map("api_id") // New API ID for authentication
apiKey String @unique @map("api_key") // New API Key for authentication
hostGroupId String? @map("host_group_id") // Optional group association
agentVersion String? @map("agent_version") // Agent script version
autoUpdate Boolean @map("auto_update") @default(true) // Enable auto-update for this host
// Hardware Information
cpuModel String? @map("cpu_model") // CPU model name
cpuCores Int? @map("cpu_cores") // Number of CPU cores
ramInstalled Int? @map("ram_installed") // RAM in GB
swapSize Int? @map("swap_size") // Swap size in GB
diskDetails Json? @map("disk_details") // Array of disk objects
// Network Information
gatewayIp String? @map("gateway_ip") // Gateway IP address
dnsServers Json? @map("dns_servers") // Array of DNS servers
networkInterfaces Json? @map("network_interfaces") // Array of network interface objects
// System Information
kernelVersion String? @map("kernel_version") // Kernel version
selinuxStatus String? @map("selinux_status") // SELinux status (enabled/disabled/permissive)
systemUptime String? @map("system_uptime") // System uptime
loadAverage Json? @map("load_average") // Load average (1min, 5min, 15min)
createdAt DateTime @map("created_at") @default(now())
updatedAt DateTime @map("updated_at") @updatedAt
// Relationships
hostPackages HostPackage[]
updateHistory UpdateHistory[]
hostRepositories HostRepository[]
hostGroup HostGroup? @relation(fields: [hostGroupId], references: [id], onDelete: SetNull)
@@map("hosts")
model host_packages {
id String @id
host_id String
package_id String
current_version String
available_version String?
needs_update Boolean @default(false)
is_security_update Boolean @default(false)
last_checked DateTime @default(now())
hosts hosts @relation(fields: [host_id], references: [id], onDelete: Cascade)
packages packages @relation(fields: [package_id], references: [id], onDelete: Cascade)
@@unique([host_id, package_id])
@@index([host_id])
@@index([package_id])
@@index([needs_update])
@@index([is_security_update])
@@index([host_id, needs_update])
@@index([host_id, needs_update, is_security_update])
@@index([package_id, needs_update])
@@index([last_checked])
}
model Package {
id String @id @default(cuid())
name String @unique
description String?
category String? // system, security, development, etc.
latestVersion String? @map("latest_version")
createdAt DateTime @map("created_at") @default(now())
updatedAt DateTime @map("updated_at") @updatedAt
// Relationships
hostPackages HostPackage[]
@@map("packages")
model host_repositories {
id String @id
host_id String
repository_id String
is_enabled Boolean @default(true)
last_checked DateTime @default(now())
hosts hosts @relation(fields: [host_id], references: [id], onDelete: Cascade)
repositories repositories @relation(fields: [repository_id], references: [id], onDelete: Cascade)
@@unique([host_id, repository_id])
}
model HostPackage {
id String @id @default(cuid())
hostId String @map("host_id")
packageId String @map("package_id")
currentVersion String @map("current_version")
availableVersion String? @map("available_version")
needsUpdate Boolean @map("needs_update") @default(false)
isSecurityUpdate Boolean @map("is_security_update") @default(false)
lastChecked DateTime @map("last_checked") @default(now())
// Relationships
host Host @relation(fields: [hostId], references: [id], onDelete: Cascade)
package Package @relation(fields: [packageId], references: [id], onDelete: Cascade)
@@unique([hostId, packageId])
@@map("host_packages")
model hosts {
id String @id
machine_id String @unique
friendly_name String
ip String?
os_type String
os_version String
architecture String?
last_update DateTime @default(now())
status String @default("active")
created_at DateTime @default(now())
updated_at DateTime
api_id String @unique
api_key String @unique
agent_version String?
auto_update Boolean @default(true)
cpu_cores Int?
cpu_model String?
disk_details Json?
dns_servers Json?
gateway_ip String?
hostname String?
kernel_version String?
load_average Json?
network_interfaces Json?
ram_installed Int?
selinux_status String?
swap_size Int?
system_uptime String?
notes String?
host_packages host_packages[]
host_repositories host_repositories[]
host_group_memberships host_group_memberships[]
update_history update_history[]
job_history job_history[]
@@index([machine_id])
@@index([friendly_name])
@@index([hostname])
}
model UpdateHistory {
id String @id @default(cuid())
hostId String @map("host_id")
packagesCount Int @map("packages_count")
securityCount Int @map("security_count")
timestamp DateTime @default(now())
status String @default("success") // success, error
errorMessage String? @map("error_message")
// Relationships
host Host @relation(fields: [hostId], references: [id], onDelete: Cascade)
@@map("update_history")
model packages {
id String @id
name String @unique
description String?
category String?
latest_version String?
created_at DateTime @default(now())
updated_at DateTime
host_packages host_packages[]
@@index([name])
@@index([category])
}
model Repository {
id String @id @default(cuid())
name String // Repository name (e.g., "focal", "focal-updates")
url String // Repository URL
distribution String // Distribution (e.g., "focal", "jammy")
components String // Components (e.g., "main restricted universe multiverse")
repoType String @map("repo_type") // "deb" or "deb-src"
isActive Boolean @map("is_active") @default(true)
isSecure Boolean @map("is_secure") @default(true) // HTTPS vs HTTP
priority Int? // Repository priority
description String? // Optional description
createdAt DateTime @map("created_at") @default(now())
updatedAt DateTime @map("updated_at") @updatedAt
// Relationships
hostRepositories HostRepository[]
model repositories {
id String @id
name String
url String
distribution String
components String
repo_type String
is_active Boolean @default(true)
is_secure Boolean @default(true)
priority Int?
description String?
created_at DateTime @default(now())
updated_at DateTime
host_repositories host_repositories[]
@@unique([url, distribution, components])
@@map("repositories")
}
model HostRepository {
id String @id @default(cuid())
hostId String @map("host_id")
repositoryId String @map("repository_id")
isEnabled Boolean @map("is_enabled") @default(true)
lastChecked DateTime @map("last_checked") @default(now())
// Relationships
host Host @relation(fields: [hostId], references: [id], onDelete: Cascade)
repository Repository @relation(fields: [repositoryId], references: [id], onDelete: Cascade)
@@unique([hostId, repositoryId])
@@map("host_repositories")
model role_permissions {
id String @id
role String @unique
can_view_dashboard Boolean @default(true)
can_view_hosts Boolean @default(true)
can_manage_hosts Boolean @default(false)
can_view_packages Boolean @default(true)
can_manage_packages Boolean @default(false)
can_view_users Boolean @default(false)
can_manage_users Boolean @default(false)
can_view_reports Boolean @default(true)
can_export_data Boolean @default(false)
can_manage_settings Boolean @default(false)
created_at DateTime @default(now())
updated_at DateTime
}
model Settings {
id String @id @default(cuid())
serverUrl String @map("server_url") @default("http://localhost:3001")
serverProtocol String @map("server_protocol") @default("http") // http, https
serverHost String @map("server_host") @default("localhost")
serverPort Int @map("server_port") @default(3001)
frontendUrl String @map("frontend_url") @default("http://localhost:3000")
updateInterval Int @map("update_interval") @default(60) // Update interval in minutes
autoUpdate Boolean @map("auto_update") @default(false) // Enable automatic agent updates
githubRepoUrl String @map("github_repo_url") @default("git@github.com:9technologygroup/patchmon.net.git") // GitHub repository URL for version checking
repositoryType String @map("repository_type") @default("public") // "public" or "private"
sshKeyPath String? @map("ssh_key_path") // Optional SSH key path for deploy key authentication
lastUpdateCheck DateTime? @map("last_update_check") // When the system last checked for updates
updateAvailable Boolean @map("update_available") @default(false) // Whether an update is available
latestVersion String? @map("latest_version") // Latest available version
createdAt DateTime @map("created_at") @default(now())
updatedAt DateTime @map("updated_at") @updatedAt
@@map("settings")
model settings {
id String @id
server_url String @default("http://localhost:3001")
server_protocol String @default("http")
server_host String @default("localhost")
server_port Int @default(3001)
created_at DateTime @default(now())
updated_at DateTime
update_interval Int @default(60)
auto_update Boolean @default(false)
github_repo_url String @default("https://github.com/PatchMon/PatchMon.git")
ssh_key_path String?
repository_type String @default("public")
last_update_check DateTime?
latest_version String?
update_available Boolean @default(false)
signup_enabled Boolean @default(false)
default_user_role String @default("user")
ignore_ssl_self_signed Boolean @default(false)
logo_dark String? @default("/assets/logo_dark.png")
logo_light String? @default("/assets/logo_light.png")
favicon String? @default("/assets/logo_square.svg")
}
model DashboardPreferences {
id String @id @default(cuid())
userId String @map("user_id")
cardId String @map("card_id") // e.g., "totalHosts", "securityUpdates", etc.
enabled Boolean @default(true)
order Int @default(0)
createdAt DateTime @map("created_at") @default(now())
updatedAt DateTime @map("updated_at") @updatedAt
// Relationships
user User @relation(fields: [userId], references: [id], onDelete: Cascade)
@@unique([userId, cardId])
@@map("dashboard_preferences")
model update_history {
id String @id
host_id String
packages_count Int
security_count Int
total_packages Int?
payload_size_kb Float?
execution_time Float?
timestamp DateTime @default(now())
status String @default("success")
error_message String?
hosts hosts @relation(fields: [host_id], references: [id], onDelete: Cascade)
}
model AgentVersion {
id String @id @default(cuid())
version String @unique // e.g., "1.0.0", "1.1.0"
isCurrent Boolean @default(false) @map("is_current") // Only one version can be current
releaseNotes String? @map("release_notes")
downloadUrl String? @map("download_url") // URL to download the agent script
minServerVersion String? @map("min_server_version") // Minimum server version required
scriptContent String? @map("script_content") // The actual agent script content
isDefault Boolean @default(false) @map("is_default") // Default version for new installations
createdAt DateTime @map("created_at") @default(now())
updatedAt DateTime @map("updated_at") @updatedAt
@@map("agent_versions")
}
model users {
id String @id
username String @unique
email String @unique
password_hash String
role String @default("admin")
is_active Boolean @default(true)
last_login DateTime?
created_at DateTime @default(now())
updated_at DateTime
tfa_backup_codes String?
tfa_enabled Boolean @default(false)
tfa_secret String?
first_name String?
last_name String?
dashboard_preferences dashboard_preferences[]
user_sessions user_sessions[]
auto_enrollment_tokens auto_enrollment_tokens[]
}
model user_sessions {
id String @id
user_id String
refresh_token String @unique
access_token_hash String?
ip_address String?
user_agent String?
device_fingerprint String?
last_activity DateTime @default(now())
expires_at DateTime
created_at DateTime @default(now())
is_revoked Boolean @default(false)
tfa_remember_me Boolean @default(false)
tfa_bypass_until DateTime?
login_count Int @default(1)
last_login_ip String?
users users @relation(fields: [user_id], references: [id], onDelete: Cascade)
@@index([user_id])
@@index([refresh_token])
@@index([expires_at])
@@index([tfa_bypass_until])
@@index([device_fingerprint])
}
model auto_enrollment_tokens {
id String @id
token_name String
token_key String @unique
token_secret String
created_by_user_id String?
is_active Boolean @default(true)
allowed_ip_ranges String[]
max_hosts_per_day Int @default(100)
hosts_created_today Int @default(0)
last_reset_date DateTime @default(now()) @db.Date
default_host_group_id String?
created_at DateTime @default(now())
updated_at DateTime
last_used_at DateTime?
expires_at DateTime?
metadata Json?
users users? @relation(fields: [created_by_user_id], references: [id], onDelete: SetNull)
host_groups host_groups? @relation(fields: [default_host_group_id], references: [id], onDelete: SetNull)
@@index([token_key])
@@index([is_active])
}
model docker_containers {
id String @id
host_id String
container_id String
name String
image_id String?
image_name String
image_tag String @default("latest")
status String
state String?
ports Json?
created_at DateTime
started_at DateTime?
updated_at DateTime
last_checked DateTime @default(now())
docker_images docker_images? @relation(fields: [image_id], references: [id], onDelete: SetNull)
@@unique([host_id, container_id])
@@index([host_id])
@@index([image_id])
@@index([status])
@@index([name])
}
model docker_images {
id String @id
repository String
tag String @default("latest")
image_id String
digest String?
size_bytes BigInt?
source String @default("docker-hub")
created_at DateTime
last_pulled DateTime?
last_checked DateTime @default(now())
updated_at DateTime
docker_containers docker_containers[]
docker_image_updates docker_image_updates[]
@@unique([repository, tag, image_id])
@@index([repository])
@@index([source])
@@index([repository, tag])
}
model docker_image_updates {
id String @id
image_id String
current_tag String
available_tag String
is_security_update Boolean @default(false)
severity String?
changelog_url String?
created_at DateTime @default(now())
updated_at DateTime
docker_images docker_images @relation(fields: [image_id], references: [id], onDelete: Cascade)
@@unique([image_id, available_tag])
@@index([image_id])
@@index([is_security_update])
}
model job_history {
id String @id
job_id String
queue_name String
job_name String
host_id String?
api_id String?
status String
attempt_number Int @default(1)
error_message String?
output Json?
created_at DateTime @default(now())
updated_at DateTime
completed_at DateTime?
hosts hosts? @relation(fields: [host_id], references: [id], onDelete: SetNull)
@@index([job_id])
@@index([queue_name])
@@index([host_id])
@@index([api_id])
@@index([status])
@@index([created_at])
}

View File

@@ -1,80 +0,0 @@
/**
* Database configuration for multiple instances
* Optimizes connection pooling to prevent "too many connections" errors
*/
const { PrismaClient } = require('@prisma/client');
// Parse DATABASE_URL and add connection pooling parameters
function getOptimizedDatabaseUrl() {
const originalUrl = process.env.DATABASE_URL;
if (!originalUrl) {
throw new Error('DATABASE_URL environment variable is required');
}
// Parse the URL
const url = new URL(originalUrl);
// Add connection pooling parameters for multiple instances
url.searchParams.set('connection_limit', '5'); // Reduced from default 10
url.searchParams.set('pool_timeout', '10'); // 10 seconds
url.searchParams.set('connect_timeout', '10'); // 10 seconds
url.searchParams.set('idle_timeout', '300'); // 5 minutes
url.searchParams.set('max_lifetime', '1800'); // 30 minutes
return url.toString();
}
// Create optimized Prisma client
function createPrismaClient() {
const optimizedUrl = getOptimizedDatabaseUrl();
return new PrismaClient({
datasources: {
db: {
url: optimizedUrl
}
},
log: process.env.NODE_ENV === 'development'
? ['query', 'info', 'warn', 'error']
: ['warn', 'error'],
errorFormat: 'pretty'
});
}
// Connection health check
async function checkDatabaseConnection(prisma) {
try {
await prisma.$queryRaw`SELECT 1`;
return true;
} catch (error) {
console.error('Database connection failed:', error.message);
return false;
}
}
// Graceful disconnect with retry
async function disconnectPrisma(prisma, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
try {
await prisma.$disconnect();
console.log('Database disconnected successfully');
return;
} catch (error) {
console.error(`Disconnect attempt ${i + 1} failed:`, error.message);
if (i === maxRetries - 1) {
console.error('Failed to disconnect from database after all retries');
} else {
await new Promise(resolve => setTimeout(resolve, 1000)); // Wait 1 second
}
}
}
}
module.exports = {
createPrismaClient,
checkDatabaseConnection,
disconnectPrisma,
getOptimizedDatabaseUrl
};

View File

@@ -0,0 +1,149 @@
/**
* Centralized Prisma Client Singleton
* Prevents multiple Prisma clients from creating connection leaks
*/
const { PrismaClient } = require("@prisma/client");
// Parse DATABASE_URL and add connection pooling parameters
function getOptimizedDatabaseUrl() {
const originalUrl = process.env.DATABASE_URL;
if (!originalUrl) {
throw new Error("DATABASE_URL environment variable is required");
}
// Parse the URL
const url = new URL(originalUrl);
// Add connection pooling parameters for multiple instances
url.searchParams.set("connection_limit", "5"); // Reduced from default 10
url.searchParams.set("pool_timeout", "10"); // 10 seconds
url.searchParams.set("connect_timeout", "10"); // 10 seconds
url.searchParams.set("idle_timeout", "300"); // 5 minutes
url.searchParams.set("max_lifetime", "1800"); // 30 minutes
return url.toString();
}
// Singleton Prisma client instance
let prismaInstance = null;
function getPrismaClient() {
if (!prismaInstance) {
const optimizedUrl = getOptimizedDatabaseUrl();
prismaInstance = new PrismaClient({
datasources: {
db: {
url: optimizedUrl,
},
},
log:
process.env.PRISMA_LOG_QUERIES === "true"
? ["query", "info", "warn", "error"]
: ["warn", "error"],
errorFormat: "pretty",
});
// Handle graceful shutdown
process.on("beforeExit", async () => {
await prismaInstance.$disconnect();
});
process.on("SIGINT", async () => {
await prismaInstance.$disconnect();
process.exit(0);
});
process.on("SIGTERM", async () => {
await prismaInstance.$disconnect();
process.exit(0);
});
}
return prismaInstance;
}
// Connection health check
async function checkDatabaseConnection(prisma) {
try {
await prisma.$queryRaw`SELECT 1`;
return true;
} catch (error) {
console.error("Database connection check failed:", error.message);
return false;
}
}
// Wait for database to be available with retry logic
async function waitForDatabase(prisma, options = {}) {
const maxAttempts =
options.maxAttempts ||
parseInt(process.env.PM_DB_CONN_MAX_ATTEMPTS, 10) ||
30;
const waitInterval =
options.waitInterval ||
parseInt(process.env.PM_DB_CONN_WAIT_INTERVAL, 10) ||
2;
if (process.env.ENABLE_LOGGING === "true") {
console.log(
`Waiting for database connection (max ${maxAttempts} attempts, ${waitInterval}s interval)...`,
);
}
for (let attempt = 1; attempt <= maxAttempts; attempt++) {
try {
const isConnected = await checkDatabaseConnection(prisma);
if (isConnected) {
if (process.env.ENABLE_LOGGING === "true") {
console.log(
`Database connected successfully after ${attempt} attempt(s)`,
);
}
return true;
}
} catch {
// checkDatabaseConnection already logs the error
}
if (attempt < maxAttempts) {
if (process.env.ENABLE_LOGGING === "true") {
console.log(
`⏳ Database not ready (attempt ${attempt}/${maxAttempts}), retrying in ${waitInterval}s...`,
);
}
await new Promise((resolve) => setTimeout(resolve, waitInterval * 1000));
}
}
throw new Error(
`❌ Database failed to become available after ${maxAttempts} attempts`,
);
}
// Graceful disconnect with retry
async function disconnectPrisma(prisma, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
try {
await prisma.$disconnect();
console.log("Database disconnected successfully");
return;
} catch (error) {
console.error(`Disconnect attempt ${i + 1} failed:`, error.message);
if (i === maxRetries - 1) {
console.error("Failed to disconnect from database after all retries");
} else {
await new Promise((resolve) => setTimeout(resolve, 1000)); // Wait 1 second
}
}
}
}
module.exports = {
getPrismaClient,
checkDatabaseConnection,
waitForDatabase,
disconnectPrisma,
};
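A sketch of how a server entrypoint might consume this module (the require path and startup sequence are illustrative, not taken from the repository):

const { getPrismaClient, waitForDatabase, disconnectPrisma } = require("./config/prisma");

async function main() {
  const prisma = getPrismaClient();
  // Block startup until SELECT 1 succeeds, with bounded retries
  await waitForDatabase(prisma, { maxAttempts: 10, waitInterval: 2 });
  // ... start the HTTP server, queue workers, etc.
}

main().catch(async (error) => {
  console.error("Startup failed:", error.message);
  await disconnectPrisma(getPrismaClient());
  process.exit(1);
});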

View File

@@ -1,98 +1,151 @@
const jwt = require('jsonwebtoken');
const { PrismaClient } = require('@prisma/client');
const jwt = require("jsonwebtoken");
const { getPrismaClient } = require("../config/prisma");
const {
validate_session,
update_session_activity,
is_tfa_bypassed,
} = require("../utils/session_manager");
const prisma = new PrismaClient();
const prisma = getPrismaClient();
// Middleware to verify JWT token
// Middleware to verify JWT token with session validation
const authenticateToken = async (req, res, next) => {
try {
const authHeader = req.headers['authorization'];
const token = authHeader && authHeader.split(' ')[1]; // Bearer TOKEN
try {
const authHeader = req.headers.authorization;
const token = authHeader?.split(" ")[1]; // Bearer TOKEN
if (!token) {
return res.status(401).json({ error: 'Access token required' });
}
if (!token) {
return res.status(401).json({ error: "Access token required" });
}
// Verify token
const decoded = jwt.verify(token, process.env.JWT_SECRET || 'your-secret-key');
// Get user from database
const user = await prisma.user.findUnique({
where: { id: decoded.userId },
select: {
id: true,
username: true,
email: true,
role: true,
isActive: true,
lastLogin: true
}
});
// Verify token
if (!process.env.JWT_SECRET) {
throw new Error("JWT_SECRET environment variable is required");
}
const decoded = jwt.verify(token, process.env.JWT_SECRET);
if (!user || !user.isActive) {
return res.status(401).json({ error: 'Invalid or inactive user' });
}
// Validate session and check inactivity timeout
const validation = await validate_session(decoded.sessionId, token);
// Update last login
await prisma.user.update({
where: { id: user.id },
data: { lastLogin: new Date() }
});
if (!validation.valid) {
const error_messages = {
"Session not found": "Session not found",
"Session revoked": "Session has been revoked",
"Session expired": "Session has expired",
"Session inactive":
validation.message || "Session timed out due to inactivity",
"Token mismatch": "Invalid token",
"User inactive": "User account is inactive",
};
req.user = user;
next();
} catch (error) {
if (error.name === 'JsonWebTokenError') {
return res.status(401).json({ error: 'Invalid token' });
}
if (error.name === 'TokenExpiredError') {
return res.status(401).json({ error: 'Token expired' });
}
console.error('Auth middleware error:', error);
return res.status(500).json({ error: 'Authentication failed' });
}
return res.status(401).json({
error: error_messages[validation.reason] || "Authentication failed",
reason: validation.reason,
});
}
// Update session activity timestamp
await update_session_activity(decoded.sessionId);
// Check if TFA is bypassed for this session
const tfa_bypassed = await is_tfa_bypassed(decoded.sessionId);
// Update last login (only on successful authentication)
await prisma.users.update({
where: { id: validation.user.id },
data: {
last_login: new Date(),
updated_at: new Date(),
},
});
req.user = validation.user;
req.session_id = decoded.sessionId;
req.tfa_bypassed = tfa_bypassed;
next();
} catch (error) {
if (error.name === "JsonWebTokenError") {
return res.status(401).json({ error: "Invalid token" });
}
if (error.name === "TokenExpiredError") {
return res.status(401).json({ error: "Token expired" });
}
console.error("Auth middleware error:", error);
return res.status(500).json({ error: "Authentication failed" });
}
};
// Middleware to check admin role
const requireAdmin = (req, res, next) => {
if (req.user.role !== 'admin') {
return res.status(403).json({ error: 'Admin access required' });
}
next();
if (req.user.role !== "admin") {
return res.status(403).json({ error: "Admin access required" });
}
next();
};
// Middleware to check if user is authenticated (optional)
const optionalAuth = async (req, res, next) => {
try {
const authHeader = req.headers['authorization'];
const token = authHeader && authHeader.split(' ')[1];
const optionalAuth = async (req, _res, next) => {
try {
const authHeader = req.headers.authorization;
const token = authHeader?.split(" ")[1];
if (token) {
const decoded = jwt.verify(token, process.env.JWT_SECRET || 'your-secret-key');
const user = await prisma.user.findUnique({
where: { id: decoded.userId },
select: {
id: true,
username: true,
email: true,
role: true,
isActive: true
}
});
if (token) {
if (!process.env.JWT_SECRET) {
throw new Error("JWT_SECRET environment variable is required");
}
const decoded = jwt.verify(token, process.env.JWT_SECRET);
const user = await prisma.users.findUnique({
where: { id: decoded.userId },
select: {
id: true,
username: true,
email: true,
role: true,
is_active: true,
last_login: true,
created_at: true,
updated_at: true,
},
});
if (user && user.isActive) {
req.user = user;
}
}
next();
} catch (error) {
// Continue without authentication for optional auth
next();
}
if (user?.is_active) {
req.user = user;
}
}
next();
} catch {
// Continue without authentication for optional auth
next();
}
};
// Middleware to check if TFA is required for sensitive operations
const requireTfaIfEnabled = async (req, res, next) => {
try {
// Check if user has TFA enabled
const user = await prisma.users.findUnique({
where: { id: req.user.id },
select: { tfa_enabled: true },
});
// If TFA is enabled and not bypassed, require TFA verification
if (user?.tfa_enabled && !req.tfa_bypassed) {
return res.status(403).json({
error: "Two-factor authentication required for this operation",
requires_tfa: true,
});
}
next();
} catch (error) {
console.error("TFA requirement check error:", error);
return res.status(500).json({ error: "Authentication check failed" });
}
};
module.exports = {
authenticateToken,
requireAdmin,
optionalAuth
authenticateToken,
requireAdmin,
optionalAuth,
requireTfaIfEnabled,
};
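Illustrative wiring of the exported middleware (route paths are hypothetical):

const express = require("express");
const {
  authenticateToken,
  requireAdmin,
  requireTfaIfEnabled,
} = require("./middleware/auth");

const app = express();

// Any authenticated user; authenticateToken sets req.user, req.session_id, req.tfa_bypassed
app.get("/api/v1/profile", authenticateToken, (req, res) => {
  res.json({ username: req.user.username, tfa_bypassed: req.tfa_bypassed });
});

// Admin-only, and sensitive enough to demand TFA when the user has it enabled
app.post(
  "/api/v1/admin/rotate-keys",
  authenticateToken,
  requireAdmin,
  requireTfaIfEnabled,
  (_req, res) => res.json({ ok: true }),
);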

View File

@@ -1,59 +1,61 @@
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();
const { getPrismaClient } = require("../config/prisma");
const prisma = getPrismaClient();
// Permission middleware factory
const requirePermission = (permission) => {
return async (req, res, next) => {
try {
// Get user's role permissions
const rolePermissions = await prisma.rolePermissions.findUnique({
where: { role: req.user.role }
});
return async (req, res, next) => {
try {
// Get user's role permissions
const rolePermissions = await prisma.role_permissions.findUnique({
where: { role: req.user.role },
});
// If no specific permissions found, default to admin permissions (for backward compatibility)
if (!rolePermissions) {
console.warn(`No permissions found for role: ${req.user.role}, defaulting to admin access`);
return next();
}
// If no specific permissions found, default to admin permissions (for backward compatibility)
if (!rolePermissions) {
console.warn(
`No permissions found for role: ${req.user.role}, defaulting to admin access`,
);
return next();
}
// Check if user has the required permission
if (!rolePermissions[permission]) {
return res.status(403).json({
error: 'Insufficient permissions',
message: `You don't have permission to ${permission.replace('can', '').toLowerCase()}`
});
}
// Check if user has the required permission
if (!rolePermissions[permission]) {
return res.status(403).json({
error: "Insufficient permissions",
message: `You don't have permission to ${permission.replace("can_", "").replace("_", " ")}`,
});
}
next();
} catch (error) {
console.error('Permission check error:', error);
res.status(500).json({ error: 'Permission check failed' });
}
};
next();
} catch (error) {
console.error("Permission check error:", error);
res.status(500).json({ error: "Permission check failed" });
}
};
};
// Specific permission middlewares
const requireViewDashboard = requirePermission('canViewDashboard');
const requireViewHosts = requirePermission('canViewHosts');
const requireManageHosts = requirePermission('canManageHosts');
const requireViewPackages = requirePermission('canViewPackages');
const requireManagePackages = requirePermission('canManagePackages');
const requireViewUsers = requirePermission('canViewUsers');
const requireManageUsers = requirePermission('canManageUsers');
const requireViewReports = requirePermission('canViewReports');
const requireExportData = requirePermission('canExportData');
const requireManageSettings = requirePermission('canManageSettings');
// Specific permission middlewares - using snake_case field names
const requireViewDashboard = requirePermission("can_view_dashboard");
const requireViewHosts = requirePermission("can_view_hosts");
const requireManageHosts = requirePermission("can_manage_hosts");
const requireViewPackages = requirePermission("can_view_packages");
const requireManagePackages = requirePermission("can_manage_packages");
const requireViewUsers = requirePermission("can_view_users");
const requireManageUsers = requirePermission("can_manage_users");
const requireViewReports = requirePermission("can_view_reports");
const requireExportData = requirePermission("can_export_data");
const requireManageSettings = requirePermission("can_manage_settings");
module.exports = {
requirePermission,
requireViewDashboard,
requireViewHosts,
requireManageHosts,
requireViewPackages,
requireManagePackages,
requireViewUsers,
requireManageUsers,
requireViewReports,
requireExportData,
requireManageSettings
requirePermission,
requireViewDashboard,
requireViewHosts,
requireManageHosts,
requireViewPackages,
requireManagePackages,
requireViewUsers,
requireManageUsers,
requireViewReports,
requireExportData,
requireManageSettings,
};
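Usage mirrors the auth middleware; a hypothetical host-deletion route gated on the manage-hosts permission:

const router = require("express").Router();
const { authenticateToken } = require("./middleware/auth");
const { requireManageHosts } = require("./middleware/permissions");

// Only roles whose role_permissions row has can_manage_hosts = true get through
router.delete("/hosts/:id", authenticateToken, requireManageHosts, (req, res) => {
  // ... deletion logic elided
  res.json({ deleted: req.params.id });
});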

View File

@@ -0,0 +1,419 @@
const express = require("express");
const router = express.Router();
const agentVersionService = require("../services/agentVersionService");
const { authenticateToken } = require("../middleware/auth");
const { requirePermission } = require("../middleware/permissions");
// Test GitHub API connectivity
router.get(
"/test-github",
authenticateToken,
requirePermission("can_manage_settings"),
async (_req, res) => {
try {
const axios = require("axios");
const response = await axios.get(
"https://api.github.com/repos/PatchMon/PatchMon-agent/releases",
{
timeout: 10000,
headers: {
"User-Agent": "PatchMon-Server/1.0",
Accept: "application/vnd.github.v3+json",
},
},
);
res.json({
success: true,
status: response.status,
releasesFound: response.data.length,
latestRelease: response.data[0]?.tag_name || "No releases",
rateLimitRemaining: response.headers["x-ratelimit-remaining"],
rateLimitLimit: response.headers["x-ratelimit-limit"],
});
} catch (error) {
console.error("❌ GitHub API test failed:", error.message);
res.status(500).json({
success: false,
error: error.message,
status: error.response?.status,
statusText: error.response?.statusText,
rateLimitRemaining: error.response?.headers["x-ratelimit-remaining"],
rateLimitLimit: error.response?.headers["x-ratelimit-limit"],
});
}
},
);
// Get current version information
router.get("/version", authenticateToken, async (_req, res) => {
try {
const versionInfo = await agentVersionService.getVersionInfo();
console.log(
"📊 Version info response:",
JSON.stringify(versionInfo, null, 2),
);
res.json(versionInfo);
} catch (error) {
console.error("❌ Failed to get version info:", error.message);
res.status(500).json({
error: "Failed to get version information",
details: error.message,
status: "error",
});
}
});
// Refresh current version by executing agent binary
router.post(
"/version/refresh",
authenticateToken,
requirePermission("can_manage_settings"),
async (_req, res) => {
try {
console.log("🔄 Refreshing current agent version...");
const currentVersion = await agentVersionService.refreshCurrentVersion();
console.log("📊 Refreshed current version:", currentVersion);
res.json({
success: true,
currentVersion: currentVersion,
message: currentVersion
? `Current version refreshed: ${currentVersion}`
: "No agent binary found",
});
} catch (error) {
console.error("❌ Failed to refresh current version:", error.message);
res.status(500).json({
success: false,
error: "Failed to refresh current version",
details: error.message,
});
}
},
);
// Download latest update
router.post(
"/version/download",
authenticateToken,
requirePermission("can_manage_settings"),
async (_req, res) => {
try {
console.log("🔄 Downloading latest agent update...");
const downloadResult = await agentVersionService.downloadLatestUpdate();
console.log(
"📊 Download result:",
JSON.stringify(downloadResult, null, 2),
);
res.json(downloadResult);
} catch (error) {
console.error("❌ Failed to download latest update:", error.message);
res.status(500).json({
success: false,
error: "Failed to download latest update",
details: error.message,
});
}
},
);
// Check for updates
router.post(
"/version/check",
authenticateToken,
requirePermission("can_manage_settings"),
async (_req, res) => {
try {
console.log("🔄 Manual update check triggered");
const updateInfo = await agentVersionService.checkForUpdates();
console.log(
"📊 Update check result:",
JSON.stringify(updateInfo, null, 2),
);
res.json(updateInfo);
} catch (error) {
console.error("❌ Failed to check for updates:", error.message);
res.status(500).json({ error: "Failed to check for updates" });
}
},
);
// Get available versions
router.get("/versions", authenticateToken, async (_req, res) => {
try {
const versions = await agentVersionService.getAvailableVersions();
console.log(
"📦 Available versions response:",
JSON.stringify(versions, null, 2),
);
res.json({ versions });
} catch (error) {
console.error("❌ Failed to get available versions:", error.message);
res.status(500).json({ error: "Failed to get available versions" });
}
});
// Get binary information
router.get(
"/binary/:version/:architecture",
authenticateToken,
async (req, res) => {
try {
const { version, architecture } = req.params;
const binaryInfo = await agentVersionService.getBinaryInfo(
version,
architecture,
);
res.json(binaryInfo);
} catch (error) {
console.error("❌ Failed to get binary info:", error.message);
res.status(404).json({ error: error.message });
}
},
);
// Download agent binary
router.get(
"/download/:version/:architecture",
authenticateToken,
async (req, res) => {
try {
const { version, architecture } = req.params;
// Validate architecture
if (!agentVersionService.supportedArchitectures.includes(architecture)) {
return res.status(400).json({ error: "Unsupported architecture" });
}
await agentVersionService.serveBinary(version, architecture, res);
} catch (error) {
console.error("❌ Failed to serve binary:", error.message);
res.status(500).json({ error: "Failed to serve binary" });
}
},
);
// Get latest binary for architecture (for agents to query)
router.get("/latest/:architecture", async (req, res) => {
try {
const { architecture } = req.params;
// Validate architecture
if (!agentVersionService.supportedArchitectures.includes(architecture)) {
return res.status(400).json({ error: "Unsupported architecture" });
}
const versionInfo = await agentVersionService.getVersionInfo();
if (!versionInfo.latestVersion) {
return res.status(404).json({ error: "No latest version available" });
}
const binaryInfo = await agentVersionService.getBinaryInfo(
versionInfo.latestVersion,
architecture,
);
res.json({
version: binaryInfo.version,
architecture: binaryInfo.architecture,
size: binaryInfo.size,
hash: binaryInfo.hash,
downloadUrl: `/api/v1/agent/download/${binaryInfo.version}/${binaryInfo.architecture}`,
});
} catch (error) {
console.error("❌ Failed to get latest binary info:", error.message);
res.status(500).json({ error: "Failed to get latest binary information" });
}
});
// Push update notification to specific agent
router.post(
"/notify-update/:apiId",
authenticateToken,
requirePermission("admin"),
async (req, res) => {
try {
const { apiId } = req.params;
const { version, force = false } = req.body;
const versionInfo = await agentVersionService.getVersionInfo();
const targetVersion = version || versionInfo.latestVersion;
if (!targetVersion) {
return res
.status(400)
.json({ error: "No version specified or available" });
}
// Import WebSocket service
const { pushUpdateNotification } = require("../services/agentWs");
// Push update notification via WebSocket
pushUpdateNotification(apiId, {
version: targetVersion,
force,
downloadUrl: `/api/v1/agent/latest/${req.body.architecture || "linux-amd64"}`,
message: `Update available: ${targetVersion}`,
});
res.json({
success: true,
message: `Update notification sent to agent ${apiId}`,
version: targetVersion,
});
} catch (error) {
console.error("❌ Failed to notify agent update:", error.message);
res.status(500).json({ error: "Failed to notify agent update" });
}
},
);
// Push update notification to all agents
router.post(
"/notify-update-all",
authenticateToken,
requirePermission("admin"),
async (req, res) => {
try {
const { version, force = false } = req.body;
const versionInfo = await agentVersionService.getVersionInfo();
const targetVersion = version || versionInfo.latestVersion;
if (!targetVersion) {
return res
.status(400)
.json({ error: "No version specified or available" });
}
// Import WebSocket service
const { pushUpdateNotificationToAll } = require("../services/agentWs");
// Push update notification to all connected agents
const result = await pushUpdateNotificationToAll({
version: targetVersion,
force,
message: `Update available: ${targetVersion}`,
});
res.json({
success: true,
message: `Update notification sent to ${result.notifiedCount} agents`,
version: targetVersion,
notifiedCount: result.notifiedCount,
failedCount: result.failedCount,
});
} catch (error) {
console.error("❌ Failed to notify all agents update:", error.message);
res.status(500).json({ error: "Failed to notify all agents update" });
}
},
);
// Check if specific agent needs update and push notification
router.post(
"/check-update/:apiId",
authenticateToken,
requirePermission("can_manage_settings"),
async (req, res) => {
try {
const { apiId } = req.params;
const { version, force = false } = req.body;
if (!version) {
return res.status(400).json({
success: false,
error: "Agent version is required",
});
}
console.log(
`🔍 Checking update for agent ${apiId} (version: ${version})`,
);
const result = await agentVersionService.checkAndPushAgentUpdate(
apiId,
version,
force,
);
console.log(
"📊 Agent update check result:",
JSON.stringify(result, null, 2),
);
res.json({
success: true,
...result,
});
} catch (error) {
console.error("❌ Failed to check agent update:", error.message);
res.status(500).json({
success: false,
error: "Failed to check agent update",
details: error.message,
});
}
},
);
// Push updates to all connected agents
router.post(
"/push-updates-all",
authenticateToken,
requirePermission("can_manage_settings"),
async (req, res) => {
try {
const { force = false } = req.body;
console.log(`🔄 Pushing updates to all agents (force: ${force})`);
const result = await agentVersionService.checkAndPushUpdatesToAll(force);
console.log("📊 Bulk update result:", JSON.stringify(result, null, 2));
res.json(result);
} catch (error) {
console.error("❌ Failed to push updates to all agents:", error.message);
res.status(500).json({
success: false,
error: "Failed to push updates to all agents",
details: error.message,
});
}
},
);
// Agent reports its version (for automatic update checking)
router.post("/report-version", authenticateToken, async (req, res) => {
try {
const { apiId, version } = req.body;
if (!apiId || !version) {
return res.status(400).json({
success: false,
error: "API ID and version are required",
});
}
console.log(`📊 Agent ${apiId} reported version: ${version}`);
// Check if agent needs update and push notification if needed
const updateResult = await agentVersionService.checkAndPushAgentUpdate(
apiId,
version,
);
res.json({
success: true,
message: "Version reported successfully",
updateCheck: updateResult,
});
} catch (error) {
console.error("❌ Failed to process agent version report:", error.message);
res.status(500).json({
success: false,
error: "Failed to process version report",
details: error.message,
});
}
});
module.exports = router;
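The unauthenticated /latest/:architecture endpoint is what an agent can poll for updates. A sketch of such a probe (the base URL is illustrative, and the real agent is a Go binary; this only shows the request/response shape implied by the route above):

const axios = require("axios");

async function probeLatest(arch = "linux-amd64") {
  const { data } = await axios.get(
    `https://patchmon.example.com/api/v1/agent/latest/${arch}`,
    { timeout: 10000 },
  );
  // data: { version, architecture, size, hash, downloadUrl }
  console.log(`Latest ${data.version}, download via ${data.downloadUrl}`);
}

probeLatest();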

File diff suppressed because it is too large

View File

@@ -0,0 +1,772 @@
const express = require("express");
const { getPrismaClient } = require("../config/prisma");
const crypto = require("node:crypto");
const bcrypt = require("bcryptjs");
const { body, validationResult } = require("express-validator");
const { authenticateToken } = require("../middleware/auth");
const { requireManageSettings } = require("../middleware/permissions");
const { v4: uuidv4 } = require("uuid");
const router = express.Router();
const prisma = getPrismaClient();
// Generate auto-enrollment token credentials
const generate_auto_enrollment_token = () => {
const token_key = `patchmon_ae_${crypto.randomBytes(16).toString("hex")}`;
const token_secret = crypto.randomBytes(48).toString("hex");
return { token_key, token_secret };
};
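// For illustration: token_key is "patchmon_ae_" followed by 32 hex characters
// (16 random bytes), and token_secret is 96 hex characters (48 random bytes).
// Only the bcrypt hash of the secret is stored; the plaintext is returned once.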
// Middleware to validate auto-enrollment token
const validate_auto_enrollment_token = async (req, res, next) => {
try {
const token_key = req.headers["x-auto-enrollment-key"];
const token_secret = req.headers["x-auto-enrollment-secret"];
if (!token_key || !token_secret) {
return res
.status(401)
.json({ error: "Auto-enrollment credentials required" });
}
// Find token
const token = await prisma.auto_enrollment_tokens.findUnique({
where: { token_key: token_key },
});
if (!token || !token.is_active) {
return res.status(401).json({ error: "Invalid or inactive token" });
}
// Verify secret (hashed)
const is_valid = await bcrypt.compare(token_secret, token.token_secret);
if (!is_valid) {
return res.status(401).json({ error: "Invalid token secret" });
}
// Check expiration
if (token.expires_at && new Date() > new Date(token.expires_at)) {
return res.status(401).json({ error: "Token expired" });
}
// Check IP whitelist if configured
if (token.allowed_ip_ranges && token.allowed_ip_ranges.length > 0) {
const client_ip = req.ip || req.connection.remoteAddress;
// Basic IP check - can be enhanced with CIDR matching
const ip_allowed = token.allowed_ip_ranges.some((allowed_ip) => {
return client_ip.includes(allowed_ip);
});
if (!ip_allowed) {
console.warn(
`Auto-enrollment attempt from unauthorized IP: ${client_ip}`,
);
return res
.status(403)
.json({ error: "IP address not authorized for this token" });
}
}
// Check rate limit (hosts per day)
const today = new Date().toISOString().split("T")[0];
const token_reset_date = token.last_reset_date.toISOString().split("T")[0];
if (token_reset_date !== today) {
// Reset daily counter
await prisma.auto_enrollment_tokens.update({
where: { id: token.id },
data: {
hosts_created_today: 0,
last_reset_date: new Date(),
updated_at: new Date(),
},
});
token.hosts_created_today = 0;
}
if (token.hosts_created_today >= token.max_hosts_per_day) {
return res.status(429).json({
error: "Rate limit exceeded",
message: `Maximum ${token.max_hosts_per_day} hosts per day allowed for this token`,
});
}
req.auto_enrollment_token = token;
next();
} catch (error) {
console.error("Auto-enrollment token validation error:", error);
res.status(500).json({ error: "Token validation failed" });
}
};
// ========== ADMIN ENDPOINTS (Manage Tokens) ==========
// Create auto-enrollment token
router.post(
"/tokens",
authenticateToken,
requireManageSettings,
[
body("token_name")
.isLength({ min: 1, max: 255 })
.withMessage("Token name is required (max 255 characters)"),
body("allowed_ip_ranges")
.optional()
.isArray()
.withMessage("Allowed IP ranges must be an array"),
body("max_hosts_per_day")
.optional()
.isInt({ min: 1, max: 1000 })
.withMessage("Max hosts per day must be between 1 and 1000"),
body("default_host_group_id")
.optional({ nullable: true, checkFalsy: true })
.isString(),
body("expires_at")
.optional({ nullable: true, checkFalsy: true })
.isISO8601()
.withMessage("Invalid date format"),
],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const {
token_name,
allowed_ip_ranges = [],
max_hosts_per_day = 100,
default_host_group_id,
expires_at,
metadata = {},
} = req.body;
// Validate host group if provided
if (default_host_group_id) {
const host_group = await prisma.host_groups.findUnique({
where: { id: default_host_group_id },
});
if (!host_group) {
return res.status(400).json({ error: "Host group not found" });
}
}
const { token_key, token_secret } = generate_auto_enrollment_token();
const hashed_secret = await bcrypt.hash(token_secret, 10);
const token = await prisma.auto_enrollment_tokens.create({
data: {
id: uuidv4(),
token_name,
token_key: token_key,
token_secret: hashed_secret,
created_by_user_id: req.user.id,
allowed_ip_ranges,
max_hosts_per_day,
default_host_group_id: default_host_group_id || null,
expires_at: expires_at ? new Date(expires_at) : null,
metadata: { integration_type: "proxmox-lxc", ...metadata },
updated_at: new Date(),
},
include: {
host_groups: {
select: {
id: true,
name: true,
color: true,
},
},
users: {
select: {
id: true,
username: true,
first_name: true,
last_name: true,
},
},
},
});
// Return unhashed secret ONLY once (like API keys)
res.status(201).json({
message: "Auto-enrollment token created successfully",
token: {
id: token.id,
token_name: token.token_name,
token_key: token_key,
token_secret: token_secret, // ONLY returned here!
max_hosts_per_day: token.max_hosts_per_day,
default_host_group: token.host_groups,
created_by: token.users,
expires_at: token.expires_at,
},
warning: "⚠️ Save the token_secret now - it cannot be retrieved later!",
});
} catch (error) {
console.error("Create auto-enrollment token error:", error);
res.status(500).json({ error: "Failed to create token" });
}
},
);
// List auto-enrollment tokens
router.get(
"/tokens",
authenticateToken,
requireManageSettings,
async (_req, res) => {
try {
const tokens = await prisma.auto_enrollment_tokens.findMany({
select: {
id: true,
token_name: true,
token_key: true,
is_active: true,
allowed_ip_ranges: true,
max_hosts_per_day: true,
hosts_created_today: true,
last_used_at: true,
expires_at: true,
created_at: true,
default_host_group_id: true,
metadata: true,
host_groups: {
select: {
id: true,
name: true,
color: true,
},
},
users: {
select: {
id: true,
username: true,
first_name: true,
last_name: true,
},
},
},
orderBy: { created_at: "desc" },
});
res.json(tokens);
} catch (error) {
console.error("List auto-enrollment tokens error:", error);
res.status(500).json({ error: "Failed to list tokens" });
}
},
);
// Get single token details
router.get(
"/tokens/:tokenId",
authenticateToken,
requireManageSettings,
async (req, res) => {
try {
const { tokenId } = req.params;
const token = await prisma.auto_enrollment_tokens.findUnique({
where: { id: tokenId },
include: {
host_groups: {
select: {
id: true,
name: true,
color: true,
},
},
users: {
select: {
id: true,
username: true,
first_name: true,
last_name: true,
},
},
},
});
if (!token) {
return res.status(404).json({ error: "Token not found" });
}
// Don't include the secret in response
const { token_secret: _secret, ...token_data } = token;
res.json(token_data);
} catch (error) {
console.error("Get token error:", error);
res.status(500).json({ error: "Failed to get token" });
}
},
);
// Update token (toggle active state, update limits, etc.)
router.patch(
"/tokens/:tokenId",
authenticateToken,
requireManageSettings,
[
body("is_active").optional().isBoolean(),
body("max_hosts_per_day").optional().isInt({ min: 1, max: 1000 }),
body("allowed_ip_ranges").optional().isArray(),
body("expires_at").optional().isISO8601(),
],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const { tokenId } = req.params;
const update_data = { updated_at: new Date() };
if (req.body.is_active !== undefined)
update_data.is_active = req.body.is_active;
if (req.body.max_hosts_per_day !== undefined)
update_data.max_hosts_per_day = req.body.max_hosts_per_day;
if (req.body.allowed_ip_ranges !== undefined)
update_data.allowed_ip_ranges = req.body.allowed_ip_ranges;
if (req.body.expires_at !== undefined)
update_data.expires_at = new Date(req.body.expires_at);
const token = await prisma.auto_enrollment_tokens.update({
where: { id: tokenId },
data: update_data,
include: {
host_groups: true,
users: {
select: {
username: true,
first_name: true,
last_name: true,
},
},
},
});
const { token_secret: _secret, ...token_data } = token;
res.json({
message: "Token updated successfully",
token: token_data,
});
} catch (error) {
console.error("Update token error:", error);
res.status(500).json({ error: "Failed to update token" });
}
},
);
// Delete token
router.delete(
"/tokens/:tokenId",
authenticateToken,
requireManageSettings,
async (req, res) => {
try {
const { tokenId } = req.params;
const token = await prisma.auto_enrollment_tokens.findUnique({
where: { id: tokenId },
});
if (!token) {
return res.status(404).json({ error: "Token not found" });
}
await prisma.auto_enrollment_tokens.delete({
where: { id: tokenId },
});
res.json({
message: "Auto-enrollment token deleted successfully",
deleted_token: {
id: token.id,
token_name: token.token_name,
},
});
} catch (error) {
console.error("Delete token error:", error);
res.status(500).json({ error: "Failed to delete token" });
}
},
);
// ========== AUTO-ENROLLMENT ENDPOINTS (Used by Scripts) ==========
// Future integrations can follow this pattern:
// - /proxmox-lxc - Proxmox LXC containers
// - /vmware-esxi - VMware ESXi VMs
// - /docker - Docker containers
// - /kubernetes - Kubernetes pods
// - /aws-ec2 - AWS EC2 instances
// Serve the Proxmox LXC enrollment script with credentials injected
router.get("/proxmox-lxc", async (req, res) => {
try {
// Get token from query params
const token_key = req.query.token_key;
const token_secret = req.query.token_secret;
if (!token_key || !token_secret) {
return res
.status(401)
.json({ error: "Token key and secret required as query parameters" });
}
// Validate token
const token = await prisma.auto_enrollment_tokens.findUnique({
where: { token_key: token_key },
});
if (!token || !token.is_active) {
return res.status(401).json({ error: "Invalid or inactive token" });
}
// Verify secret
const is_valid = await bcrypt.compare(token_secret, token.token_secret);
if (!is_valid) {
return res.status(401).json({ error: "Invalid token secret" });
}
// Check expiration
if (token.expires_at && new Date() > new Date(token.expires_at)) {
return res.status(401).json({ error: "Token expired" });
}
const fs = require("node:fs");
const path = require("node:path");
const script_path = path.join(
__dirname,
"../../../agents/proxmox_auto_enroll.sh",
);
if (!fs.existsSync(script_path)) {
return res
.status(404)
.json({ error: "Proxmox enrollment script not found" });
}
let script = fs.readFileSync(script_path, "utf8");
// Convert Windows line endings to Unix line endings
script = script.replace(/\r\n/g, "\n").replace(/\r/g, "\n");
// Get the configured server URL from settings
let server_url = "http://localhost:3001";
try {
const settings = await prisma.settings.findFirst();
if (settings?.server_url) {
server_url = settings.server_url;
}
} catch (settings_error) {
console.warn(
"Could not fetch settings, using default server URL:",
settings_error.message,
);
}
// Determine curl flags dynamically from settings
let curl_flags = "-s";
try {
const settings = await prisma.settings.findFirst();
if (settings && settings.ignore_ssl_self_signed === true) {
curl_flags = "-sk";
}
} catch (_) {}
// Check for the force flag passed as a query parameter (?force=true or ?force=1)
const force_install = req.query.force === "true" || req.query.force === "1";
// Inject the token credentials, server URL, curl flags, and force flag into the script
const env_vars = `#!/bin/bash
# PatchMon Auto-Enrollment Configuration (Auto-generated)
export PATCHMON_URL="${server_url}"
export AUTO_ENROLLMENT_KEY="${token.token_key}"
export AUTO_ENROLLMENT_SECRET="${token_secret}"
export CURL_FLAGS="${curl_flags}"
export FORCE_INSTALL="${force_install ? "true" : "false"}"
`;
// Neutralize the original shebang (the generated header above provides its own)
script = script.replace(/^#!/, "#");
// Remove the configuration section (between # ===== CONFIGURATION ===== and the next # =====)
script = script.replace(
/# ===== CONFIGURATION =====[\s\S]*?(?=# ===== COLOR OUTPUT =====)/,
"",
);
script = env_vars + script;
res.setHeader("Content-Type", "text/plain");
res.setHeader(
"Content-Disposition",
'inline; filename="proxmox_auto_enroll.sh"',
);
res.send(script);
} catch (error) {
console.error("Proxmox script serve error:", error);
res.status(500).json({ error: "Failed to serve enrollment script" });
}
});
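// Typical consumption of the route above (the mount path is an assumption; adjust
// to however this router is registered in the app):
//
//   curl -s "https://patchmon.example.com/api/v1/auto-enrollment/proxmox-lxc?token_key=KEY&token_secret=SECRET" | bash
//
// Because token_secret arrives as a query parameter, the generated script embeds it
// as AUTO_ENROLLMENT_SECRET for the subsequent enroll calls.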
// Create host via auto-enrollment
router.post(
"/enroll",
validate_auto_enrollment_token,
[
body("friendly_name")
.isLength({ min: 1, max: 255 })
.withMessage("Friendly name is required"),
body("machine_id")
.isLength({ min: 1, max: 255 })
.withMessage("Machine ID is required"),
body("metadata").optional().isObject(),
],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const { friendly_name, machine_id } = req.body;
// Generate host API credentials
const api_id = `patchmon_${crypto.randomBytes(8).toString("hex")}`;
const api_key = crypto.randomBytes(32).toString("hex");
// Check if host already exists by machine_id (not hostname)
const existing_host = await prisma.hosts.findUnique({
where: { machine_id },
});
if (existing_host) {
return res.status(409).json({
error: "Host already exists",
host_id: existing_host.id,
api_id: existing_host.api_id,
machine_id: existing_host.machine_id,
friendly_name: existing_host.friendly_name,
message:
"This machine is already enrolled in PatchMon (matched by machine ID)",
});
}
// Create host
const host = await prisma.hosts.create({
data: {
id: uuidv4(),
machine_id,
friendly_name,
os_type: "unknown",
os_version: "unknown",
api_id: api_id,
api_key: api_key,
status: "pending",
notes: `Auto-enrolled via ${req.auto_enrollment_token.token_name} on ${new Date().toISOString()}`,
updated_at: new Date(),
},
});
// Create host group membership if default host group is specified
let hostGroupMembership = null;
if (req.auto_enrollment_token.default_host_group_id) {
hostGroupMembership = await prisma.host_group_memberships.create({
data: {
id: uuidv4(),
host_id: host.id,
host_group_id: req.auto_enrollment_token.default_host_group_id,
created_at: new Date(),
},
});
}
// Update token usage stats
await prisma.auto_enrollment_tokens.update({
where: { id: req.auto_enrollment_token.id },
data: {
hosts_created_today: { increment: 1 },
last_used_at: new Date(),
updated_at: new Date(),
},
});
console.log(
`Auto-enrolled host: ${friendly_name} (${host.id}) via token: ${req.auto_enrollment_token.token_name}`,
);
// Get host group details for response if membership was created
let hostGroup = null;
if (hostGroupMembership) {
hostGroup = await prisma.host_groups.findUnique({
where: { id: req.auto_enrollment_token.default_host_group_id },
select: {
id: true,
name: true,
color: true,
},
});
}
res.status(201).json({
message: "Host enrolled successfully",
host: {
id: host.id,
friendly_name: host.friendly_name,
api_id: api_id,
api_key: api_key,
host_group: hostGroup,
status: host.status,
},
});
} catch (error) {
console.error("Auto-enrollment error:", error);
res.status(500).json({ error: "Failed to enroll host" });
}
},
);
// Bulk enroll multiple hosts at once
router.post(
"/enroll/bulk",
validate_auto_enrollment_token,
[
body("hosts")
.isArray({ min: 1, max: 50 })
.withMessage("Hosts array required (max 50)"),
body("hosts.*.friendly_name")
.isLength({ min: 1 })
.withMessage("Each host needs a friendly_name"),
],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const { hosts } = req.body;
// Check rate limit
const remaining_quota =
req.auto_enrollment_token.max_hosts_per_day -
req.auto_enrollment_token.hosts_created_today;
if (hosts.length > remaining_quota) {
return res.status(429).json({
error: "Rate limit exceeded",
message: `Only ${remaining_quota} hosts remaining in daily quota`,
});
}
const results = {
success: [],
failed: [],
skipped: [],
};
for (const host_data of hosts) {
try {
const { friendly_name, machine_id } = host_data;
if (!machine_id) {
results.failed.push({
friendly_name,
error: "Machine ID is required",
});
continue;
}
// Check if host already exists by machine_id
const existing_host = await prisma.hosts.findUnique({
where: { machine_id },
});
if (existing_host) {
results.skipped.push({
friendly_name,
machine_id,
reason: "Machine already enrolled",
api_id: existing_host.api_id,
});
continue;
}
// Generate credentials
const api_id = `patchmon_${crypto.randomBytes(8).toString("hex")}`;
const api_key = crypto.randomBytes(32).toString("hex");
// Create host
const host = await prisma.hosts.create({
data: {
id: uuidv4(),
machine_id,
friendly_name,
os_type: "unknown",
os_version: "unknown",
api_id: api_id,
api_key: api_key,
status: "pending",
notes: `Auto-enrolled via ${req.auto_enrollment_token.token_name} on ${new Date().toISOString()}`,
updated_at: new Date(),
},
});
// Create host group membership if default host group is specified
if (req.auto_enrollment_token.default_host_group_id) {
await prisma.host_group_memberships.create({
data: {
id: uuidv4(),
host_id: host.id,
host_group_id: req.auto_enrollment_token.default_host_group_id,
created_at: new Date(),
},
});
}
results.success.push({
id: host.id,
friendly_name: host.friendly_name,
api_id: api_id,
api_key: api_key,
});
} catch (error) {
results.failed.push({
friendly_name: host_data.friendly_name,
error: error.message,
});
}
}
// Update token usage stats
if (results.success.length > 0) {
await prisma.auto_enrollment_tokens.update({
where: { id: req.auto_enrollment_token.id },
data: {
hosts_created_today: { increment: results.success.length },
last_used_at: new Date(),
updated_at: new Date(),
},
});
}
res.status(201).json({
message: `Bulk enrollment completed: ${results.success.length} succeeded, ${results.failed.length} failed, ${results.skipped.length} skipped`,
results,
});
} catch (error) {
console.error("Bulk auto-enrollment error:", error);
res.status(500).json({ error: "Failed to bulk enroll hosts" });
}
},
);
module.exports = router;
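A minimal client-side sketch of the /enroll contract above, assuming the router is mounted at /api/v1/auto-enrollment. The credential header names are an assumption (validate_auto_enrollment_token is defined outside this excerpt), so confirm them against the middleware before use:

// enroll-example.js - hypothetical caller for POST /enroll (Node 18+, global fetch)
const BASE = "https://patchmon.example.com/api/v1/auto-enrollment"; // assumed mount path

async function enrollHost(tokenKey, tokenSecret, friendlyName, machineId) {
  const res = await fetch(`${BASE}/enroll`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Header names assumed; check validate_auto_enrollment_token for the real ones.
      "X-Auto-Enrollment-Key": tokenKey,
      "X-Auto-Enrollment-Secret": tokenSecret,
    },
    body: JSON.stringify({ friendly_name: friendlyName, machine_id: machineId }),
  });
  if (res.status === 409) return res.json(); // already enrolled (matched by machine_id)
  if (!res.ok) throw new Error(`Enroll failed with status ${res.status}`);
  return res.json(); // { message, host: { id, friendly_name, api_id, api_key, ... } }
}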


@@ -0,0 +1,416 @@
const express = require("express");
const { queueManager, QUEUE_NAMES } = require("../services/automation");
const { getConnectedApiIds } = require("../services/agentWs");
const { authenticateToken } = require("../middleware/auth");
const router = express.Router();
// Get all queue statistics
router.get("/stats", authenticateToken, async (_req, res) => {
try {
const stats = await queueManager.getAllQueueStats();
res.json({
success: true,
data: stats,
});
} catch (error) {
console.error("Error fetching queue stats:", error);
res.status(500).json({
success: false,
error: "Failed to fetch queue statistics",
});
}
});
// Get specific queue statistics
router.get("/stats/:queueName", authenticateToken, async (req, res) => {
try {
const { queueName } = req.params;
if (!Object.values(QUEUE_NAMES).includes(queueName)) {
return res.status(400).json({
success: false,
error: "Invalid queue name",
});
}
const stats = await queueManager.getQueueStats(queueName);
res.json({
success: true,
data: stats,
});
} catch (error) {
console.error("Error fetching queue stats:", error);
res.status(500).json({
success: false,
error: "Failed to fetch queue statistics",
});
}
});
// Get recent jobs for a queue
router.get("/jobs/:queueName", authenticateToken, async (req, res) => {
try {
const { queueName } = req.params;
const { limit = 10 } = req.query;
if (!Object.values(QUEUE_NAMES).includes(queueName)) {
return res.status(400).json({
success: false,
error: "Invalid queue name",
});
}
const jobs = await queueManager.getRecentJobs(
queueName,
parseInt(limit, 10),
);
// Format jobs for frontend
const formattedJobs = jobs.map((job) => ({
id: job.id,
name: job.name,
status: job.finishedOn
? job.failedReason
? "failed"
: "completed"
: "active",
progress: job.progress,
data: job.data,
returnvalue: job.returnvalue,
failedReason: job.failedReason,
processedOn: job.processedOn,
finishedOn: job.finishedOn,
createdAt: new Date(job.timestamp),
attemptsMade: job.attemptsMade,
delay: job.delay,
}));
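// Note: in this mapping, jobs that have not finished are all reported as "active",
// including ones still waiting or delayed in the queue.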
res.json({
success: true,
data: formattedJobs,
});
} catch (error) {
console.error("Error fetching recent jobs:", error);
res.status(500).json({
success: false,
error: "Failed to fetch recent jobs",
});
}
});
// Trigger manual GitHub update check
router.post("/trigger/github-update", authenticateToken, async (_req, res) => {
try {
const job = await queueManager.triggerGitHubUpdateCheck();
res.json({
success: true,
data: {
jobId: job.id,
message: "GitHub update check triggered successfully",
},
});
} catch (error) {
console.error("Error triggering GitHub update check:", error);
res.status(500).json({
success: false,
error: "Failed to trigger GitHub update check",
});
}
});
// Trigger manual session cleanup
router.post(
"/trigger/session-cleanup",
authenticateToken,
async (_req, res) => {
try {
const job = await queueManager.triggerSessionCleanup();
res.json({
success: true,
data: {
jobId: job.id,
message: "Session cleanup triggered successfully",
},
});
} catch (error) {
console.error("Error triggering session cleanup:", error);
res.status(500).json({
success: false,
error: "Failed to trigger session cleanup",
});
}
},
);
// Trigger Agent Collection: enqueue report_now for connected agents only
router.post(
"/trigger/agent-collection",
authenticateToken,
async (_req, res) => {
try {
const queue = queueManager.queues[QUEUE_NAMES.AGENT_COMMANDS];
const apiIds = getConnectedApiIds();
if (!apiIds || apiIds.length === 0) {
return res.json({ success: true, data: { enqueued: 0 } });
}
const jobs = apiIds.map((apiId) => ({
name: "report_now",
data: { api_id: apiId, type: "report_now" },
opts: { attempts: 3, backoff: { type: "fixed", delay: 2000 } },
}));
await queue.addBulk(jobs);
res.json({ success: true, data: { enqueued: jobs.length } });
} catch (error) {
console.error("Error triggering agent collection:", error);
res
.status(500)
.json({ success: false, error: "Failed to trigger agent collection" });
}
},
);
// Trigger manual orphaned repo cleanup
router.post(
"/trigger/orphaned-repo-cleanup",
authenticateToken,
async (_req, res) => {
try {
const job = await queueManager.triggerOrphanedRepoCleanup();
res.json({
success: true,
data: {
jobId: job.id,
message: "Orphaned repository cleanup triggered successfully",
},
});
} catch (error) {
console.error("Error triggering orphaned repository cleanup:", error);
res.status(500).json({
success: false,
error: "Failed to trigger orphaned repository cleanup",
});
}
},
);
// Trigger manual orphaned package cleanup
router.post(
"/trigger/orphaned-package-cleanup",
authenticateToken,
async (_req, res) => {
try {
const job = await queueManager.triggerOrphanedPackageCleanup();
res.json({
success: true,
data: {
jobId: job.id,
message: "Orphaned package cleanup triggered successfully",
},
});
} catch (error) {
console.error("Error triggering orphaned package cleanup:", error);
res.status(500).json({
success: false,
error: "Failed to trigger orphaned package cleanup",
});
}
},
);
// Get queue health status
router.get("/health", authenticateToken, async (_req, res) => {
try {
const stats = await queueManager.getAllQueueStats();
const totalJobs = Object.values(stats).reduce((sum, queueStats) => {
return sum + queueStats.waiting + queueStats.active + queueStats.failed;
}, 0);
const health = {
status: "healthy",
totalJobs,
queues: Object.keys(stats).length,
timestamp: new Date().toISOString(),
};
// Check for unhealthy conditions
if (totalJobs > 1000) {
health.status = "warning";
health.message = "High number of queued jobs";
}
const failedJobs = Object.values(stats).reduce((sum, queueStats) => {
return sum + queueStats.failed;
}, 0);
if (failedJobs > 10) {
health.status = "error";
health.message = "High number of failed jobs";
}
res.json({
success: true,
data: health,
});
} catch (error) {
console.error("Error checking queue health:", error);
res.status(500).json({
success: false,
error: "Failed to check queue health",
});
}
});
// Get automation overview (for dashboard cards)
router.get("/overview", authenticateToken, async (_req, res) => {
try {
const stats = await queueManager.getAllQueueStats();
const { getSettings } = require("../services/settingsService");
const settings = await getSettings();
// Get recent jobs for each queue to show last run times
const recentJobs = await Promise.all([
queueManager.getRecentJobs(QUEUE_NAMES.GITHUB_UPDATE_CHECK, 1),
queueManager.getRecentJobs(QUEUE_NAMES.SESSION_CLEANUP, 1),
queueManager.getRecentJobs(QUEUE_NAMES.ORPHANED_REPO_CLEANUP, 1),
queueManager.getRecentJobs(QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP, 1),
queueManager.getRecentJobs(QUEUE_NAMES.AGENT_COMMANDS, 1),
]);
// Calculate overview metrics
const overview = {
scheduledTasks:
stats[QUEUE_NAMES.GITHUB_UPDATE_CHECK].delayed +
stats[QUEUE_NAMES.SESSION_CLEANUP].delayed +
stats[QUEUE_NAMES.ORPHANED_REPO_CLEANUP].delayed +
stats[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP].delayed,
runningTasks:
stats[QUEUE_NAMES.GITHUB_UPDATE_CHECK].active +
stats[QUEUE_NAMES.SESSION_CLEANUP].active +
stats[QUEUE_NAMES.ORPHANED_REPO_CLEANUP].active +
stats[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP].active,
failedTasks:
stats[QUEUE_NAMES.GITHUB_UPDATE_CHECK].failed +
stats[QUEUE_NAMES.SESSION_CLEANUP].failed +
stats[QUEUE_NAMES.ORPHANED_REPO_CLEANUP].failed +
stats[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP].failed,
totalAutomations: Object.values(stats).reduce((sum, queueStats) => {
return (
sum +
queueStats.completed +
queueStats.failed +
queueStats.active +
queueStats.waiting +
queueStats.delayed
);
}, 0),
// Automation details with last run times
automations: [
{
name: "GitHub Update Check",
queue: QUEUE_NAMES.GITHUB_UPDATE_CHECK,
description: "Checks for new PatchMon releases",
schedule: "Daily at midnight",
lastRun: recentJobs[0][0]?.finishedOn
? new Date(recentJobs[0][0].finishedOn).toLocaleString()
: "Never",
lastRunTimestamp: recentJobs[0][0]?.finishedOn || 0,
status: recentJobs[0][0]?.failedReason
? "Failed"
: recentJobs[0][0]
? "Success"
: "Never run",
stats: stats[QUEUE_NAMES.GITHUB_UPDATE_CHECK],
},
{
name: "Session Cleanup",
queue: QUEUE_NAMES.SESSION_CLEANUP,
description: "Cleans up expired user sessions",
schedule: "Every hour",
lastRun: recentJobs[1][0]?.finishedOn
? new Date(recentJobs[1][0].finishedOn).toLocaleString()
: "Never",
lastRunTimestamp: recentJobs[1][0]?.finishedOn || 0,
status: recentJobs[1][0]?.failedReason
? "Failed"
: recentJobs[1][0]
? "Success"
: "Never run",
stats: stats[QUEUE_NAMES.SESSION_CLEANUP],
},
{
name: "Orphaned Repo Cleanup",
queue: QUEUE_NAMES.ORPHANED_REPO_CLEANUP,
description: "Removes repositories with no associated hosts",
schedule: "Daily at 2 AM",
lastRun: recentJobs[2][0]?.finishedOn
? new Date(recentJobs[2][0].finishedOn).toLocaleString()
: "Never",
lastRunTimestamp: recentJobs[2][0]?.finishedOn || 0,
status: recentJobs[2][0]?.failedReason
? "Failed"
: recentJobs[2][0]
? "Success"
: "Never run",
stats: stats[QUEUE_NAMES.ORPHANED_REPO_CLEANUP],
},
{
name: "Orphaned Package Cleanup",
queue: QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP,
description: "Removes packages with no associated hosts",
schedule: "Daily at 3 AM",
lastRun: recentJobs[3][0]?.finishedOn
? new Date(recentJobs[3][0].finishedOn).toLocaleString()
: "Never",
lastRunTimestamp: recentJobs[3][0]?.finishedOn || 0,
status: recentJobs[3][0]?.failedReason
? "Failed"
: recentJobs[3][0]
? "Success"
: "Never run",
stats: stats[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP],
},
{
name: "Collect Host Statistics",
queue: QUEUE_NAMES.AGENT_COMMANDS,
description: "Collects package statistics from connected agents only",
schedule: `Every ${settings.update_interval} minutes (Agent-driven)`,
lastRun: recentJobs[4][0]?.finishedOn
? new Date(recentJobs[4][0].finishedOn).toLocaleString()
: "Never",
lastRunTimestamp: recentJobs[4][0]?.finishedOn || 0,
status: recentJobs[4][0]?.failedReason
? "Failed"
: recentJobs[4][0]
? "Success"
: "Never run",
stats: stats[QUEUE_NAMES.AGENT_COMMANDS],
},
].sort((a, b) => {
// Sort by last run timestamp (most recent first)
// If both have never run (timestamp 0), maintain original order
if (a.lastRunTimestamp === 0 && b.lastRunTimestamp === 0) return 0;
if (a.lastRunTimestamp === 0) return 1; // Never run goes to bottom
if (b.lastRunTimestamp === 0) return -1; // Never run goes to bottom
return b.lastRunTimestamp - a.lastRunTimestamp; // Most recent first
}),
};
res.json({
success: true,
data: overview,
});
} catch (error) {
console.error("Error fetching automation overview:", error);
res.status(500).json({
success: false,
error: "Failed to fetch automation overview",
});
}
});
module.exports = router;
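A short sketch of how a dashboard poller might consume the /health route above (mount path assumed; JWT handling reduced to a bearer header):

// automation-poller.js - hypothetical consumer of the queue health route
const API = "https://patchmon.example.com/api/v1/automation"; // assumed mount path

async function fetchQueueHealth(jwt) {
  const res = await fetch(`${API}/health`, {
    headers: { Authorization: `Bearer ${jwt}` },
  });
  const { success, data } = await res.json();
  if (!success) throw new Error("Queue health check failed");
  // data.status: "healthy", "warning" (>1000 queued jobs) or "error" (>10 failed jobs)
  return data;
}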


@@ -1,91 +1,379 @@
const express = require('express');
const { body, validationResult } = require('express-validator');
const { PrismaClient } = require('@prisma/client');
const { authenticateToken } = require('../middleware/auth');
const express = require("express");
const { body, validationResult } = require("express-validator");
const { getPrismaClient } = require("../config/prisma");
const { authenticateToken } = require("../middleware/auth");
const { v4: uuidv4 } = require("uuid");
const router = express.Router();
const prisma = new PrismaClient();
const prisma = getPrismaClient();
// Helper function to get user permissions based on role
async function getUserPermissions(userRole) {
try {
const permissions = await prisma.role_permissions.findUnique({
where: { role: userRole },
});
// If no specific permissions found, return default admin permissions (for backward compatibility)
if (!permissions) {
console.warn(
`No permissions found for role: ${userRole}, defaulting to admin access`,
);
return {
can_view_dashboard: true,
can_view_hosts: true,
can_manage_hosts: true,
can_view_packages: true,
can_manage_packages: true,
can_view_users: true,
can_manage_users: true,
can_view_reports: true,
can_export_data: true,
can_manage_settings: true,
};
}
return permissions;
} catch (error) {
console.error("Error fetching user permissions:", error);
// Return admin permissions as fallback
return {
can_view_dashboard: true,
can_view_hosts: true,
can_manage_hosts: true,
can_view_packages: true,
can_manage_packages: true,
can_view_users: true,
can_manage_users: true,
can_view_reports: true,
can_export_data: true,
can_manage_settings: true,
};
}
}
// Helper function to create permission-based dashboard preferences for a new user
async function createDefaultDashboardPreferences(userId, userRole = "user") {
try {
// Get user's actual permissions
const permissions = await getUserPermissions(userRole);
// Define all possible dashboard cards with their required permissions
// Order aligned with preferred layout
const allCards = [
// Host-related cards
{ cardId: "totalHosts", requiredPermission: "can_view_hosts", order: 0 },
{
cardId: "hostsNeedingUpdates",
requiredPermission: "can_view_hosts",
order: 1,
},
// Package-related cards
{
cardId: "totalOutdatedPackages",
requiredPermission: "can_view_packages",
order: 2,
},
{
cardId: "securityUpdates",
requiredPermission: "can_view_packages",
order: 3,
},
// Host-related cards (continued)
{
cardId: "totalHostGroups",
requiredPermission: "can_view_hosts",
order: 4,
},
{
cardId: "upToDateHosts",
requiredPermission: "can_view_hosts",
order: 5,
},
// Repository-related cards
{ cardId: "totalRepos", requiredPermission: "can_view_hosts", order: 6 },
// User management cards (admin only)
{ cardId: "totalUsers", requiredPermission: "can_view_users", order: 7 },
// System/Report cards
{
cardId: "osDistribution",
requiredPermission: "can_view_reports",
order: 8,
},
{
cardId: "osDistributionBar",
requiredPermission: "can_view_reports",
order: 9,
},
{
cardId: "osDistributionDoughnut",
requiredPermission: "can_view_reports",
order: 10,
},
{
cardId: "recentCollection",
requiredPermission: "can_view_hosts",
order: 11,
},
{
cardId: "updateStatus",
requiredPermission: "can_view_reports",
order: 12,
},
{
cardId: "packagePriority",
requiredPermission: "can_view_packages",
order: 13,
},
{
cardId: "packageTrends",
requiredPermission: "can_view_packages",
order: 14,
},
{
cardId: "recentUsers",
requiredPermission: "can_view_users",
order: 15,
},
{
cardId: "quickStats",
requiredPermission: "can_view_dashboard",
order: 16,
},
];
// Filter cards based on user's permissions
const allowedCards = allCards.filter((card) => {
return permissions[card.requiredPermission] === true;
});
// Create preferences data
const preferencesData = allowedCards.map((card) => ({
id: uuidv4(),
user_id: userId,
card_id: card.cardId,
enabled: true,
order: card.order, // Preserve original order from allCards
created_at: new Date(),
updated_at: new Date(),
}));
await prisma.dashboard_preferences.createMany({
data: preferencesData,
});
console.log(
`Permission-based dashboard preferences created for user ${userId} with role ${userRole}: ${allowedCards.length} cards`,
);
} catch (error) {
console.error("Error creating default dashboard preferences:", error);
// Don't throw error - this shouldn't break user creation
}
}
// Get user's dashboard preferences
router.get('/', authenticateToken, async (req, res) => {
try {
const preferences = await prisma.dashboardPreferences.findMany({
where: { userId: req.user.id },
orderBy: { order: 'asc' }
});
res.json(preferences);
} catch (error) {
console.error('Dashboard preferences fetch error:', error);
res.status(500).json({ error: 'Failed to fetch dashboard preferences' });
}
router.get("/", authenticateToken, async (req, res) => {
try {
const preferences = await prisma.dashboard_preferences.findMany({
where: { user_id: req.user.id },
orderBy: { order: "asc" },
});
res.json(preferences);
} catch (error) {
console.error("Dashboard preferences fetch error:", error);
res.status(500).json({ error: "Failed to fetch dashboard preferences" });
}
});
// Update dashboard preferences (bulk update)
router.put('/', authenticateToken, [
body('preferences').isArray().withMessage('Preferences must be an array'),
body('preferences.*.cardId').isString().withMessage('Card ID is required'),
body('preferences.*.enabled').isBoolean().withMessage('Enabled must be boolean'),
body('preferences.*.order').isInt().withMessage('Order must be integer')
], async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
router.put(
"/",
authenticateToken,
[
body("preferences").isArray().withMessage("Preferences must be an array"),
body("preferences.*.cardId").isString().withMessage("Card ID is required"),
body("preferences.*.enabled")
.isBoolean()
.withMessage("Enabled must be boolean"),
body("preferences.*.order").isInt().withMessage("Order must be integer"),
],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const { preferences } = req.body;
const userId = req.user.id;
const { preferences } = req.body;
const userId = req.user.id;
// Delete existing preferences for this user
await prisma.dashboardPreferences.deleteMany({
where: { userId }
});
// Delete existing preferences for this user
await prisma.dashboard_preferences.deleteMany({
where: { user_id: userId },
});
// Create new preferences
const newPreferences = preferences.map(pref => ({
userId,
cardId: pref.cardId,
enabled: pref.enabled,
order: pref.order
}));
// Create new preferences
const newPreferences = preferences.map((pref) => ({
id: uuidv4(),
user_id: userId,
card_id: pref.cardId,
enabled: pref.enabled,
order: pref.order,
updated_at: new Date(),
}));
const createdPreferences = await prisma.dashboardPreferences.createMany({
data: newPreferences
});
await prisma.dashboard_preferences.createMany({
data: newPreferences,
});
res.json({
message: 'Dashboard preferences updated successfully',
preferences: newPreferences
});
} catch (error) {
console.error('Dashboard preferences update error:', error);
res.status(500).json({ error: 'Failed to update dashboard preferences' });
}
});
res.json({
message: "Dashboard preferences updated successfully",
preferences: newPreferences,
});
} catch (error) {
console.error("Dashboard preferences update error:", error);
res.status(500).json({ error: "Failed to update dashboard preferences" });
}
},
);
// Get default dashboard card configuration
router.get('/defaults', authenticateToken, async (req, res) => {
try {
const defaultCards = [
{ cardId: 'totalHosts', title: 'Total Hosts', icon: 'Server', enabled: true, order: 0 },
{ cardId: 'hostsNeedingUpdates', title: 'Needs Updating', icon: 'AlertTriangle', enabled: true, order: 1 },
{ cardId: 'totalOutdatedPackages', title: 'Outdated Packages', icon: 'Package', enabled: true, order: 2 },
{ cardId: 'securityUpdates', title: 'Security Updates', icon: 'Shield', enabled: true, order: 3 },
{ cardId: 'erroredHosts', title: 'Errored Hosts', icon: 'AlertTriangle', enabled: true, order: 4 },
{ cardId: 'offlineHosts', title: 'Offline/Stale Hosts', icon: 'WifiOff', enabled: false, order: 5 },
{ cardId: 'osDistribution', title: 'OS Distribution', icon: 'BarChart3', enabled: true, order: 6 },
{ cardId: 'osDistributionBar', title: 'OS Distribution (Bar)', icon: 'BarChart3', enabled: false, order: 7 },
{ cardId: 'updateStatus', title: 'Update Status', icon: 'BarChart3', enabled: true, order: 8 },
{ cardId: 'packagePriority', title: 'Package Priority', icon: 'BarChart3', enabled: true, order: 9 },
{ cardId: 'quickStats', title: 'Quick Stats', icon: 'TrendingUp', enabled: true, order: 10 }
];
router.get("/defaults", authenticateToken, async (_req, res) => {
try {
// This provides a comprehensive dashboard view for all new users
const defaultCards = [
{
cardId: "totalHosts",
title: "Total Hosts",
icon: "Server",
enabled: true,
order: 0,
},
{
cardId: "hostsNeedingUpdates",
title: "Needs Updating",
icon: "AlertTriangle",
enabled: true,
order: 1,
},
{
cardId: "totalOutdatedPackages",
title: "Outdated Packages",
icon: "Package",
enabled: true,
order: 2,
},
{
cardId: "securityUpdates",
title: "Security Updates",
icon: "Shield",
enabled: true,
order: 3,
},
{
cardId: "totalHostGroups",
title: "Host Groups",
icon: "Folder",
enabled: true,
order: 4,
},
{
cardId: "upToDateHosts",
title: "Up to date",
icon: "CheckCircle",
enabled: true,
order: 5,
},
{
cardId: "totalRepos",
title: "Repositories",
icon: "GitBranch",
enabled: true,
order: 6,
},
{
cardId: "totalUsers",
title: "Users",
icon: "Users",
enabled: true,
order: 7,
},
{
cardId: "osDistribution",
title: "OS Distribution",
icon: "BarChart3",
enabled: true,
order: 8,
},
{
cardId: "osDistributionBar",
title: "OS Distribution (Bar)",
icon: "BarChart3",
enabled: true,
order: 9,
},
{
cardId: "osDistributionDoughnut",
title: "OS Distribution (Doughnut)",
icon: "PieChart",
enabled: true,
order: 10,
},
{
cardId: "recentCollection",
title: "Recent Collection",
icon: "Server",
enabled: true,
order: 11,
},
{
cardId: "updateStatus",
title: "Update Status",
icon: "BarChart3",
enabled: true,
order: 12,
},
{
cardId: "packagePriority",
title: "Package Priority",
icon: "BarChart3",
enabled: true,
order: 13,
},
{
cardId: "packageTrends",
title: "Package Trends",
icon: "TrendingUp",
enabled: true,
order: 14,
},
{
cardId: "recentUsers",
title: "Recent Users Logged in",
icon: "Users",
enabled: true,
order: 15,
},
{
cardId: "quickStats",
title: "Quick Stats",
icon: "TrendingUp",
enabled: true,
order: 16,
},
];
res.json(defaultCards);
} catch (error) {
console.error('Default dashboard cards error:', error);
res.status(500).json({ error: 'Failed to fetch default dashboard cards' });
}
res.json(defaultCards);
} catch (error) {
console.error("Default dashboard cards error:", error);
res.status(500).json({ error: "Failed to fetch default dashboard cards" });
}
});
module.exports = router;
module.exports = { router, createDefaultDashboardPreferences };
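The export shape changes here from a bare router to an object, so any file that mounts these routes has to destructure. A sketch of the adjusted wiring (file and mount paths are assumptions):

// server.js (sketch) - consuming the new export shape
const express = require("express");
const app = express();
const {
  router: dashboardPreferencesRouter,
  createDefaultDashboardPreferences,
} = require("./routes/dashboardPreferencesRoutes"); // path assumed

app.use("/api/v1/dashboard-preferences", dashboardPreferencesRouter); // mount path assumed

// createDefaultDashboardPreferences(userId, userRole) can now be invoked from the
// user-creation flow to seed permission-based cards for new accounts.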

File diff suppressed because it is too large


@@ -0,0 +1,779 @@
const express = require("express");
const { authenticateToken } = require("../middleware/auth");
const { getPrismaClient } = require("../config/prisma");
const { v4: uuidv4 } = require("uuid");
const prisma = getPrismaClient();
const router = express.Router();
// Helper function to convert BigInt fields to strings for JSON serialization
const convertBigIntToString = (obj) => {
if (obj === null || obj === undefined) return obj;
if (typeof obj === "bigint") {
return obj.toString();
}
if (Array.isArray(obj)) {
return obj.map(convertBigIntToString);
}
if (typeof obj === "object") {
const converted = {};
for (const key in obj) {
converted[key] = convertBigIntToString(obj[key]);
}
return converted;
}
return obj;
};
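// Rationale: JSON.stringify() throws "TypeError: Do not know how to serialize a BigInt"
// for BigInt values (e.g. docker_images.size_bytes below), so every response payload
// containing Prisma BigInt columns is passed through this helper first.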
// GET /api/v1/docker/dashboard - Get Docker dashboard statistics
router.get("/dashboard", authenticateToken, async (_req, res) => {
try {
// Get total hosts with Docker containers
const hostsWithDocker = await prisma.docker_containers.groupBy({
by: ["host_id"],
_count: true,
});
// Get total containers
const totalContainers = await prisma.docker_containers.count();
// Get running containers
const runningContainers = await prisma.docker_containers.count({
where: { status: "running" },
});
// Get total images
const totalImages = await prisma.docker_images.count();
// Get available updates
const availableUpdates = await prisma.docker_image_updates.count();
// Get containers by status
const containersByStatus = await prisma.docker_containers.groupBy({
by: ["status"],
_count: true,
});
// Get images by source
const imagesBySource = await prisma.docker_images.groupBy({
by: ["source"],
_count: true,
});
res.json({
stats: {
totalHostsWithDocker: hostsWithDocker.length,
totalContainers,
runningContainers,
totalImages,
availableUpdates,
},
containersByStatus,
imagesBySource,
});
} catch (error) {
console.error("Error fetching Docker dashboard:", error);
res.status(500).json({ error: "Failed to fetch Docker dashboard" });
}
});
// GET /api/v1/docker/containers - Get all containers with filters
router.get("/containers", authenticateToken, async (req, res) => {
try {
const { status, hostId, imageId, search, page = 1, limit = 50 } = req.query;
const where = {};
if (status) where.status = status;
if (hostId) where.host_id = hostId;
if (imageId) where.image_id = imageId;
if (search) {
where.OR = [
{ name: { contains: search, mode: "insensitive" } },
{ image_name: { contains: search, mode: "insensitive" } },
];
}
const skip = (parseInt(page, 10) - 1) * parseInt(limit, 10);
const take = parseInt(limit, 10);
const [containers, total] = await Promise.all([
prisma.docker_containers.findMany({
where,
include: {
docker_images: true,
},
orderBy: { updated_at: "desc" },
skip,
take,
}),
prisma.docker_containers.count({ where }),
]);
// Get host information for each container
const hostIds = [...new Set(containers.map((c) => c.host_id))];
const hosts = await prisma.hosts.findMany({
where: { id: { in: hostIds } },
select: { id: true, friendly_name: true, hostname: true, ip: true },
});
const hostsMap = hosts.reduce((acc, host) => {
acc[host.id] = host;
return acc;
}, {});
const containersWithHosts = containers.map((container) => ({
...container,
host: hostsMap[container.host_id],
}));
res.json(
convertBigIntToString({
containers: containersWithHosts,
pagination: {
page: parseInt(page, 10),
limit: parseInt(limit, 10),
total,
totalPages: Math.ceil(total / parseInt(limit, 10)),
},
}),
);
} catch (error) {
console.error("Error fetching containers:", error);
res.status(500).json({ error: "Failed to fetch containers" });
}
});
// GET /api/v1/docker/containers/:id - Get container detail
router.get("/containers/:id", authenticateToken, async (req, res) => {
try {
const { id } = req.params;
const container = await prisma.docker_containers.findUnique({
where: { id },
include: {
docker_images: {
include: {
docker_image_updates: true,
},
},
},
});
if (!container) {
return res.status(404).json({ error: "Container not found" });
}
// Get host information
const host = await prisma.hosts.findUnique({
where: { id: container.host_id },
select: {
id: true,
friendly_name: true,
hostname: true,
ip: true,
os_type: true,
os_version: true,
},
});
// Get other containers using the same image
const similarContainers = await prisma.docker_containers.findMany({
where: {
image_id: container.image_id,
id: { not: id },
},
take: 10,
});
res.json(
convertBigIntToString({
container: {
...container,
host,
},
similarContainers,
}),
);
} catch (error) {
console.error("Error fetching container detail:", error);
res.status(500).json({ error: "Failed to fetch container detail" });
}
});
// GET /api/v1/docker/images - Get all images with filters
router.get("/images", authenticateToken, async (req, res) => {
try {
const { source, search, page = 1, limit = 50 } = req.query;
const where = {};
if (source) where.source = source;
if (search) {
where.OR = [
{ repository: { contains: search, mode: "insensitive" } },
{ tag: { contains: search, mode: "insensitive" } },
];
}
const skip = (parseInt(page, 10) - 1) * parseInt(limit, 10);
const take = parseInt(limit, 10);
const [images, total] = await Promise.all([
prisma.docker_images.findMany({
where,
include: {
_count: {
select: {
docker_containers: true,
docker_image_updates: true,
},
},
docker_image_updates: {
take: 1,
orderBy: { created_at: "desc" },
},
},
orderBy: { updated_at: "desc" },
skip,
take,
}),
prisma.docker_images.count({ where }),
]);
// Get unique hosts using each image
const imagesWithHosts = await Promise.all(
images.map(async (image) => {
const containers = await prisma.docker_containers.findMany({
where: { image_id: image.id },
select: { host_id: true },
distinct: ["host_id"],
});
return {
...image,
hostsCount: containers.length,
hasUpdates: image._count.docker_image_updates > 0,
};
}),
);
res.json(
convertBigIntToString({
images: imagesWithHosts,
pagination: {
page: parseInt(page, 10),
limit: parseInt(limit, 10),
total,
totalPages: Math.ceil(total / parseInt(limit, 10)),
},
}),
);
} catch (error) {
console.error("Error fetching images:", error);
res.status(500).json({ error: "Failed to fetch images" });
}
});
// GET /api/v1/docker/images/:id - Get image detail
router.get("/images/:id", authenticateToken, async (req, res) => {
try {
const { id } = req.params;
const image = await prisma.docker_images.findUnique({
where: { id },
include: {
docker_containers: {
take: 100,
},
docker_image_updates: {
orderBy: { created_at: "desc" },
},
},
});
if (!image) {
return res.status(404).json({ error: "Image not found" });
}
// Get unique hosts using this image
const hostIds = [...new Set(image.docker_containers.map((c) => c.host_id))];
const hosts = await prisma.hosts.findMany({
where: { id: { in: hostIds } },
select: { id: true, friendly_name: true, hostname: true, ip: true },
});
res.json(
convertBigIntToString({
image,
hosts,
totalContainers: image.docker_containers.length,
totalHosts: hosts.length,
}),
);
} catch (error) {
console.error("Error fetching image detail:", error);
res.status(500).json({ error: "Failed to fetch image detail" });
}
});
// GET /api/v1/docker/hosts - Get all hosts with Docker
router.get("/hosts", authenticateToken, async (req, res) => {
try {
const { page = 1, limit = 50 } = req.query;
// Get hosts that have Docker containers
const hostsWithContainers = await prisma.docker_containers.groupBy({
by: ["host_id"],
_count: true,
});
const hostIds = hostsWithContainers.map((h) => h.host_id);
const skip = (parseInt(page, 10) - 1) * parseInt(limit, 10);
const take = parseInt(limit, 10);
const hosts = await prisma.hosts.findMany({
where: { id: { in: hostIds } },
skip,
take,
orderBy: { friendly_name: "asc" },
});
// Get container counts and statuses for each host
const hostsWithStats = await Promise.all(
hosts.map(async (host) => {
const [totalContainers, runningContainers, totalImages] =
await Promise.all([
prisma.docker_containers.count({
where: { host_id: host.id },
}),
prisma.docker_containers.count({
where: { host_id: host.id, status: "running" },
}),
prisma.docker_containers.findMany({
where: { host_id: host.id },
select: { image_id: true },
distinct: ["image_id"],
}),
]);
return {
...host,
dockerStats: {
totalContainers,
runningContainers,
totalImages: totalImages.length,
},
};
}),
);
res.json(
convertBigIntToString({
hosts: hostsWithStats,
pagination: {
page: parseInt(page, 10),
limit: parseInt(limit, 10),
total: hostIds.length,
totalPages: Math.ceil(hostIds.length / parseInt(limit, 10)),
},
}),
);
} catch (error) {
console.error("Error fetching Docker hosts:", error);
res.status(500).json({ error: "Failed to fetch Docker hosts" });
}
});
// GET /api/v1/docker/hosts/:id - Get host Docker detail
router.get("/hosts/:id", authenticateToken, async (req, res) => {
try {
const { id } = req.params;
const host = await prisma.hosts.findUnique({
where: { id },
});
if (!host) {
return res.status(404).json({ error: "Host not found" });
}
// Get containers on this host
const containers = await prisma.docker_containers.findMany({
where: { host_id: id },
include: {
docker_images: {
include: {
docker_image_updates: true,
},
},
},
orderBy: { name: "asc" },
});
// Get unique images on this host
const imageIds = [...new Set(containers.map((c) => c.image_id))].filter(
Boolean,
);
const images = await prisma.docker_images.findMany({
where: { id: { in: imageIds } },
});
// Get container statistics
const runningContainers = containers.filter(
(c) => c.status === "running",
).length;
const stoppedContainers = containers.filter(
(c) => c.status === "exited" || c.status === "stopped",
).length;
res.json(
convertBigIntToString({
host,
containers,
images,
stats: {
totalContainers: containers.length,
runningContainers,
stoppedContainers,
totalImages: images.length,
},
}),
);
} catch (error) {
console.error("Error fetching host Docker detail:", error);
res.status(500).json({ error: "Failed to fetch host Docker detail" });
}
});
// GET /api/v1/docker/updates - Get available updates
router.get("/updates", authenticateToken, async (req, res) => {
try {
const { page = 1, limit = 50, securityOnly = false } = req.query;
const where = {};
if (securityOnly === "true") {
where.is_security_update = true;
}
const skip = (parseInt(page, 10) - 1) * parseInt(limit, 10);
const take = parseInt(limit, 10);
const [updates, total] = await Promise.all([
prisma.docker_image_updates.findMany({
where,
include: {
docker_images: {
include: {
docker_containers: {
select: {
id: true,
host_id: true,
name: true,
},
},
},
},
},
orderBy: [{ is_security_update: "desc" }, { created_at: "desc" }],
skip,
take,
}),
prisma.docker_image_updates.count({ where }),
]);
// Get affected hosts for each update
const updatesWithHosts = await Promise.all(
updates.map(async (update) => {
const hostIds = [
...new Set(
update.docker_images.docker_containers.map((c) => c.host_id),
),
];
const hosts = await prisma.hosts.findMany({
where: { id: { in: hostIds } },
select: { id: true, friendly_name: true, hostname: true },
});
return {
...update,
affectedHosts: hosts,
affectedContainersCount:
update.docker_images.docker_containers.length,
};
}),
);
res.json(
convertBigIntToString({
updates: updatesWithHosts,
pagination: {
page: parseInt(page, 10),
limit: parseInt(limit, 10),
total,
totalPages: Math.ceil(total / parseInt(limit, 10)),
},
}),
);
} catch (error) {
console.error("Error fetching Docker updates:", error);
res.status(500).json({ error: "Failed to fetch Docker updates" });
}
});
// POST /api/v1/docker/collect - Collect Docker data from agent
router.post("/collect", async (req, res) => {
try {
const { apiId, apiKey, containers, images, updates } = req.body;
// Validate API credentials
const host = await prisma.hosts.findFirst({
where: { api_id: apiId, api_key: apiKey },
});
if (!host) {
return res.status(401).json({ error: "Invalid API credentials" });
}
const now = new Date();
// Helper function to validate and parse dates
const parseDate = (dateString) => {
if (!dateString) return now;
const date = new Date(dateString);
return Number.isNaN(date.getTime()) ? now : date;
};
// Process containers
if (containers && Array.isArray(containers)) {
for (const containerData of containers) {
const containerId = uuidv4();
// Find or create image
let imageId = null;
if (containerData.image_repository && containerData.image_tag) {
const image = await prisma.docker_images.upsert({
where: {
repository_tag_image_id: {
repository: containerData.image_repository,
tag: containerData.image_tag,
image_id: containerData.image_id || "unknown",
},
},
update: {
last_checked: now,
updated_at: now,
},
create: {
id: uuidv4(),
repository: containerData.image_repository,
tag: containerData.image_tag,
image_id: containerData.image_id || "unknown",
source: containerData.image_source || "docker-hub",
created_at: parseDate(containerData.created_at),
updated_at: now,
},
});
imageId = image.id;
}
// Upsert container
await prisma.docker_containers.upsert({
where: {
host_id_container_id: {
host_id: host.id,
container_id: containerData.container_id,
},
},
update: {
name: containerData.name,
image_id: imageId,
image_name: containerData.image_name,
image_tag: containerData.image_tag || "latest",
status: containerData.status,
state: containerData.state,
ports: containerData.ports || null,
started_at: containerData.started_at
? parseDate(containerData.started_at)
: null,
updated_at: now,
last_checked: now,
},
create: {
id: containerId,
host_id: host.id,
container_id: containerData.container_id,
name: containerData.name,
image_id: imageId,
image_name: containerData.image_name,
image_tag: containerData.image_tag || "latest",
status: containerData.status,
state: containerData.state,
ports: containerData.ports || null,
created_at: parseDate(containerData.created_at),
started_at: containerData.started_at
? parseDate(containerData.started_at)
: null,
updated_at: now,
},
});
}
}
// Process standalone images
if (images && Array.isArray(images)) {
for (const imageData of images) {
await prisma.docker_images.upsert({
where: {
repository_tag_image_id: {
repository: imageData.repository,
tag: imageData.tag,
image_id: imageData.image_id,
},
},
update: {
size_bytes: imageData.size_bytes
? BigInt(imageData.size_bytes)
: null,
last_checked: now,
updated_at: now,
},
create: {
id: uuidv4(),
repository: imageData.repository,
tag: imageData.tag,
image_id: imageData.image_id,
digest: imageData.digest,
size_bytes: imageData.size_bytes
? BigInt(imageData.size_bytes)
: null,
source: imageData.source || "docker-hub",
created_at: parseDate(imageData.created_at),
updated_at: now,
},
});
}
}
// Process updates
// First, get all images for this host to clean up old updates
const hostImageIds = await prisma.docker_containers
.findMany({
where: { host_id: host.id },
select: { image_id: true },
distinct: ["image_id"],
})
.then((results) => results.map((r) => r.image_id).filter(Boolean));
// Delete old updates for images on this host that are no longer reported
if (hostImageIds.length > 0) {
const reportedImageIds = [];
// Process new updates
if (updates && Array.isArray(updates)) {
for (const updateData of updates) {
// Find the image by repository, tag, and image_id
const image = await prisma.docker_images.findFirst({
where: {
repository: updateData.repository,
tag: updateData.current_tag,
image_id: updateData.image_id,
},
});
if (image) {
reportedImageIds.push(image.id);
// Store digest info in changelog_url field as JSON for now
const digestInfo = JSON.stringify({
method: "digest_comparison",
current_digest: updateData.current_digest,
available_digest: updateData.available_digest,
});
// Upsert the update record
await prisma.docker_image_updates.upsert({
where: {
image_id_available_tag: {
image_id: image.id,
available_tag: updateData.available_tag,
},
},
update: {
updated_at: now,
changelog_url: digestInfo,
severity: "digest_changed",
},
create: {
id: uuidv4(),
image_id: image.id,
current_tag: updateData.current_tag,
available_tag: updateData.available_tag,
severity: "digest_changed",
changelog_url: digestInfo,
updated_at: now,
},
});
}
}
}
// Remove stale updates for images on this host that are no longer in the updates list
const imageIdsToCleanup = hostImageIds.filter(
(id) => !reportedImageIds.includes(id),
);
if (imageIdsToCleanup.length > 0) {
await prisma.docker_image_updates.deleteMany({
where: {
image_id: { in: imageIdsToCleanup },
},
});
}
}
res.json({ success: true, message: "Docker data collected successfully" });
} catch (error) {
console.error("Error collecting Docker data:", error);
console.error("Error stack:", error.stack);
console.error("Request body:", JSON.stringify(req.body, null, 2));
res.status(500).json({
error: "Failed to collect Docker data",
message: error.message,
details: process.env.NODE_ENV === "development" ? error.stack : undefined,
});
}
});
// GET /api/v1/docker/agent - Serve the Docker agent installation script
router.get("/agent", async (_req, res) => {
try {
const fs = require("node:fs");
const path = require("node:path");
const agentPath = path.join(
__dirname,
"../../..",
"agents",
"patchmon-docker-agent.sh",
);
// Check if file exists
if (!fs.existsSync(agentPath)) {
return res.status(404).json({ error: "Docker agent script not found" });
}
// Read and serve the file
const agentScript = fs.readFileSync(agentPath, "utf8");
res.setHeader("Content-Type", "text/x-shellscript");
res.setHeader(
"Content-Disposition",
'inline; filename="patchmon-docker-agent.sh"',
);
res.send(agentScript);
} catch (error) {
console.error("Error serving Docker agent:", error);
res.status(500).json({ error: "Failed to serve Docker agent script" });
}
});
module.exports = router;
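A condensed sketch of the payload the /collect route above accepts from the Docker agent, with field names taken from the upsert logic and values purely illustrative:

// docker-collect-example.js - hypothetical agent-side report
const payload = {
  apiId: "patchmon_<hex>", // host API credentials issued at enrollment
  apiKey: "<hex>",
  containers: [
    {
      container_id: "abc123def456",
      name: "web",
      image_name: "nginx:1.27",
      image_repository: "nginx",
      image_tag: "1.27",
      image_id: "sha256:<digest>",
      image_source: "docker-hub",
      status: "running",
      state: "running",
      created_at: "2025-10-20T12:00:00Z",
      started_at: "2025-10-20T12:00:05Z",
    },
  ],
  images: [], // standalone images, same repository/tag/image_id shape
  updates: [], // digest-comparison updates, see the upsert block above
};

(async () => {
  const res = await fetch("https://patchmon.example.com/api/v1/docker/collect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  }); // mount path assumed
  console.log(await res.json()); // { success: true, message: "..." } on success
})();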


@@ -0,0 +1,246 @@
const express = require("express");
const { getPrismaClient } = require("../config/prisma");
const bcrypt = require("bcryptjs");
const router = express.Router();
const prisma = getPrismaClient();
// Middleware to authenticate API key
const authenticateApiKey = async (req, res, next) => {
try {
const authHeader = req.headers.authorization;
if (!authHeader || !authHeader.startsWith("Basic ")) {
return res
.status(401)
.json({ error: "Missing or invalid authorization header" });
}
// Decode base64 credentials
const base64Credentials = authHeader.split(" ")[1];
const credentials = Buffer.from(base64Credentials, "base64").toString(
"ascii",
);
const [apiKey, apiSecret] = credentials.split(":");
if (!apiKey || !apiSecret) {
return res.status(401).json({ error: "Invalid credentials format" });
}
// Find the token in database
const token = await prisma.auto_enrollment_tokens.findUnique({
where: { token_key: apiKey },
include: {
users: {
select: {
id: true,
username: true,
role: true,
},
},
},
});
if (!token) {
console.log(`API key not found: ${apiKey}`);
return res.status(401).json({ error: "Invalid API key" });
}
// Check if token is active
if (!token.is_active) {
return res.status(401).json({ error: "API key is disabled" });
}
// Check if token has expired
if (token.expires_at && new Date(token.expires_at) < new Date()) {
return res.status(401).json({ error: "API key has expired" });
}
// Check if token is for gethomepage integration
if (token.metadata?.integration_type !== "gethomepage") {
return res.status(401).json({ error: "Invalid API key type" });
}
// Verify the secret
const isValidSecret = await bcrypt.compare(apiSecret, token.token_secret);
if (!isValidSecret) {
return res.status(401).json({ error: "Invalid API secret" });
}
// Check IP restrictions if any
if (token.allowed_ip_ranges && token.allowed_ip_ranges.length > 0) {
const clientIp = req.ip || req.connection.remoteAddress;
const forwardedFor = req.headers["x-forwarded-for"];
const realIp = req.headers["x-real-ip"];
// Get the actual client IP (considering proxies)
const actualClientIp = forwardedFor
? forwardedFor.split(",")[0].trim()
: realIp || clientIp;
const isAllowedIp = token.allowed_ip_ranges.some((range) => {
// Simple IP range check (can be enhanced for CIDR support)
return actualClientIp.startsWith(range) || actualClientIp === range;
});
if (!isAllowedIp) {
console.log(
`IP validation failed. Client IP: ${actualClientIp}, Allowed ranges: ${token.allowed_ip_ranges.join(", ")}`,
);
return res.status(403).json({ error: "IP address not allowed" });
}
}
// Update last used timestamp
await prisma.auto_enrollment_tokens.update({
where: { id: token.id },
data: { last_used_at: new Date() },
});
// Attach token info to request
req.apiToken = token;
next();
} catch (error) {
console.error("API key authentication error:", error);
res.status(500).json({ error: "Authentication failed" });
}
};
// Get homepage widget statistics
router.get("/stats", authenticateApiKey, async (_req, res) => {
try {
// Get total hosts count
const totalHosts = await prisma.hosts.count({
where: { status: "active" },
});
// Get total unique packages that need updates (consistent with dashboard)
const totalOutdatedPackages = await prisma.packages.count({
where: {
host_packages: {
some: {
needs_update: true,
},
},
},
});
// Get total repositories count
const totalRepos = await prisma.repositories.count({
where: { is_active: true },
});
// Get hosts that need updates (have outdated packages)
const hostsNeedingUpdates = await prisma.hosts.count({
where: {
status: "active",
host_packages: {
some: {
needs_update: true,
},
},
},
});
// Get security updates count (unique packages - consistent with dashboard)
const securityUpdates = await prisma.packages.count({
where: {
host_packages: {
some: {
needs_update: true,
is_security_update: true,
},
},
},
});
// Get hosts with security updates
const hostsWithSecurityUpdates = await prisma.hosts.count({
where: {
status: "active",
host_packages: {
some: {
needs_update: true,
is_security_update: true,
},
},
},
});
// Get up-to-date hosts count
const upToDateHosts = totalHosts - hostsNeedingUpdates;
// Get recent update activity (last 24 hours)
const oneDayAgo = new Date(Date.now() - 24 * 60 * 60 * 1000);
const recentUpdates = await prisma.update_history.count({
where: {
timestamp: {
gte: oneDayAgo,
},
status: "success",
},
});
// Get OS distribution
const osDistribution = await prisma.hosts.groupBy({
by: ["os_type"],
where: { status: "active" },
_count: {
id: true,
},
orderBy: {
_count: {
id: "desc",
},
},
});
// Format OS distribution data
const osDistributionFormatted = osDistribution.map((os) => ({
name: os.os_type,
count: os._count.id,
}));
// Extract top 3 OS types for flat display in widgets
const top_os_1 = osDistributionFormatted[0] || { name: "None", count: 0 };
const top_os_2 = osDistributionFormatted[1] || { name: "None", count: 0 };
const top_os_3 = osDistributionFormatted[2] || { name: "None", count: 0 };
// Prepare response data
const stats = {
total_hosts: totalHosts,
total_outdated_packages: totalOutdatedPackages,
total_repos: totalRepos,
hosts_needing_updates: hostsNeedingUpdates,
up_to_date_hosts: upToDateHosts,
security_updates: securityUpdates,
hosts_with_security_updates: hostsWithSecurityUpdates,
recent_updates_24h: recentUpdates,
os_distribution: osDistributionFormatted,
// Flattened OS data for easy widget display
top_os_1_name: top_os_1.name,
top_os_1_count: top_os_1.count,
top_os_2_name: top_os_2.name,
top_os_2_count: top_os_2.count,
top_os_3_name: top_os_3.name,
top_os_3_count: top_os_3.count,
last_updated: new Date().toISOString(),
};
res.json(stats);
} catch (error) {
console.error("Error fetching homepage stats:", error);
res.status(500).json({ error: "Failed to fetch statistics" });
}
});
// Health check endpoint for the API
router.get("/health", authenticateApiKey, async (req, res) => {
res.json({
status: "ok",
timestamp: new Date().toISOString(),
api_key: req.apiToken.token_name,
});
});
module.exports = router;
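A consumer-side sketch for the flattened stats payload above. The X-API-ID / X-API-KEY header names are an assumption mirroring the host ping headers later in this changeset; substitute the real mount path of this router for statsUrl:
async function fetchWidgetStats(statsUrl, apiId, apiKey) {
const res = await fetch(statsUrl, {
headers: { "X-API-ID": apiId, "X-API-KEY": apiKey },
});
if (!res.ok) throw new Error(`Stats request failed: ${res.status}`);
const stats = await res.json();
// The flattened top_os_* fields avoid array handling in simple widgets
return `${stats.up_to_date_hosts}/${stats.total_hosts} hosts up to date; top OS: ${stats.top_os_1_name} (${stats.top_os_1_count})`;
}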


@@ -1,226 +1,267 @@
const express = require('express');
const { body, validationResult } = require('express-validator');
const { PrismaClient } = require('@prisma/client');
const { authenticateToken } = require('../middleware/auth');
const { requireManageHosts } = require('../middleware/permissions');
const express = require("express");
const { body, validationResult } = require("express-validator");
const { getPrismaClient } = require("../config/prisma");
const { randomUUID } = require("node:crypto");
const { authenticateToken } = require("../middleware/auth");
const { requireManageHosts } = require("../middleware/permissions");
const router = express.Router();
const prisma = new PrismaClient();
const prisma = getPrismaClient();
// Get all host groups
router.get('/', authenticateToken, async (req, res) => {
try {
const hostGroups = await prisma.hostGroup.findMany({
include: {
_count: {
select: {
hosts: true
}
}
},
orderBy: {
name: 'asc'
}
});
router.get("/", authenticateToken, async (_req, res) => {
try {
const hostGroups = await prisma.host_groups.findMany({
include: {
_count: {
select: {
host_group_memberships: true,
},
},
},
orderBy: {
name: "asc",
},
});
res.json(hostGroups);
} catch (error) {
console.error('Error fetching host groups:', error);
res.status(500).json({ error: 'Failed to fetch host groups' });
}
res.json(hostGroups);
} catch (error) {
console.error("Error fetching host groups:", error);
res.status(500).json({ error: "Failed to fetch host groups" });
}
});
// Get a specific host group by ID
router.get('/:id', authenticateToken, async (req, res) => {
try {
const { id } = req.params;
router.get("/:id", authenticateToken, async (req, res) => {
try {
const { id } = req.params;
const hostGroup = await prisma.hostGroup.findUnique({
where: { id },
include: {
hosts: {
select: {
id: true,
friendlyName: true,
hostname: true,
ip: true,
osType: true,
osVersion: true,
status: true,
lastUpdate: true
}
}
}
});
const hostGroup = await prisma.host_groups.findUnique({
where: { id },
include: {
host_group_memberships: {
include: {
hosts: {
select: {
id: true,
friendly_name: true,
hostname: true,
ip: true,
os_type: true,
os_version: true,
status: true,
last_update: true,
},
},
},
},
},
});
if (!hostGroup) {
return res.status(404).json({ error: 'Host group not found' });
}
if (!hostGroup) {
return res.status(404).json({ error: "Host group not found" });
}
res.json(hostGroup);
} catch (error) {
console.error('Error fetching host group:', error);
res.status(500).json({ error: 'Failed to fetch host group' });
}
res.json(hostGroup);
} catch (error) {
console.error("Error fetching host group:", error);
res.status(500).json({ error: "Failed to fetch host group" });
}
});
// Create a new host group
router.post('/', authenticateToken, requireManageHosts, [
body('name').trim().isLength({ min: 1 }).withMessage('Name is required'),
body('description').optional().trim(),
body('color').optional().isHexColor().withMessage('Color must be a valid hex color')
], async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
router.post(
"/",
authenticateToken,
requireManageHosts,
[
body("name").trim().isLength({ min: 1 }).withMessage("Name is required"),
body("description").optional().trim(),
body("color")
.optional()
.isHexColor()
.withMessage("Color must be a valid hex color"),
],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const { name, description, color } = req.body;
const { name, description, color } = req.body;
// Check if host group with this name already exists
const existingGroup = await prisma.hostGroup.findUnique({
where: { name }
});
// Check if host group with this name already exists
const existingGroup = await prisma.host_groups.findUnique({
where: { name },
});
if (existingGroup) {
return res.status(400).json({ error: 'A host group with this name already exists' });
}
if (existingGroup) {
return res
.status(400)
.json({ error: "A host group with this name already exists" });
}
const hostGroup = await prisma.hostGroup.create({
data: {
name,
description: description || null,
color: color || '#3B82F6'
}
});
const hostGroup = await prisma.host_groups.create({
data: {
id: randomUUID(),
name,
description: description || null,
color: color || "#3B82F6",
updated_at: new Date(),
},
});
res.status(201).json(hostGroup);
} catch (error) {
console.error('Error creating host group:', error);
res.status(500).json({ error: 'Failed to create host group' });
}
});
res.status(201).json(hostGroup);
} catch (error) {
console.error("Error creating host group:", error);
res.status(500).json({ error: "Failed to create host group" });
}
},
);
// Update a host group
router.put('/:id', authenticateToken, requireManageHosts, [
body('name').trim().isLength({ min: 1 }).withMessage('Name is required'),
body('description').optional().trim(),
body('color').optional().isHexColor().withMessage('Color must be a valid hex color')
], async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
router.put(
"/:id",
authenticateToken,
requireManageHosts,
[
body("name").trim().isLength({ min: 1 }).withMessage("Name is required"),
body("description").optional().trim(),
body("color")
.optional()
.isHexColor()
.withMessage("Color must be a valid hex color"),
],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const { id } = req.params;
const { name, description, color } = req.body;
const { id } = req.params;
const { name, description, color } = req.body;
// Check if host group exists
const existingGroup = await prisma.hostGroup.findUnique({
where: { id }
});
// Check if host group exists
const existingGroup = await prisma.host_groups.findUnique({
where: { id },
});
if (!existingGroup) {
return res.status(404).json({ error: 'Host group not found' });
}
if (!existingGroup) {
return res.status(404).json({ error: "Host group not found" });
}
// Check if another host group with this name already exists
const duplicateGroup = await prisma.hostGroup.findFirst({
where: {
name,
id: { not: id }
}
});
// Check if another host group with this name already exists
const duplicateGroup = await prisma.host_groups.findFirst({
where: {
name,
id: { not: id },
},
});
if (duplicateGroup) {
return res.status(400).json({ error: 'A host group with this name already exists' });
}
if (duplicateGroup) {
return res
.status(400)
.json({ error: "A host group with this name already exists" });
}
const hostGroup = await prisma.hostGroup.update({
where: { id },
data: {
name,
description: description || null,
color: color || '#3B82F6'
}
});
const hostGroup = await prisma.host_groups.update({
where: { id },
data: {
name,
description: description || null,
color: color || "#3B82F6",
updated_at: new Date(),
},
});
res.json(hostGroup);
} catch (error) {
console.error('Error updating host group:', error);
res.status(500).json({ error: 'Failed to update host group' });
}
});
res.json(hostGroup);
} catch (error) {
console.error("Error updating host group:", error);
res.status(500).json({ error: "Failed to update host group" });
}
},
);
// Delete a host group
router.delete('/:id', authenticateToken, requireManageHosts, async (req, res) => {
try {
const { id } = req.params;
router.delete(
"/:id",
authenticateToken,
requireManageHosts,
async (req, res) => {
try {
const { id } = req.params;
// Check if host group exists
const existingGroup = await prisma.hostGroup.findUnique({
where: { id },
include: {
_count: {
select: {
hosts: true
}
}
}
});
// Check if host group exists
const existingGroup = await prisma.host_groups.findUnique({
where: { id },
include: {
_count: {
select: {
host_group_memberships: true,
},
},
},
});
if (!existingGroup) {
return res.status(404).json({ error: 'Host group not found' });
}
if (!existingGroup) {
return res.status(404).json({ error: "Host group not found" });
}
// Check if host group has hosts
if (existingGroup._count.hosts > 0) {
return res.status(400).json({
error: 'Cannot delete host group that contains hosts. Please move or remove hosts first.'
});
}
// If host group has memberships, remove them first
if (existingGroup._count.host_group_memberships > 0) {
await prisma.host_group_memberships.deleteMany({
where: { host_group_id: id },
});
}
await prisma.hostGroup.delete({
where: { id }
});
await prisma.host_groups.delete({
where: { id },
});
res.json({ message: 'Host group deleted successfully' });
} catch (error) {
console.error('Error deleting host group:', error);
res.status(500).json({ error: 'Failed to delete host group' });
}
});
res.json({ message: "Host group deleted successfully" });
} catch (error) {
console.error("Error deleting host group:", error);
res.status(500).json({ error: "Failed to delete host group" });
}
},
);
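// Editor's note: the membership cleanup and the group delete above run as two
// separate statements. A sketch of the same flow made atomic with a Prisma
// interactive transaction (assuming default $transaction semantics):
//
// await prisma.$transaction(async (tx) => {
// await tx.host_group_memberships.deleteMany({ where: { host_group_id: id } });
// await tx.host_groups.delete({ where: { id } });
// });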
// Get hosts in a specific group
router.get('/:id/hosts', authenticateToken, async (req, res) => {
try {
const { id } = req.params;
router.get("/:id/hosts", authenticateToken, async (req, res) => {
try {
const { id } = req.params;
const hosts = await prisma.host.findMany({
where: { hostGroupId: id },
select: {
id: true,
friendlyName: true,
ip: true,
osType: true,
osVersion: true,
architecture: true,
status: true,
lastUpdate: true,
createdAt: true
},
orderBy: {
friendlyName: 'asc'
}
});
const hosts = await prisma.hosts.findMany({
where: {
host_group_memberships: {
some: {
host_group_id: id,
},
},
},
select: {
id: true,
friendly_name: true,
ip: true,
os_type: true,
os_version: true,
architecture: true,
status: true,
last_update: true,
created_at: true,
},
orderBy: {
friendly_name: "asc",
},
});
res.json(hosts);
} catch (error) {
console.error('Error fetching hosts in group:', error);
res.status(500).json({ error: 'Failed to fetch hosts in group' });
}
res.json(hosts);
} catch (error) {
console.error("Error fetching hosts in group:", error);
res.status(500).json({ error: "Failed to fetch hosts in group" });
}
});
module.exports = router;
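With the move from a single group column on hosts to host_group_memberships, attaching a host to a group becomes a membership insert. A sketch; the explicit id mirrors the randomUUID() convention used for host_groups above, and the exact column list is an assumption drawn from the fields this file references:
const { randomUUID } = require("node:crypto");
async function addHostToGroup(prisma, hostId, hostGroupId) {
// One row per (host, group) pair; multiple rows give a host multiple groups
return prisma.host_group_memberships.create({
data: {
id: randomUUID(),
host_id: hostId,
host_group_id: hostGroupId,
},
});
}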

File diff suppressed because it is too large


@@ -1,214 +1,373 @@
const express = require('express');
const { PrismaClient } = require('@prisma/client');
const { body, validationResult } = require('express-validator');
const express = require("express");
const { getPrismaClient } = require("../config/prisma");
const router = express.Router();
const prisma = new PrismaClient();
const prisma = getPrismaClient();
// Get all packages with their update status
router.get('/', async (req, res) => {
try {
const {
page = 1,
limit = 50,
search = '',
category = '',
needsUpdate = '',
isSecurityUpdate = ''
} = req.query;
router.get("/", async (req, res) => {
try {
const {
page = 1,
limit = 50,
search = "",
category = "",
needsUpdate = "",
isSecurityUpdate = "",
host = "",
} = req.query;
const skip = (parseInt(page) - 1) * parseInt(limit);
const take = parseInt(limit);
const skip = (parseInt(page, 10) - 1) * parseInt(limit, 10);
const take = parseInt(limit, 10);
// Build where clause
const where = {
AND: [
// Search filter
search ? {
OR: [
{ name: { contains: search, mode: 'insensitive' } },
{ description: { contains: search, mode: 'insensitive' } }
]
} : {},
// Category filter
category ? { category: { equals: category } } : {},
// Update status filters
needsUpdate ? {
hostPackages: {
some: {
needsUpdate: needsUpdate === 'true'
}
}
} : {},
isSecurityUpdate ? {
hostPackages: {
some: {
isSecurityUpdate: isSecurityUpdate === 'true'
}
}
} : {}
]
};
// Build where clause
const where = {
AND: [
// Search filter
search
? {
OR: [
{ name: { contains: search, mode: "insensitive" } },
{ description: { contains: search, mode: "insensitive" } },
],
}
: {},
// Category filter
category ? { category: { equals: category } } : {},
// Host filter - only return packages installed on the specified host
// Combined with update status filters if both are present
host
? {
host_packages: {
some: {
host_id: host,
// If needsUpdate or isSecurityUpdate filters are present, apply them here
...(needsUpdate
? { needs_update: needsUpdate === "true" }
: {}),
...(isSecurityUpdate
? { is_security_update: isSecurityUpdate === "true" }
: {}),
},
},
}
: {},
// Update status filters (only applied if no host filter)
// If host filter is present, these are already applied above
!host && needsUpdate
? {
host_packages: {
some: {
needs_update: needsUpdate === "true",
},
},
}
: {},
!host && isSecurityUpdate
? {
host_packages: {
some: {
is_security_update: isSecurityUpdate === "true",
},
},
}
: {},
],
};
// Get packages with counts
const [packages, totalCount] = await Promise.all([
prisma.package.findMany({
where,
select: {
id: true,
name: true,
description: true,
category: true,
latestVersion: true,
createdAt: true,
_count: {
hostPackages: true
}
},
skip,
take,
orderBy: {
name: 'asc'
}
}),
prisma.package.count({ where })
]);
// Get packages with counts
const [packages, totalCount] = await Promise.all([
prisma.packages.findMany({
where,
select: {
id: true,
name: true,
description: true,
category: true,
latest_version: true,
created_at: true,
_count: {
select: {
host_packages: true,
},
},
},
skip,
take,
orderBy: {
name: "asc",
},
}),
prisma.packages.count({ where }),
]);
// Get additional stats for each package
const packagesWithStats = await Promise.all(
packages.map(async (pkg) => {
const [updatesCount, securityCount, affectedHosts] = await Promise.all([
prisma.hostPackage.count({
where: {
packageId: pkg.id,
needsUpdate: true
}
}),
prisma.hostPackage.count({
where: {
packageId: pkg.id,
needsUpdate: true,
isSecurityUpdate: true
}
}),
prisma.hostPackage.findMany({
where: {
packageId: pkg.id,
needsUpdate: true
},
select: {
host: {
select: {
id: true,
friendlyName: true,
hostname: true,
osType: true
}
}
},
take: 10 // Limit to first 10 for performance
})
]);
// Get additional stats for each package
const packagesWithStats = await Promise.all(
packages.map(async (pkg) => {
// Build base where clause for this package
const baseWhere = { package_id: pkg.id };
return {
...pkg,
stats: {
totalInstalls: pkg._count.hostPackages,
updatesNeeded: updatesCount,
securityUpdates: securityCount,
affectedHosts: affectedHosts.map(hp => hp.host)
}
};
})
);
// If host filter is specified, add host filter to all queries
const hostWhere = host ? { ...baseWhere, host_id: host } : baseWhere;
res.json({
packages: packagesWithStats,
pagination: {
page: parseInt(page),
limit: parseInt(limit),
total: totalCount,
pages: Math.ceil(totalCount / parseInt(limit))
}
});
} catch (error) {
console.error('Error fetching packages:', error);
res.status(500).json({ error: 'Failed to fetch packages' });
}
const [updatesCount, securityCount, packageHosts] = await Promise.all([
prisma.host_packages.count({
where: {
...hostWhere,
needs_update: true,
},
}),
prisma.host_packages.count({
where: {
...hostWhere,
needs_update: true,
is_security_update: true,
},
}),
prisma.host_packages.findMany({
where: {
...hostWhere,
// If host filter is specified, include all packages for that host
// Otherwise, only include packages that need updates
...(host ? {} : { needs_update: true }),
},
select: {
hosts: {
select: {
id: true,
friendly_name: true,
hostname: true,
os_type: true,
},
},
current_version: true,
available_version: true,
needs_update: true,
is_security_update: true,
},
take: 10, // Limit to first 10 for performance
}),
]);
return {
...pkg,
packageHostsCount: pkg._count.host_packages,
packageHosts: packageHosts.map((hp) => ({
hostId: hp.hosts.id,
friendlyName: hp.hosts.friendly_name,
osType: hp.hosts.os_type,
currentVersion: hp.current_version,
availableVersion: hp.available_version,
needsUpdate: hp.needs_update,
isSecurityUpdate: hp.is_security_update,
})),
stats: {
totalInstalls: pkg._count.host_packages,
updatesNeeded: updatesCount,
securityUpdates: securityCount,
},
};
}),
);
res.json({
packages: packagesWithStats,
pagination: {
page: parseInt(page, 10),
limit: parseInt(limit, 10),
total: totalCount,
pages: Math.ceil(totalCount / parseInt(limit, 10)),
},
});
} catch (error) {
console.error("Error fetching packages:", error);
res.status(500).json({ error: "Failed to fetch packages" });
}
});
// Get package details by ID
router.get('/:packageId', async (req, res) => {
try {
const { packageId } = req.params;
router.get("/:packageId", async (req, res) => {
try {
const { packageId } = req.params;
const packageData = await prisma.package.findUnique({
where: { id: packageId },
include: {
hostPackages: {
include: {
host: {
select: {
id: true,
hostname: true,
ip: true,
osType: true,
osVersion: true,
lastUpdate: true
}
}
},
orderBy: {
needsUpdate: 'desc'
}
}
}
});
const packageData = await prisma.packages.findUnique({
where: { id: packageId },
include: {
host_packages: {
include: {
hosts: {
select: {
id: true,
hostname: true,
ip: true,
os_type: true,
os_version: true,
last_update: true,
},
},
},
orderBy: {
needs_update: "desc",
},
},
},
});
if (!packageData) {
return res.status(404).json({ error: 'Package not found' });
}
if (!packageData) {
return res.status(404).json({ error: "Package not found" });
}
// Calculate statistics
const stats = {
totalInstalls: packageData.hostPackages.length,
updatesNeeded: packageData.hostPackages.filter(hp => hp.needsUpdate).length,
securityUpdates: packageData.hostPackages.filter(hp => hp.needsUpdate && hp.isSecurityUpdate).length,
upToDate: packageData.hostPackages.filter(hp => !hp.needsUpdate).length
};
// Calculate statistics
const stats = {
totalInstalls: packageData.host_packages.length,
updatesNeeded: packageData.host_packages.filter((hp) => hp.needs_update)
.length,
securityUpdates: packageData.host_packages.filter(
(hp) => hp.needs_update && hp.is_security_update,
).length,
upToDate: packageData.host_packages.filter((hp) => !hp.needs_update)
.length,
};
// Group by version
const versionDistribution = packageData.hostPackages.reduce((acc, hp) => {
const version = hp.currentVersion;
acc[version] = (acc[version] || 0) + 1;
return acc;
}, {});
// Group by version
const versionDistribution = packageData.host_packages.reduce((acc, hp) => {
const version = hp.current_version;
acc[version] = (acc[version] || 0) + 1;
return acc;
}, {});
// Group by OS type
const osDistribution = packageData.hostPackages.reduce((acc, hp) => {
const osType = hp.host.osType;
acc[osType] = (acc[osType] || 0) + 1;
return acc;
}, {});
// Group by OS type
const osDistribution = packageData.host_packages.reduce((acc, hp) => {
const osType = hp.hosts.os_type;
acc[osType] = (acc[osType] || 0) + 1;
return acc;
}, {});
res.json({
...packageData,
stats,
distributions: {
versions: Object.entries(versionDistribution).map(([version, count]) => ({
version,
count
})),
osTypes: Object.entries(osDistribution).map(([osType, count]) => ({
osType,
count
}))
}
});
} catch (error) {
console.error('Error fetching package details:', error);
res.status(500).json({ error: 'Failed to fetch package details' });
}
res.json({
...packageData,
stats,
distributions: {
versions: Object.entries(versionDistribution).map(
([version, count]) => ({
version,
count,
}),
),
osTypes: Object.entries(osDistribution).map(([osType, count]) => ({
osType,
count,
})),
},
});
} catch (error) {
console.error("Error fetching package details:", error);
res.status(500).json({ error: "Failed to fetch package details" });
}
});
module.exports = router;
// Get hosts where a package is installed
router.get("/:packageId/hosts", async (req, res) => {
try {
const { packageId } = req.params;
const {
page = 1,
limit = 25,
search = "",
sortBy = "friendly_name",
sortOrder = "asc",
} = req.query;
const offset = (parseInt(page, 10) - 1) * parseInt(limit, 10);
// Build search conditions
const searchConditions = search
? {
OR: [
{
hosts: {
friendly_name: { contains: search, mode: "insensitive" },
},
},
{ hosts: { hostname: { contains: search, mode: "insensitive" } } },
{ current_version: { contains: search, mode: "insensitive" } },
{ available_version: { contains: search, mode: "insensitive" } },
],
}
: {};
// Build sort conditions
const orderBy = {};
if (
sortBy === "friendly_name" ||
sortBy === "hostname" ||
sortBy === "os_type"
) {
orderBy.hosts = { [sortBy]: sortOrder };
} else {
// needs_update and any other host_packages column sorts directly
orderBy[sortBy] = sortOrder;
}
// Get total count
const totalCount = await prisma.host_packages.count({
where: {
package_id: packageId,
...searchConditions,
},
});
// Get paginated results
const hostPackages = await prisma.host_packages.findMany({
where: {
package_id: packageId,
...searchConditions,
},
include: {
hosts: {
select: {
id: true,
friendly_name: true,
hostname: true,
os_type: true,
os_version: true,
last_update: true,
},
},
},
orderBy,
skip: offset,
take: parseInt(limit, 10),
});
// Transform the data for the frontend
const hosts = hostPackages.map((hp) => ({
hostId: hp.hosts.id,
friendlyName: hp.hosts.friendly_name,
hostname: hp.hosts.hostname,
osType: hp.hosts.os_type,
osVersion: hp.hosts.os_version,
lastUpdate: hp.hosts.last_update,
currentVersion: hp.current_version,
availableVersion: hp.available_version,
needsUpdate: hp.needs_update,
isSecurityUpdate: hp.is_security_update,
lastChecked: hp.last_checked,
}));
res.json({
hosts,
pagination: {
page: parseInt(page, 10),
limit: parseInt(limit, 10),
total: totalCount,
pages: Math.ceil(totalCount / parseInt(limit, 10)),
},
});
} catch (error) {
console.error("Error fetching package hosts:", error);
res.status(500).json({ error: "Failed to fetch package hosts" });
}
});
module.exports = router;
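A caller-side sketch of the new host-scoped listing: when host is combined with needsUpdate, both conditions land in a single host_packages.some clause, so the result is only packages on that host that still need updating. The mount path is an assumption:
async function outdatedPackagesForHost(baseUrl, hostId) {
const qs = new URLSearchParams({ host: hostId, needsUpdate: "true" });
const res = await fetch(`${baseUrl}/packages?${qs}`);
if (!res.ok) throw new Error(`Package query failed: ${res.status}`);
const { packages, pagination } = await res.json();
return { total: pagination.total, names: packages.map((p) => p.name) };
}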


@@ -1,173 +1,203 @@
const express = require('express');
const { PrismaClient } = require('@prisma/client');
const { authenticateToken, requireAdmin } = require('../middleware/auth');
const { requireManageSettings } = require('../middleware/permissions');
const express = require("express");
const { getPrismaClient } = require("../config/prisma");
const { authenticateToken } = require("../middleware/auth");
const {
requireManageSettings,
requireManageUsers,
} = require("../middleware/permissions");
const router = express.Router();
const prisma = new PrismaClient();
const prisma = getPrismaClient();
// Get all role permissions
router.get('/roles', authenticateToken, requireManageSettings, async (req, res) => {
try {
const permissions = await prisma.rolePermissions.findMany({
orderBy: {
role: 'asc'
}
});
// Get all role permissions (allow users who can manage users to view roles)
router.get(
"/roles",
authenticateToken,
requireManageUsers,
async (_req, res) => {
try {
const permissions = await prisma.role_permissions.findMany({
orderBy: {
role: "asc",
},
});
res.json(permissions);
} catch (error) {
console.error('Get role permissions error:', error);
res.status(500).json({ error: 'Failed to fetch role permissions' });
}
});
res.json(permissions);
} catch (error) {
console.error("Get role permissions error:", error);
res.status(500).json({ error: "Failed to fetch role permissions" });
}
},
);
// Get permissions for a specific role
router.get('/roles/:role', authenticateToken, requireManageSettings, async (req, res) => {
try {
const { role } = req.params;
const permissions = await prisma.rolePermissions.findUnique({
where: { role }
});
router.get(
"/roles/:role",
authenticateToken,
requireManageSettings,
async (req, res) => {
try {
const { role } = req.params;
if (!permissions) {
return res.status(404).json({ error: 'Role not found' });
}
const permissions = await prisma.role_permissions.findUnique({
where: { role },
});
res.json(permissions);
} catch (error) {
console.error('Get role permission error:', error);
res.status(500).json({ error: 'Failed to fetch role permission' });
}
});
if (!permissions) {
return res.status(404).json({ error: "Role not found" });
}
res.json(permissions);
} catch (error) {
console.error("Get role permission error:", error);
res.status(500).json({ error: "Failed to fetch role permission" });
}
},
);
// Create or update role permissions
router.put('/roles/:role', authenticateToken, requireManageSettings, async (req, res) => {
try {
const { role } = req.params;
const {
canViewDashboard,
canViewHosts,
canManageHosts,
canViewPackages,
canManagePackages,
canViewUsers,
canManageUsers,
canViewReports,
canExportData,
canManageSettings
} = req.body;
router.put(
"/roles/:role",
authenticateToken,
requireManageSettings,
async (req, res) => {
try {
const { role } = req.params;
const {
can_view_dashboard,
can_view_hosts,
can_manage_hosts,
can_view_packages,
can_manage_packages,
can_view_users,
can_manage_users,
can_view_reports,
can_export_data,
can_manage_settings,
} = req.body;
// Prevent modifying admin role permissions (admin should always have full access)
if (role === 'admin') {
return res.status(400).json({ error: 'Cannot modify admin role permissions' });
}
// Prevent modifying admin and user role permissions (built-in roles)
if (role === "admin" || role === "user") {
return res.status(400).json({
error: `Cannot modify ${role} role permissions - this is a built-in role`,
});
}
const permissions = await prisma.rolePermissions.upsert({
where: { role },
update: {
canViewDashboard,
canViewHosts,
canManageHosts,
canViewPackages,
canManagePackages,
canViewUsers,
canManageUsers,
canViewReports,
canExportData,
canManageSettings
},
create: {
role,
canViewDashboard,
canViewHosts,
canManageHosts,
canViewPackages,
canManagePackages,
canViewUsers,
canManageUsers,
canViewReports,
canExportData,
canManageSettings
}
});
const permissions = await prisma.role_permissions.upsert({
where: { role },
update: {
can_view_dashboard: can_view_dashboard,
can_view_hosts: can_view_hosts,
can_manage_hosts: can_manage_hosts,
can_view_packages: can_view_packages,
can_manage_packages: can_manage_packages,
can_view_users: can_view_users,
can_manage_users: can_manage_users,
can_view_reports: can_view_reports,
can_export_data: can_export_data,
can_manage_settings: can_manage_settings,
updated_at: new Date(),
},
create: {
id: require("uuid").v4(),
role,
can_view_dashboard: can_view_dashboard,
can_view_hosts: can_view_hosts,
can_manage_hosts: can_manage_hosts,
can_view_packages: can_view_packages,
can_manage_packages: can_manage_packages,
can_view_users: can_view_users,
can_manage_users: can_manage_users,
can_view_reports: can_view_reports,
can_export_data: can_export_data,
can_manage_settings: can_manage_settings,
updated_at: new Date(),
},
});
res.json({
message: 'Role permissions updated successfully',
permissions
});
} catch (error) {
console.error('Update role permissions error:', error);
res.status(500).json({ error: 'Failed to update role permissions' });
}
});
res.json({
message: "Role permissions updated successfully",
permissions,
});
} catch (error) {
console.error("Update role permissions error:", error);
res.status(500).json({ error: "Failed to update role permissions" });
}
},
);
// Delete a role (and its permissions)
router.delete('/roles/:role', authenticateToken, requireManageSettings, async (req, res) => {
try {
const { role } = req.params;
router.delete(
"/roles/:role",
authenticateToken,
requireManageSettings,
async (req, res) => {
try {
const { role } = req.params;
// Prevent deleting admin role
if (role === 'admin') {
return res.status(400).json({ error: 'Cannot delete admin role' });
}
// Prevent deleting admin and user roles (built-in roles)
if (role === "admin" || role === "user") {
return res.status(400).json({
error: `Cannot delete ${role} role - this is a built-in role`,
});
}
// Check if any users are using this role
const usersWithRole = await prisma.user.count({
where: { role }
});
// Check if any users are using this role
const usersWithRole = await prisma.users.count({
where: { role },
});
if (usersWithRole > 0) {
return res.status(400).json({
error: `Cannot delete role "${role}" because ${usersWithRole} user(s) are currently using it`
});
}
if (usersWithRole > 0) {
return res.status(400).json({
error: `Cannot delete role "${role}" because ${usersWithRole} user(s) are currently using it`,
});
}
await prisma.rolePermissions.delete({
where: { role }
});
await prisma.role_permissions.delete({
where: { role },
});
res.json({
message: `Role "${role}" deleted successfully`
});
} catch (error) {
console.error('Delete role error:', error);
res.status(500).json({ error: 'Failed to delete role' });
}
});
res.json({
message: `Role "${role}" deleted successfully`,
});
} catch (error) {
console.error("Delete role error:", error);
res.status(500).json({ error: "Failed to delete role" });
}
},
);
// Get user's permissions based on their role
router.get('/user-permissions', authenticateToken, async (req, res) => {
try {
const userRole = req.user.role;
const permissions = await prisma.rolePermissions.findUnique({
where: { role: userRole }
});
router.get("/user-permissions", authenticateToken, async (req, res) => {
try {
const userRole = req.user.role;
if (!permissions) {
// If no specific permissions found, return default admin permissions
return res.json({
role: userRole,
canViewDashboard: true,
canViewHosts: true,
canManageHosts: true,
canViewPackages: true,
canManagePackages: true,
canViewUsers: true,
canManageUsers: true,
canViewReports: true,
canExportData: true,
canManageSettings: true,
});
}
const permissions = await prisma.role_permissions.findUnique({
where: { role: userRole },
});
res.json(permissions);
} catch (error) {
console.error('Get user permissions error:', error);
res.status(500).json({ error: 'Failed to fetch user permissions' });
}
if (!permissions) {
// If no specific permissions found, return default admin permissions
return res.json({
role: userRole,
can_view_dashboard: true,
can_view_hosts: true,
can_manage_hosts: true,
can_view_packages: true,
can_manage_packages: true,
can_view_users: true,
can_manage_users: true,
can_view_reports: true,
can_export_data: true,
can_manage_settings: true,
});
}
res.json(permissions);
} catch (error) {
console.error("Get user permissions error:", error);
res.status(500).json({ error: "Failed to fetch user permissions" });
}
});
module.exports = router;
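For context, a sketch of what a gate such as requireManageUsers plausibly looks like given the role_permissions lookups in this file; the real middleware in ../middleware/permissions may differ:
function requirePermission(flag) {
return async (req, res, next) => {
try {
const perms = await prisma.role_permissions.findUnique({
where: { role: req.user.role },
});
// Mirror this file's fallback: roles without a row default to allowed
if (perms && !perms[flag]) {
return res.status(403).json({ error: "Insufficient permissions" });
}
next();
} catch (error) {
console.error("Permission check error:", error);
res.status(500).json({ error: "Permission check failed" });
}
};
}
// e.g. requirePermission("can_manage_users") guarding the /roles listing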


@@ -1,302 +1,418 @@
const express = require('express');
const { body, validationResult } = require('express-validator');
const { PrismaClient } = require('@prisma/client');
const { authenticateToken } = require('../middleware/auth');
const { requireViewHosts, requireManageHosts } = require('../middleware/permissions');
const express = require("express");
const { body, validationResult } = require("express-validator");
const { getPrismaClient } = require("../config/prisma");
const { authenticateToken } = require("../middleware/auth");
const {
requireViewHosts,
requireManageHosts,
} = require("../middleware/permissions");
const router = express.Router();
const prisma = new PrismaClient();
const prisma = getPrismaClient();
// Get all repositories with host count
router.get('/', authenticateToken, requireViewHosts, async (req, res) => {
try {
const repositories = await prisma.repository.findMany({
include: {
hostRepositories: {
include: {
host: {
select: {
id: true,
friendlyName: true,
status: true
}
}
}
},
_count: {
select: {
hostRepositories: true
}
}
},
orderBy: [
{ name: 'asc' },
{ url: 'asc' }
]
});
router.get("/", authenticateToken, requireViewHosts, async (_req, res) => {
try {
const repositories = await prisma.repositories.findMany({
include: {
host_repositories: {
include: {
hosts: {
select: {
id: true,
friendly_name: true,
status: true,
},
},
},
},
_count: {
select: {
host_repositories: true,
},
},
},
orderBy: [{ name: "asc" }, { url: "asc" }],
});
// Transform data to include host counts and status
const transformedRepos = repositories.map(repo => ({
...repo,
hostCount: repo._count.hostRepositories,
enabledHostCount: repo.hostRepositories.filter(hr => hr.isEnabled).length,
activeHostCount: repo.hostRepositories.filter(hr => hr.host.status === 'active').length,
hosts: repo.hostRepositories.map(hr => ({
id: hr.host.id,
friendlyName: hr.host.friendlyName,
status: hr.host.status,
isEnabled: hr.isEnabled,
lastChecked: hr.lastChecked
}))
}));
// Transform data to include host counts and status
const transformedRepos = repositories.map((repo) => ({
...repo,
hostCount: repo._count.host_repositories,
enabledHostCount: repo.host_repositories.filter((hr) => hr.is_enabled)
.length,
activeHostCount: repo.host_repositories.filter(
(hr) => hr.hosts.status === "active",
).length,
hosts: repo.host_repositories.map((hr) => ({
id: hr.hosts.id,
friendlyName: hr.hosts.friendly_name,
status: hr.hosts.status,
isEnabled: hr.is_enabled,
lastChecked: hr.last_checked,
})),
}));
res.json(transformedRepos);
} catch (error) {
console.error('Repository list error:', error);
res.status(500).json({ error: 'Failed to fetch repositories' });
}
res.json(transformedRepos);
} catch (error) {
console.error("Repository list error:", error);
res.status(500).json({ error: "Failed to fetch repositories" });
}
});
// Get repositories for a specific host
router.get('/host/:hostId', authenticateToken, requireViewHosts, async (req, res) => {
try {
const { hostId } = req.params;
router.get(
"/host/:hostId",
authenticateToken,
requireViewHosts,
async (req, res) => {
try {
const { hostId } = req.params;
const hostRepositories = await prisma.hostRepository.findMany({
where: { hostId },
include: {
repository: true,
host: {
select: {
id: true,
friendlyName: true
}
}
},
orderBy: {
repository: {
name: 'asc'
}
}
});
const hostRepositories = await prisma.host_repositories.findMany({
where: { host_id: hostId },
include: {
repositories: true,
hosts: {
select: {
id: true,
friendly_name: true,
},
},
},
orderBy: {
repositories: {
name: "asc",
},
},
});
res.json(hostRepositories);
} catch (error) {
console.error('Host repositories error:', error);
res.status(500).json({ error: 'Failed to fetch host repositories' });
}
});
res.json(hostRepositories);
} catch (error) {
console.error("Host repositories error:", error);
res.status(500).json({ error: "Failed to fetch host repositories" });
}
},
);
// Get repository details with all hosts
router.get('/:repositoryId', authenticateToken, requireViewHosts, async (req, res) => {
try {
const { repositoryId } = req.params;
router.get(
"/:repositoryId",
authenticateToken,
requireViewHosts,
async (req, res) => {
try {
const { repositoryId } = req.params;
const repository = await prisma.repository.findUnique({
where: { id: repositoryId },
include: {
hostRepositories: {
include: {
host: {
select: {
id: true,
friendlyName: true,
hostname: true,
ip: true,
osType: true,
osVersion: true,
status: true,
lastUpdate: true
}
}
},
orderBy: {
host: {
friendlyName: 'asc'
}
}
}
}
});
const repository = await prisma.repositories.findUnique({
where: { id: repositoryId },
include: {
host_repositories: {
include: {
hosts: {
select: {
id: true,
friendly_name: true,
hostname: true,
ip: true,
os_type: true,
os_version: true,
status: true,
last_update: true,
},
},
},
orderBy: {
hosts: {
friendly_name: "asc",
},
},
},
},
});
if (!repository) {
return res.status(404).json({ error: 'Repository not found' });
}
if (!repository) {
return res.status(404).json({ error: "Repository not found" });
}
res.json(repository);
} catch (error) {
console.error('Repository detail error:', error);
res.status(500).json({ error: 'Failed to fetch repository details' });
}
});
res.json(repository);
} catch (error) {
console.error("Repository detail error:", error);
res.status(500).json({ error: "Failed to fetch repository details" });
}
},
);
// Update repository information (admin only)
router.put('/:repositoryId', authenticateToken, requireManageHosts, [
body('name').optional().isLength({ min: 1 }).withMessage('Name is required'),
body('description').optional(),
body('isActive').optional().isBoolean().withMessage('isActive must be a boolean'),
body('priority').optional().isInt({ min: 0 }).withMessage('Priority must be a positive integer')
], async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
router.put(
"/:repositoryId",
authenticateToken,
requireManageHosts,
[
body("name")
.optional()
.isLength({ min: 1 })
.withMessage("Name is required"),
body("description").optional(),
body("isActive")
.optional()
.isBoolean()
.withMessage("isActive must be a boolean"),
body("priority")
.optional()
.isInt({ min: 0 })
.withMessage("Priority must be a positive integer"),
],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const { repositoryId } = req.params;
const { name, description, isActive, priority } = req.body;
const { repositoryId } = req.params;
const { name, description, isActive, priority } = req.body;
const repository = await prisma.repository.update({
where: { id: repositoryId },
data: {
...(name && { name }),
...(description !== undefined && { description }),
...(isActive !== undefined && { isActive }),
...(priority !== undefined && { priority })
},
include: {
_count: {
select: {
hostRepositories: true
}
}
}
});
const repository = await prisma.repositories.update({
where: { id: repositoryId },
data: {
...(name && { name }),
...(description !== undefined && { description }),
...(isActive !== undefined && { is_active: isActive }),
...(priority !== undefined && { priority }),
},
include: {
_count: {
select: {
host_repositories: true,
},
},
},
});
res.json(repository);
} catch (error) {
console.error('Repository update error:', error);
res.status(500).json({ error: 'Failed to update repository' });
}
});
res.json(repository);
} catch (error) {
console.error("Repository update error:", error);
res.status(500).json({ error: "Failed to update repository" });
}
},
);
// Toggle repository status for a specific host
router.patch('/host/:hostId/repository/:repositoryId', authenticateToken, requireManageHosts, [
body('isEnabled').isBoolean().withMessage('isEnabled must be a boolean')
], async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
router.patch(
"/host/:hostId/repository/:repositoryId",
authenticateToken,
requireManageHosts,
[body("isEnabled").isBoolean().withMessage("isEnabled must be a boolean")],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const { hostId, repositoryId } = req.params;
const { isEnabled } = req.body;
const { hostId, repositoryId } = req.params;
const { isEnabled } = req.body;
const hostRepository = await prisma.hostRepository.update({
where: {
hostId_repositoryId: {
hostId,
repositoryId
}
},
data: {
isEnabled,
lastChecked: new Date()
},
include: {
repository: true,
host: {
select: {
friendlyName: true
}
}
}
});
const hostRepository = await prisma.host_repositories.update({
where: {
host_id_repository_id: {
host_id: hostId,
repository_id: repositoryId,
},
},
data: {
is_enabled: isEnabled,
last_checked: new Date(),
},
include: {
repositories: true,
hosts: {
select: {
friendly_name: true,
},
},
},
});
res.json({
message: `Repository ${isEnabled ? 'enabled' : 'disabled'} for host ${hostRepository.host.friendlyName}`,
hostRepository
});
} catch (error) {
console.error('Host repository toggle error:', error);
res.status(500).json({ error: 'Failed to toggle repository status' });
}
});
res.json({
message: `Repository ${isEnabled ? "enabled" : "disabled"} for host ${hostRepository.hosts.friendly_name}`,
hostRepository,
});
} catch (error) {
console.error("Host repository toggle error:", error);
res.status(500).json({ error: "Failed to toggle repository status" });
}
},
);
// Get repository statistics
router.get('/stats/summary', authenticateToken, requireViewHosts, async (req, res) => {
try {
const stats = await prisma.repository.aggregate({
_count: true
});
router.get(
"/stats/summary",
authenticateToken,
requireViewHosts,
async (_req, res) => {
try {
const stats = await prisma.repositories.aggregate({
_count: true,
});
const hostRepoStats = await prisma.hostRepository.aggregate({
_count: {
isEnabled: true
},
where: {
isEnabled: true
}
});
const hostRepoStats = await prisma.host_repositories.aggregate({
_count: {
is_enabled: true,
},
where: {
is_enabled: true,
},
});
const secureRepos = await prisma.repository.count({
where: { isSecure: true }
});
const secureRepos = await prisma.repositories.count({
where: { is_secure: true },
});
const activeRepos = await prisma.repository.count({
where: { isActive: true }
});
const activeRepos = await prisma.repositories.count({
where: { is_active: true },
});
res.json({
totalRepositories: stats._count,
activeRepositories: activeRepos,
secureRepositories: secureRepos,
enabledHostRepositories: hostRepoStats._count.isEnabled,
securityPercentage: stats._count > 0 ? Math.round((secureRepos / stats._count) * 100) : 0
});
} catch (error) {
console.error('Repository stats error:', error);
res.status(500).json({ error: 'Failed to fetch repository statistics' });
}
});
res.json({
totalRepositories: stats._count,
activeRepositories: activeRepos,
secureRepositories: secureRepos,
enabledHostRepositories: hostRepoStats._count.is_enabled,
securityPercentage:
stats._count > 0 ? Math.round((secureRepos / stats._count) * 100) : 0,
});
} catch (error) {
console.error("Repository stats error:", error);
res.status(500).json({ error: "Failed to fetch repository statistics" });
}
},
);
// Delete a specific repository (admin only)
router.delete(
"/:repositoryId",
authenticateToken,
requireManageHosts,
async (req, res) => {
try {
const { repositoryId } = req.params;
// Check if repository exists first
const existingRepository = await prisma.repositories.findUnique({
where: { id: repositoryId },
select: {
id: true,
name: true,
url: true,
_count: {
select: {
host_repositories: true,
},
},
},
});
if (!existingRepository) {
return res.status(404).json({
error: "Repository not found",
details: "The repository may have been deleted or does not exist",
});
}
// Delete repository and all related data (cascade will handle host_repositories)
await prisma.repositories.delete({
where: { id: repositoryId },
});
res.json({
message: "Repository deleted successfully",
deletedRepository: {
id: existingRepository.id,
name: existingRepository.name,
url: existingRepository.url,
hostCount: existingRepository._count.host_repositories,
},
});
} catch (error) {
console.error("Repository deletion error:", error);
// Handle specific Prisma errors
if (error.code === "P2025") {
return res.status(404).json({
error: "Repository not found",
details: "The repository may have been deleted or does not exist",
});
}
if (error.code === "P2003") {
return res.status(400).json({
error: "Cannot delete repository due to foreign key constraints",
details: "The repository has related data that prevents deletion",
});
}
res.status(500).json({
error: "Failed to delete repository",
details: error.message || "An unexpected error occurred",
});
}
},
);
// Cleanup orphaned repositories (admin only)
router.delete('/cleanup/orphaned', authenticateToken, requireManageHosts, async (req, res) => {
try {
console.log('Cleaning up orphaned repositories...');
// Find repositories with no host relationships
const orphanedRepos = await prisma.repository.findMany({
where: {
hostRepositories: {
none: {}
}
}
});
router.delete(
"/cleanup/orphaned",
authenticateToken,
requireManageHosts,
async (_req, res) => {
try {
console.log("Cleaning up orphaned repositories...");
if (orphanedRepos.length === 0) {
return res.json({
message: 'No orphaned repositories found',
deletedCount: 0,
deletedRepositories: []
});
}
// Find repositories with no host relationships
const orphanedRepos = await prisma.repositories.findMany({
where: {
host_repositories: {
none: {},
},
},
});
// Delete orphaned repositories
const deleteResult = await prisma.repository.deleteMany({
where: {
hostRepositories: {
none: {}
}
}
});
if (orphanedRepos.length === 0) {
return res.json({
message: "No orphaned repositories found",
deletedCount: 0,
deletedRepositories: [],
});
}
console.log(`Deleted ${deleteResult.count} orphaned repositories`);
// Delete orphaned repositories
const deleteResult = await prisma.repositories.deleteMany({
where: {
host_repositories: {
none: {},
},
},
});
res.json({
message: `Successfully deleted ${deleteResult.count} orphaned repositories`,
deletedCount: deleteResult.count,
deletedRepositories: orphanedRepos.map(repo => ({
id: repo.id,
name: repo.name,
url: repo.url
}))
});
} catch (error) {
console.error('Repository cleanup error:', error);
res.status(500).json({ error: 'Failed to cleanup orphaned repositories' });
}
});
console.log(`Deleted ${deleteResult.count} orphaned repositories`);
res.json({
message: `Successfully deleted ${deleteResult.count} orphaned repositories`,
deletedCount: deleteResult.count,
deletedRepositories: orphanedRepos.map((repo) => ({
id: repo.id,
name: repo.name,
url: repo.url,
})),
});
} catch (error) {
console.error("Repository cleanup error:", error);
res
.status(500)
.json({ error: "Failed to cleanup orphaned repositories" });
}
},
);
module.exports = router;
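A caller-side sketch for the per-host toggle route above; the mount path and Bearer auth are assumptions:
async function setRepoEnabledForHost(baseUrl, token, hostId, repositoryId, isEnabled) {
const res = await fetch(`${baseUrl}/host/${hostId}/repository/${repositoryId}`, {
method: "PATCH",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${token}`,
},
body: JSON.stringify({ isEnabled }),
});
if (!res.ok) throw new Error(`Toggle failed: ${res.status}`);
return res.json(); // { message, hostRepository }
}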


@@ -0,0 +1,249 @@
const express = require("express");
const router = express.Router();
const { getPrismaClient } = require("../config/prisma");
const { authenticateToken } = require("../middleware/auth");
const prisma = getPrismaClient();
/**
* Global search endpoint
* Searches across hosts, packages, repositories, and users
* Returns categorized results
*/
router.get("/", authenticateToken, async (req, res) => {
try {
const { q } = req.query;
if (!q || q.trim().length === 0) {
return res.json({
hosts: [],
packages: [],
repositories: [],
users: [],
});
}
const searchTerm = q.trim();
// Prepare results object
const results = {
hosts: [],
packages: [],
repositories: [],
users: [],
};
// Get user permissions from database
let userPermissions = null;
try {
userPermissions = await prisma.role_permissions.findUnique({
where: { role: req.user.role },
});
// If no specific permissions found, default to admin permissions
if (!userPermissions) {
console.warn(
`No permissions found for role: ${req.user.role}, defaulting to admin access`,
);
userPermissions = {
can_view_hosts: true,
can_view_packages: true,
can_view_users: true,
};
}
} catch (permError) {
console.error("Error fetching permissions:", permError);
// Default to restrictive permissions on error
userPermissions = {
can_view_hosts: false,
can_view_packages: false,
can_view_users: false,
};
}
// Search hosts if user has permission
if (userPermissions.can_view_hosts) {
try {
const hosts = await prisma.hosts.findMany({
where: {
OR: [
{ hostname: { contains: searchTerm, mode: "insensitive" } },
{ friendly_name: { contains: searchTerm, mode: "insensitive" } },
{ ip: { contains: searchTerm, mode: "insensitive" } },
{ machine_id: { contains: searchTerm, mode: "insensitive" } },
],
},
select: {
id: true,
machine_id: true,
hostname: true,
friendly_name: true,
ip: true,
os_type: true,
os_version: true,
status: true,
last_update: true,
},
take: 10, // Limit results
orderBy: {
last_update: "desc",
},
});
results.hosts = hosts.map((host) => ({
id: host.id,
hostname: host.hostname,
friendly_name: host.friendly_name,
ip: host.ip,
os_type: host.os_type,
os_version: host.os_version,
status: host.status,
last_update: host.last_update,
type: "host",
}));
} catch (error) {
console.error("Error searching hosts:", error);
}
}
// Search packages if user has permission
if (userPermissions.can_view_packages) {
try {
const packages = await prisma.packages.findMany({
where: {
name: { contains: searchTerm, mode: "insensitive" },
},
select: {
id: true,
name: true,
description: true,
category: true,
latest_version: true,
_count: {
select: {
host_packages: true,
},
},
},
take: 10,
orderBy: {
name: "asc",
},
});
results.packages = packages.map((pkg) => ({
id: pkg.id,
name: pkg.name,
description: pkg.description,
category: pkg.category,
latest_version: pkg.latest_version,
host_count: pkg._count.host_packages,
type: "package",
}));
} catch (error) {
console.error("Error searching packages:", error);
}
}
// Search repositories if user has permission (usually same as hosts)
if (userPermissions.can_view_hosts) {
try {
const repositories = await prisma.repositories.findMany({
where: {
OR: [
{ name: { contains: searchTerm, mode: "insensitive" } },
{ url: { contains: searchTerm, mode: "insensitive" } },
{ description: { contains: searchTerm, mode: "insensitive" } },
],
},
select: {
id: true,
name: true,
url: true,
distribution: true,
repo_type: true,
is_active: true,
description: true,
_count: {
select: {
host_repositories: true,
},
},
},
take: 10,
orderBy: {
name: "asc",
},
});
results.repositories = repositories.map((repo) => ({
id: repo.id,
name: repo.name,
url: repo.url,
distribution: repo.distribution,
repo_type: repo.repo_type,
is_active: repo.is_active,
description: repo.description,
host_count: repo._count.host_repositories,
type: "repository",
}));
} catch (error) {
console.error("Error searching repositories:", error);
}
}
// Search users if user has permission
if (userPermissions.can_view_users) {
try {
const users = await prisma.users.findMany({
where: {
OR: [
{ username: { contains: searchTerm, mode: "insensitive" } },
{ email: { contains: searchTerm, mode: "insensitive" } },
{ first_name: { contains: searchTerm, mode: "insensitive" } },
{ last_name: { contains: searchTerm, mode: "insensitive" } },
],
},
select: {
id: true,
username: true,
email: true,
first_name: true,
last_name: true,
role: true,
is_active: true,
last_login: true,
},
take: 10,
orderBy: {
username: "asc",
},
});
results.users = users.map((user) => ({
id: user.id,
username: user.username,
email: user.email,
first_name: user.first_name,
last_name: user.last_name,
role: user.role,
is_active: user.is_active,
last_login: user.last_login,
type: "user",
}));
} catch (error) {
console.error("Error searching users:", error);
}
}
res.json(results);
} catch (error) {
console.error("Global search error:", error);
res.status(500).json({
error: "Failed to perform search",
message: error.message,
});
}
});
module.exports = router;
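A caller-side sketch for the categorized search endpoint; the mount path and Bearer auth are assumptions:
async function globalSearch(searchUrl, token, q) {
const res = await fetch(`${searchUrl}?q=${encodeURIComponent(q)}`, {
headers: { Authorization: `Bearer ${token}` },
});
if (!res.ok) throw new Error(`Search failed: ${res.status}`);
// Each category is capped at 10 results server-side
return res.json(); // { hosts, packages, repositories, users }
}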


@@ -1,274 +1,471 @@
const express = require('express');
const { body, validationResult } = require('express-validator');
const { PrismaClient } = require('@prisma/client');
const { authenticateToken } = require('../middleware/auth');
const { requireManageSettings } = require('../middleware/permissions');
const express = require("express");
const { body, validationResult } = require("express-validator");
const { getPrismaClient } = require("../config/prisma");
const { authenticateToken } = require("../middleware/auth");
const { requireManageSettings } = require("../middleware/permissions");
const { getSettings, updateSettings } = require("../services/settingsService");
const router = express.Router();
const prisma = new PrismaClient();
const prisma = getPrismaClient();
// Function to trigger crontab updates on all hosts with auto-update enabled
async function triggerCrontabUpdates() {
try {
console.log('Triggering crontab updates on all hosts with auto-update enabled...');
// Get all hosts that have auto-update enabled
const hosts = await prisma.host.findMany({
where: {
autoUpdate: true,
status: 'active' // Only update active hosts
},
select: {
id: true,
friendlyName: true,
apiId: true,
apiKey: true
}
});
console.log(`Found ${hosts.length} hosts with auto-update enabled`);
// For each host, we'll send a special update command that triggers crontab update
// This is done by sending a ping with a special flag
for (const host of hosts) {
try {
console.log(`Triggering crontab update for host: ${host.friendlyName}`);
// We'll use the existing ping endpoint but add a special parameter
// The agent will detect this and run update-crontab command
const http = require('http');
const https = require('https');
const serverUrl = process.env.SERVER_URL || 'http://localhost:3001';
const url = new URL(`${serverUrl}/api/v1/hosts/ping`);
const isHttps = url.protocol === 'https:';
const client = isHttps ? https : http;
const postData = JSON.stringify({
triggerCrontabUpdate: true,
message: 'Update interval changed, please update your crontab'
});
const options = {
hostname: url.hostname,
port: url.port || (isHttps ? 443 : 80),
path: url.pathname,
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': Buffer.byteLength(postData),
'X-API-ID': host.apiId,
'X-API-KEY': host.apiKey
}
};
const req = client.request(options, (res) => {
if (res.statusCode === 200) {
console.log(`Successfully triggered crontab update for ${host.friendlyName}`);
} else {
console.error(`Failed to trigger crontab update for ${host.friendlyName}: ${res.statusCode}`);
}
});
req.on('error', (error) => {
console.error(`Error triggering crontab update for ${host.friendlyName}:`, error.message);
});
req.write(postData);
req.end();
} catch (error) {
console.error(`Error triggering crontab update for ${host.friendlyName}:`, error.message);
}
}
console.log('Crontab update trigger completed');
} catch (error) {
console.error('Error in triggerCrontabUpdates:', error);
}
// WebSocket broadcaster for agent policy updates (no longer used - queue-based delivery preferred)
// const { broadcastSettingsUpdate } = require("../services/agentWs");
const { queueManager, QUEUE_NAMES } = require("../services/automation");
// Helpers
function normalizeUpdateInterval(minutes) {
let m = parseInt(minutes, 10);
if (Number.isNaN(m)) return 60;
if (m < 5) m = 5;
if (m > 1440) m = 1440;
if (m < 60) {
// Clamp to 5-59, step 5
const snapped = Math.round(m / 5) * 5;
return Math.min(59, Math.max(5, snapped));
}
// Allowed hour-based presets
const allowed = [60, 120, 180, 360, 720, 1440];
let nearest = allowed[0];
let bestDiff = Math.abs(m - nearest);
for (const a of allowed) {
const d = Math.abs(m - a);
if (d < bestDiff) {
bestDiff = d;
nearest = a;
}
}
return nearest;
}
function buildCronExpression(minutes) {
const m = normalizeUpdateInterval(minutes);
if (m < 60) {
return `*/${m} * * * *`;
}
if (m === 60) {
// Hourly at current minute is chosen by agent; default 0 here
return `0 * * * *`;
}
const hours = Math.floor(m / 60);
// Every N hours at minute 0
return `0 */${hours} * * *`;
}
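// Worked examples (editor's illustration) for the two helpers above:
// normalizeUpdateInterval(3) -> 5 (clamped to the 5-minute floor)
// normalizeUpdateInterval(47) -> 45 (snapped to the 5-minute grid, capped at 59)
// normalizeUpdateInterval(200) -> 180 (nearest hour-based preset)
// buildCronExpression(45) -> "*/45 * * * *"
// buildCronExpression(60) -> "0 * * * *"
// buildCronExpression(180) -> "0 */3 * * *"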
// Get current settings
router.get('/', authenticateToken, requireManageSettings, async (req, res) => {
try {
let settings = await prisma.settings.findFirst();
// If no settings exist, create default settings
if (!settings) {
settings = await prisma.settings.create({
data: {
serverUrl: 'http://localhost:3001',
serverProtocol: 'http',
serverHost: 'localhost',
serverPort: 3001,
frontendUrl: 'http://localhost:3000',
updateInterval: 60,
autoUpdate: false
}
});
}
console.log('Returning settings:', settings);
res.json(settings);
} catch (error) {
console.error('Settings fetch error:', error);
res.status(500).json({ error: 'Failed to fetch settings' });
}
router.get("/", authenticateToken, requireManageSettings, async (_req, res) => {
try {
const settings = await getSettings();
if (process.env.ENABLE_LOGGING === "true") {
console.log("Returning settings:", settings);
}
res.json(settings);
} catch (error) {
console.error("Settings fetch error:", error);
res.status(500).json({ error: "Failed to fetch settings" });
}
});
// Update settings
router.put('/', authenticateToken, requireManageSettings, [
body('serverProtocol').isIn(['http', 'https']).withMessage('Protocol must be http or https'),
body('serverHost').isLength({ min: 1 }).withMessage('Server host is required'),
body('serverPort').isInt({ min: 1, max: 65535 }).withMessage('Port must be between 1 and 65535'),
body('frontendUrl').isLength({ min: 1 }).withMessage('Frontend URL is required'),
body('updateInterval').isInt({ min: 5, max: 1440 }).withMessage('Update interval must be between 5 and 1440 minutes'),
body('autoUpdate').isBoolean().withMessage('Auto update must be a boolean'),
body('githubRepoUrl').optional().isLength({ min: 1 }).withMessage('GitHub repo URL must be a non-empty string'),
body('repositoryType').optional().isIn(['public', 'private']).withMessage('Repository type must be public or private'),
body('sshKeyPath').optional().custom((value) => {
if (value && value.trim().length === 0) {
return true; // Allow empty string
}
if (value && value.trim().length < 1) {
throw new Error('SSH key path must be a non-empty string');
}
return true;
})
], async (req, res) => {
try {
console.log('Settings update request body:', req.body);
const errors = validationResult(req);
if (!errors.isEmpty()) {
console.log('Validation errors:', errors.array());
return res.status(400).json({ errors: errors.array() });
}
router.put(
"/",
authenticateToken,
requireManageSettings,
[
body("serverProtocol")
.optional()
.isIn(["http", "https"])
.withMessage("Protocol must be http or https"),
body("serverHost")
.optional()
.isLength({ min: 1 })
.withMessage("Server host is required"),
body("serverPort")
.optional()
.isInt({ min: 1, max: 65535 })
.withMessage("Port must be between 1 and 65535"),
body("updateInterval")
.optional()
.isInt({ min: 5, max: 1440 })
.withMessage("Update interval must be between 5 and 1440 minutes"),
body("autoUpdate")
.optional()
.isBoolean()
.withMessage("Auto update must be a boolean"),
body("ignoreSslSelfSigned")
.optional()
.isBoolean()
.withMessage("Ignore SSL self-signed must be a boolean"),
body("signupEnabled")
.optional()
.isBoolean()
.withMessage("Signup enabled must be a boolean"),
body("defaultUserRole")
.optional()
.isLength({ min: 1 })
.withMessage("Default user role must be a non-empty string"),
body("githubRepoUrl")
.optional()
.isLength({ min: 1 })
.withMessage("GitHub repo URL must be a non-empty string"),
body("repositoryType")
.optional()
.isIn(["public", "private"])
.withMessage("Repository type must be public or private"),
body("sshKeyPath")
.optional()
.custom((value) => {
if (value && value.trim().length === 0) {
return true; // Allow empty string
}
if (value && value.trim().length < 1) {
throw new Error("SSH key path must be a non-empty string");
}
return true;
}),
body("logoDark")
.optional()
.isLength({ min: 1 })
.withMessage("Logo dark path must be a non-empty string"),
body("logoLight")
.optional()
.isLength({ min: 1 })
.withMessage("Logo light path must be a non-empty string"),
body("favicon")
.optional()
.isLength({ min: 1 })
.withMessage("Favicon path must be a non-empty string"),
],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
console.log("Validation errors:", errors.array());
return res.status(400).json({ errors: errors.array() });
}
const { serverProtocol, serverHost, serverPort, frontendUrl, updateInterval, autoUpdate, githubRepoUrl, repositoryType, sshKeyPath } = req.body;
console.log('Extracted values:', { serverProtocol, serverHost, serverPort, frontendUrl, updateInterval, autoUpdate, githubRepoUrl, repositoryType, sshKeyPath });
console.log('GitHub repo URL received:', githubRepoUrl, 'Type:', typeof githubRepoUrl);
// Construct server URL from components
const serverUrl = `${serverProtocol}://${serverHost}:${serverPort}`;
let settings = await prisma.settings.findFirst();
if (settings) {
// Update existing settings
console.log('Updating existing settings with data:', {
serverUrl,
serverProtocol,
serverHost,
serverPort,
frontendUrl,
updateInterval: updateInterval || 60,
autoUpdate: autoUpdate || false,
githubRepoUrl: githubRepoUrl !== undefined ? githubRepoUrl : 'git@github.com:9technologygroup/patchmon.net.git',
repositoryType: repositoryType || 'public'
});
console.log('Final githubRepoUrl value being saved:', githubRepoUrl !== undefined ? githubRepoUrl : 'git@github.com:9technologygroup/patchmon.net.git');
const oldUpdateInterval = settings.updateInterval;
settings = await prisma.settings.update({
where: { id: settings.id },
data: {
serverUrl,
serverProtocol,
serverHost,
serverPort,
frontendUrl,
updateInterval: updateInterval || 60,
autoUpdate: autoUpdate || false,
githubRepoUrl: githubRepoUrl !== undefined ? githubRepoUrl : 'git@github.com:9technologygroup/patchmon.net.git',
repositoryType: repositoryType || 'public',
sshKeyPath: sshKeyPath || null
}
});
console.log('Settings updated successfully:', settings);
// If update interval changed, trigger crontab updates on all hosts with auto-update enabled
if (oldUpdateInterval !== (updateInterval || 60)) {
console.log(`Update interval changed from ${oldUpdateInterval} to ${updateInterval || 60} minutes. Triggering crontab updates...`);
await triggerCrontabUpdates();
}
} else {
// Create new settings
settings = await prisma.settings.create({
data: {
serverUrl,
serverProtocol,
serverHost,
serverPort,
frontendUrl,
updateInterval: updateInterval || 60,
autoUpdate: autoUpdate || false,
githubRepoUrl: githubRepoUrl !== undefined ? githubRepoUrl : 'git@github.com:9technologygroup/patchmon.net.git',
repositoryType: repositoryType || 'public',
sshKeyPath: sshKeyPath || null
}
});
}
res.json({
message: 'Settings updated successfully',
settings
});
} catch (error) {
console.error('Settings update error:', error);
res.status(500).json({ error: 'Failed to update settings' });
}
});
const {
serverProtocol,
serverHost,
serverPort,
updateInterval,
autoUpdate,
ignoreSslSelfSigned,
signupEnabled,
defaultUserRole,
githubRepoUrl,
repositoryType,
sshKeyPath,
logoDark,
logoLight,
favicon,
} = req.body;
// Get current settings to check for update interval changes
const currentSettings = await getSettings();
const oldUpdateInterval = currentSettings.update_interval;
// Build update object with only provided fields
const updateData = {};
if (serverProtocol !== undefined)
updateData.server_protocol = serverProtocol;
if (serverHost !== undefined) updateData.server_host = serverHost;
if (serverPort !== undefined) updateData.server_port = serverPort;
if (updateInterval !== undefined) {
updateData.update_interval = normalizeUpdateInterval(updateInterval);
}
if (autoUpdate !== undefined) updateData.auto_update = autoUpdate;
if (ignoreSslSelfSigned !== undefined)
updateData.ignore_ssl_self_signed = ignoreSslSelfSigned;
if (signupEnabled !== undefined)
updateData.signup_enabled = signupEnabled;
if (defaultUserRole !== undefined)
updateData.default_user_role = defaultUserRole;
if (githubRepoUrl !== undefined)
updateData.github_repo_url = githubRepoUrl;
if (repositoryType !== undefined)
updateData.repository_type = repositoryType;
if (sshKeyPath !== undefined) updateData.ssh_key_path = sshKeyPath;
if (logoDark !== undefined) updateData.logo_dark = logoDark;
if (logoLight !== undefined) updateData.logo_light = logoLight;
if (favicon !== undefined) updateData.favicon = favicon;
const updatedSettings = await updateSettings(
currentSettings.id,
updateData,
);
console.log("Settings updated successfully:", updatedSettings);
// If update interval changed, enqueue persistent jobs for agents
if (
updateInterval !== undefined &&
oldUpdateInterval !== updateData.update_interval
) {
console.log(
`Update interval changed from ${oldUpdateInterval} to ${updateData.update_interval} minutes. Enqueueing agent settings updates...`,
);
const hosts = await prisma.hosts.findMany({
where: { status: "active" },
select: { api_id: true },
});
const queue = queueManager.queues[QUEUE_NAMES.AGENT_COMMANDS];
const jobs = hosts.map((h) => ({
name: "settings_update",
data: {
api_id: h.api_id,
type: "settings_update",
update_interval: updateData.update_interval,
},
opts: { attempts: 10, backoff: { type: "exponential", delay: 5000 } },
}));
// Bulk add jobs
await queue.addBulk(jobs);
// Note: Queue-based delivery handles retries and ensures reliable delivery
// No need for immediate broadcast as it would cause duplicate messages
}
res.json({
message: "Settings updated successfully",
settings: updatedSettings,
});
} catch (error) {
console.error("Settings update error:", error);
res.status(500).json({ error: "Failed to update settings" });
}
},
);
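// The fan-out above leans on BullMQ's addBulk for reliable, retried delivery.
// A standalone sketch of the same pattern; the queue name and Redis connection
// details here are assumptions, not the project's configuration:
function sketchEnqueueSettingsUpdates(hosts, updateIntervalMinutes) {
const { Queue } = require("bullmq");
const agentCommands = new Queue("agent-commands", {
connection: { host: "127.0.0.1", port: 6379 },
});
return agentCommands.addBulk(
hosts.map((h) => ({
name: "settings_update",
data: {
api_id: h.api_id,
type: "settings_update",
update_interval: updateIntervalMinutes,
},
// Retry with exponential backoff, matching the route above
opts: { attempts: 10, backoff: { type: "exponential", delay: 5000 } },
})),
);
}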
// Get server URL for public use (used by installation scripts)
router.get('/server-url', async (req, res) => {
try {
const settings = await prisma.settings.findFirst();
if (!settings) {
return res.json({ serverUrl: 'http://localhost:3001' });
}
res.json({ serverUrl: settings.serverUrl });
} catch (error) {
console.error('Server URL fetch error:', error);
res.json({ serverUrl: 'http://localhost:3001' });
}
router.get("/server-url", async (_req, res) => {
try {
const settings = await getSettings();
const serverUrl = settings.server_url;
res.json({ server_url: serverUrl });
} catch (error) {
console.error("Server URL fetch error:", error);
res.status(500).json({ error: "Failed to fetch server URL" });
}
});
// Get update interval policy for agents (public endpoint)
router.get('/update-interval', async (req, res) => {
try {
const settings = await prisma.settings.findFirst();
if (!settings) {
return res.json({ updateInterval: 60 });
}
res.json({
updateInterval: settings.updateInterval,
cronExpression: `*/${settings.updateInterval} * * * *` // Generate cron expression
});
} catch (error) {
console.error('Update interval fetch error:', error);
res.json({ updateInterval: 60, cronExpression: '0 * * * *' });
}
// Get update interval policy for agents (requires API authentication)
router.get("/update-interval", async (req, res) => {
try {
// Verify API credentials
const apiId = req.headers["x-api-id"];
const apiKey = req.headers["x-api-key"];
if (!apiId || !apiKey) {
return res.status(401).json({ error: "API credentials required" });
}
// Validate API credentials
const host = await prisma.hosts.findUnique({
where: { api_id: apiId },
});
if (!host || host.api_key !== apiKey) {
return res.status(401).json({ error: "Invalid API credentials" });
}
const settings = await getSettings();
const interval = normalizeUpdateInterval(settings.update_interval || 60);
res.json({
updateInterval: interval,
cronExpression: buildCronExpression(interval),
});
} catch (error) {
console.error("Update interval fetch error:", error);
res.json({ updateInterval: 60, cronExpression: "0 * * * *" });
}
});
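// Agent-side sketch of calling this endpoint; the mount path and credentials
// are placeholders (global fetch assumed, i.e. Node 18+):
async function fetchUpdateIntervalSketch(baseUrl, apiId, apiKey) {
const res = await fetch(`${baseUrl}/api/v1/settings/update-interval`, {
headers: { "X-API-ID": apiId, "X-API-KEY": apiKey },
});
if (!res.ok) throw new Error(`HTTP ${res.status}`);
return res.json(); // { updateInterval, cronExpression }
}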
// Get auto-update policy for agents (public endpoint)
router.get('/auto-update', async (req, res) => {
try {
const settings = await prisma.settings.findFirst();
if (!settings) {
return res.json({ autoUpdate: false });
}
res.json({
autoUpdate: settings.autoUpdate || false
});
} catch (error) {
console.error('Auto-update fetch error:', error);
res.json({ autoUpdate: false });
}
router.get("/auto-update", async (_req, res) => {
try {
const settings = await getSettings();
res.json({
autoUpdate: settings.auto_update || false,
});
} catch (error) {
console.error("Auto-update fetch error:", error);
res.json({ autoUpdate: false });
}
});
// Upload logo files
router.post(
"/logos/upload",
authenticateToken,
requireManageSettings,
async (req, res) => {
try {
const { logoType, fileContent, fileName } = req.body;
if (!logoType || !fileContent) {
return res.status(400).json({
error: "Logo type and file content are required",
});
}
if (!["dark", "light", "favicon"].includes(logoType)) {
return res.status(400).json({
error: "Logo type must be 'dark', 'light', or 'favicon'",
});
}
// Validate file content (basic checks)
if (typeof fileContent !== "string") {
return res.status(400).json({
error: "File content must be a base64 string",
});
}
const fs = require("node:fs").promises;
const path = require("node:path");
const _crypto = require("node:crypto");
// Create assets directory if it doesn't exist
// In development: save to public/assets (served by Vite)
// In production: save to dist/assets (served by built app)
const isDevelopment = process.env.NODE_ENV !== "production";
const assetsDir = isDevelopment
? path.join(__dirname, "../../../frontend/public/assets")
: path.join(__dirname, "../../../frontend/dist/assets");
await fs.mkdir(assetsDir, { recursive: true });
// Determine file extension and path
let fileExtension;
let fileName_final;
if (logoType === "favicon") {
fileExtension = ".svg";
fileName_final = fileName || "logo_square.svg";
} else {
// Determine extension from file content or use default
if (fileContent.startsWith("data:image/png")) {
fileExtension = ".png";
} else if (fileContent.startsWith("data:image/svg")) {
fileExtension = ".svg";
} else if (
fileContent.startsWith("data:image/jpeg") ||
fileContent.startsWith("data:image/jpg")
) {
fileExtension = ".jpg";
} else {
fileExtension = ".png"; // Default to PNG
}
fileName_final = fileName || `logo_${logoType}${fileExtension}`;
}
const filePath = path.join(assetsDir, fileName_final);
// Handle base64 data URLs
let fileBuffer;
if (fileContent.startsWith("data:")) {
const base64Data = fileContent.split(",")[1];
fileBuffer = Buffer.from(base64Data, "base64");
} else {
// Assume it's already base64
fileBuffer = Buffer.from(fileContent, "base64");
}
// Create backup of existing file
try {
const backupPath = `${filePath}.backup.${Date.now()}`;
await fs.copyFile(filePath, backupPath);
console.log(`Created backup: ${backupPath}`);
} catch (error) {
// Ignore if original doesn't exist
if (error.code !== "ENOENT") {
console.warn("Failed to create backup:", error.message);
}
}
// Write new logo file
await fs.writeFile(filePath, fileBuffer);
// Update settings with new logo path
const settings = await getSettings();
const logoPath = `/assets/${fileName_final}`;
const updateData = {};
if (logoType === "dark") {
updateData.logo_dark = logoPath;
} else if (logoType === "light") {
updateData.logo_light = logoPath;
} else if (logoType === "favicon") {
updateData.favicon = logoPath;
}
await updateSettings(settings.id, updateData);
// Get file stats
const stats = await fs.stat(filePath);
res.json({
message: `${logoType} logo uploaded successfully`,
fileName: fileName_final,
path: logoPath,
size: stats.size,
sizeFormatted: `${(stats.size / 1024).toFixed(1)} KB`,
});
} catch (error) {
console.error("Upload logo error:", error);
res.status(500).json({ error: "Failed to upload logo" });
}
},
);
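// Client-side sketch of the payload this endpoint expects; the mount path and
// token are placeholders. The data: URL prefix is what lets the route sniff
// the file extension:
async function uploadLogoSketch(baseUrl, token, pngPath) {
const fs = require("node:fs");
const base64 = fs.readFileSync(pngPath).toString("base64");
const res = await fetch(`${baseUrl}/api/v1/settings/logos/upload`, {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${token}`,
},
body: JSON.stringify({
logoType: "dark",
fileName: "logo_dark.png",
fileContent: `data:image/png;base64,${base64}`,
}),
});
return res.json();
}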
// Reset logo to default
router.post(
"/logos/reset",
authenticateToken,
requireManageSettings,
async (req, res) => {
try {
const { logoType } = req.body;
if (!logoType) {
return res.status(400).json({
error: "Logo type is required",
});
}
if (!["dark", "light", "favicon"].includes(logoType)) {
return res.status(400).json({
error: "Logo type must be 'dark', 'light', or 'favicon'",
});
}
// Get current settings
const settings = await getSettings();
// Clear the custom logo path to revert to default
const updateData = {};
if (logoType === "dark") {
updateData.logo_dark = null;
} else if (logoType === "light") {
updateData.logo_light = null;
} else if (logoType === "favicon") {
updateData.favicon = null;
}
await updateSettings(settings.id, updateData);
res.json({
message: `${logoType} logo reset to default successfully`,
logoType,
});
} catch (error) {
console.error("Reset logo error:", error);
res.status(500).json({ error: "Failed to reset logo" });
}
},
);
module.exports = router;

View File

@@ -1,309 +1,340 @@
const express = require('express');
const { PrismaClient } = require('@prisma/client');
const speakeasy = require('speakeasy');
const QRCode = require('qrcode');
const { authenticateToken } = require('../middleware/auth');
const { body, validationResult } = require('express-validator');
const express = require("express");
const { getPrismaClient } = require("../config/prisma");
const speakeasy = require("speakeasy");
const QRCode = require("qrcode");
const { authenticateToken } = require("../middleware/auth");
const { body, validationResult } = require("express-validator");
const router = express.Router();
const prisma = new PrismaClient();
const prisma = getPrismaClient();
// Generate TFA secret and QR code
router.get('/setup', authenticateToken, async (req, res) => {
try {
const userId = req.user.id;
// Check if user already has TFA enabled
const user = await prisma.user.findUnique({
where: { id: userId },
select: { tfaEnabled: true, tfaSecret: true }
});
router.get("/setup", authenticateToken, async (req, res) => {
try {
const userId = req.user.id;
if (user.tfaEnabled) {
return res.status(400).json({
error: 'Two-factor authentication is already enabled for this account'
});
}
// Check if user already has TFA enabled
const user = await prisma.users.findUnique({
where: { id: userId },
select: { tfa_enabled: true, tfa_secret: true },
});
// Generate a new secret
const secret = speakeasy.generateSecret({
name: `PatchMon (${req.user.username})`,
issuer: 'PatchMon',
length: 32
});
if (user.tfa_enabled) {
return res.status(400).json({
error: "Two-factor authentication is already enabled for this account",
});
}
// Generate QR code
const qrCodeUrl = await QRCode.toDataURL(secret.otpauth_url);
// Generate a new secret
const secret = speakeasy.generateSecret({
name: `PatchMon (${req.user.username})`,
issuer: "PatchMon",
length: 32,
});
// Store the secret temporarily (not enabled yet)
await prisma.user.update({
where: { id: userId },
data: { tfaSecret: secret.base32 }
});
// Generate QR code
const qrCodeUrl = await QRCode.toDataURL(secret.otpauth_url);
res.json({
secret: secret.base32,
qrCode: qrCodeUrl,
manualEntryKey: secret.base32
});
} catch (error) {
console.error('TFA setup error:', error);
res.status(500).json({ error: 'Failed to setup two-factor authentication' });
}
// Store the secret temporarily (not enabled yet)
await prisma.users.update({
where: { id: userId },
data: { tfa_secret: secret.base32 },
});
res.json({
secret: secret.base32,
qrCode: qrCodeUrl,
manualEntryKey: secret.base32,
});
} catch (error) {
console.error("TFA setup error:", error);
res
.status(500)
.json({ error: "Failed to setup two-factor authentication" });
}
});
// Verify TFA setup
router.post('/verify-setup', authenticateToken, [
body('token').isLength({ min: 6, max: 6 }).withMessage('Token must be 6 digits'),
body('token').isNumeric().withMessage('Token must contain only numbers')
], async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
router.post(
"/verify-setup",
authenticateToken,
[
body("token")
.isLength({ min: 6, max: 6 })
.withMessage("Token must be 6 digits"),
body("token").isNumeric().withMessage("Token must contain only numbers"),
],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const { token } = req.body;
const userId = req.user.id;
const { token } = req.body;
const userId = req.user.id;
// Get user's TFA secret
const user = await prisma.user.findUnique({
where: { id: userId },
select: { tfaSecret: true, tfaEnabled: true }
});
// Get user's TFA secret
const user = await prisma.users.findUnique({
where: { id: userId },
select: { tfa_secret: true, tfa_enabled: true },
});
if (!user.tfaSecret) {
return res.status(400).json({
error: 'No TFA secret found. Please start the setup process first.'
});
}
if (!user.tfa_secret) {
return res.status(400).json({
error: "No TFA secret found. Please start the setup process first.",
});
}
if (user.tfaEnabled) {
return res.status(400).json({
error: 'Two-factor authentication is already enabled for this account'
});
}
if (user.tfa_enabled) {
return res.status(400).json({
error:
"Two-factor authentication is already enabled for this account",
});
}
// Verify the token
const verified = speakeasy.totp.verify({
secret: user.tfaSecret,
encoding: 'base32',
token: token,
window: 2 // Allow 2 time windows (60 seconds) for clock drift
});
// Verify the token
const verified = speakeasy.totp.verify({
secret: user.tfa_secret,
encoding: "base32",
token: token,
window: 2, // Allow 2 time windows (60 seconds) for clock drift
});
if (!verified) {
return res.status(400).json({
error: 'Invalid verification code. Please try again.'
});
}
if (!verified) {
return res.status(400).json({
error: "Invalid verification code. Please try again.",
});
}
// Generate backup codes
const backupCodes = Array.from({ length: 10 }, () =>
Math.random().toString(36).substring(2, 8).toUpperCase()
);
// Generate backup codes
const backupCodes = Array.from({ length: 10 }, () =>
Math.random().toString(36).substring(2, 8).toUpperCase(),
);
// Enable TFA and store backup codes
await prisma.user.update({
where: { id: userId },
data: {
tfaEnabled: true,
tfaBackupCodes: JSON.stringify(backupCodes)
}
});
// Enable TFA and store backup codes
await prisma.users.update({
where: { id: userId },
data: {
tfa_enabled: true,
tfa_backup_codes: JSON.stringify(backupCodes),
},
});
res.json({
message: 'Two-factor authentication has been enabled successfully',
backupCodes: backupCodes
});
} catch (error) {
console.error('TFA verification error:', error);
res.status(500).json({ error: 'Failed to verify two-factor authentication setup' });
}
});
res.json({
message: "Two-factor authentication has been enabled successfully",
backupCodes: backupCodes,
});
} catch (error) {
console.error("TFA verification error:", error);
res
.status(500)
.json({ error: "Failed to verify two-factor authentication setup" });
}
},
);
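// Sketch: round-trip a TOTP code with the speakeasy import above, mirroring
// the verify-setup logic (window: 2 tolerates roughly ±60s of clock drift):
function demoTotpRoundTrip() {
const secret = speakeasy.generateSecret({ length: 32 });
const token = speakeasy.totp({ secret: secret.base32, encoding: "base32" });
return speakeasy.totp.verify({
secret: secret.base32,
encoding: "base32",
token,
window: 2,
}); // -> true
}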
// Disable TFA
router.post('/disable', authenticateToken, [
body('password').notEmpty().withMessage('Password is required to disable TFA')
], async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
router.post(
"/disable",
authenticateToken,
[
body("password")
.notEmpty()
.withMessage("Password is required to disable TFA"),
],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const { password } = req.body;
const userId = req.user.id;
const { password: _password } = req.body;
const userId = req.user.id;
// Verify password
const user = await prisma.user.findUnique({
where: { id: userId },
select: { passwordHash: true, tfaEnabled: true }
});
// Verify password
const user = await prisma.users.findUnique({
where: { id: userId },
select: { password_hash: true, tfa_enabled: true },
});
if (!user.tfaEnabled) {
return res.status(400).json({
error: 'Two-factor authentication is not enabled for this account'
});
}
if (!user.tfa_enabled) {
return res.status(400).json({
error: "Two-factor authentication is not enabled for this account",
});
}
// Note: In a real implementation, you would verify the password hash here
// For now, we'll skip password verification for simplicity
// FIXME: In a real implementation, you would verify the password hash here
// For now, we'll skip password verification for simplicity
// Disable TFA
await prisma.user.update({
where: { id: userId },
data: {
tfaEnabled: false,
tfaSecret: null,
tfaBackupCodes: null
}
});
// Disable TFA
await prisma.users.update({
where: { id: userId },
data: {
tfa_enabled: false,
tfa_secret: null,
tfa_backup_codes: null,
},
});
res.json({
message: 'Two-factor authentication has been disabled successfully'
});
} catch (error) {
console.error('TFA disable error:', error);
res.status(500).json({ error: 'Failed to disable two-factor authentication' });
}
});
res.json({
message: "Two-factor authentication has been disabled successfully",
});
} catch (error) {
console.error("TFA disable error:", error);
res
.status(500)
.json({ error: "Failed to disable two-factor authentication" });
}
},
);
// Get TFA status
router.get('/status', authenticateToken, async (req, res) => {
try {
const userId = req.user.id;
const user = await prisma.user.findUnique({
where: { id: userId },
select: {
tfaEnabled: true,
tfaSecret: true,
tfaBackupCodes: true
}
});
router.get("/status", authenticateToken, async (req, res) => {
try {
const userId = req.user.id;
res.json({
enabled: user.tfaEnabled,
hasBackupCodes: !!user.tfaBackupCodes
});
} catch (error) {
console.error('TFA status error:', error);
res.status(500).json({ error: 'Failed to get TFA status' });
}
const user = await prisma.users.findUnique({
where: { id: userId },
select: {
tfa_enabled: true,
tfa_secret: true,
tfa_backup_codes: true,
},
});
res.json({
enabled: user.tfa_enabled,
hasBackupCodes: !!user.tfa_backup_codes,
});
} catch (error) {
console.error("TFA status error:", error);
res.status(500).json({ error: "Failed to get TFA status" });
}
});
// Regenerate backup codes
router.post('/regenerate-backup-codes', authenticateToken, async (req, res) => {
try {
const userId = req.user.id;
router.post("/regenerate-backup-codes", authenticateToken, async (req, res) => {
try {
const userId = req.user.id;
// Check if TFA is enabled
const user = await prisma.user.findUnique({
where: { id: userId },
select: { tfaEnabled: true }
});
// Check if TFA is enabled
const user = await prisma.users.findUnique({
where: { id: userId },
select: { tfa_enabled: true },
});
if (!user.tfaEnabled) {
return res.status(400).json({
error: 'Two-factor authentication is not enabled for this account'
});
}
if (!user.tfa_enabled) {
return res.status(400).json({
error: "Two-factor authentication is not enabled for this account",
});
}
// Generate new backup codes
const backupCodes = Array.from({ length: 10 }, () =>
Math.random().toString(36).substring(2, 8).toUpperCase()
);
// Generate new backup codes
const backupCodes = Array.from({ length: 10 }, () =>
Math.random().toString(36).substring(2, 8).toUpperCase(),
);
// Update backup codes
await prisma.user.update({
where: { id: userId },
data: {
tfaBackupCodes: JSON.stringify(backupCodes)
}
});
// Update backup codes
await prisma.users.update({
where: { id: userId },
data: {
tfa_backup_codes: JSON.stringify(backupCodes),
},
});
res.json({
message: 'Backup codes have been regenerated successfully',
backupCodes: backupCodes
});
} catch (error) {
console.error('TFA backup codes regeneration error:', error);
res.status(500).json({ error: 'Failed to regenerate backup codes' });
}
res.json({
message: "Backup codes have been regenerated successfully",
backupCodes: backupCodes,
});
} catch (error) {
console.error("TFA backup codes regeneration error:", error);
res.status(500).json({ error: "Failed to regenerate backup codes" });
}
});
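// The backup codes above come from Math.random(), which is not a CSPRNG.
// A drop-in alternative sketch using node:crypto (base64url needs Node 15.7+):
function generateBackupCodesSecure(count = 10, length = 6) {
const crypto = require("node:crypto");
return Array.from({ length: count }, () =>
crypto
.randomBytes(length)
.toString("base64url") // URL-safe alphabet, no padding
.slice(0, length)
.toUpperCase(),
);
}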
// Verify TFA token (for login)
router.post('/verify', [
body('username').notEmpty().withMessage('Username is required'),
body('token').isLength({ min: 6, max: 6 }).withMessage('Token must be 6 digits'),
body('token').isNumeric().withMessage('Token must contain only numbers')
], async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
router.post(
"/verify",
[
body("username").notEmpty().withMessage("Username is required"),
body("token")
.isLength({ min: 6, max: 6 })
.withMessage("Token must be 6 digits"),
body("token").isNumeric().withMessage("Token must contain only numbers"),
],
async (req, res) => {
try {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
const { username, token } = req.body;
const { username, token } = req.body;
// Get user's TFA secret
const user = await prisma.user.findUnique({
where: { username },
select: {
id: true,
tfaEnabled: true,
tfaSecret: true,
tfaBackupCodes: true
}
});
// Get user's TFA secret
const user = await prisma.users.findUnique({
where: { username },
select: {
id: true,
tfa_enabled: true,
tfa_secret: true,
tfa_backup_codes: true,
},
});
if (!user || !user.tfaEnabled || !user.tfaSecret) {
return res.status(400).json({
error: 'Two-factor authentication is not enabled for this account'
});
}
if (!user || !user.tfa_enabled || !user.tfa_secret) {
return res.status(400).json({
error: "Two-factor authentication is not enabled for this account",
});
}
// Check if it's a backup code
const backupCodes = user.tfaBackupCodes ? JSON.parse(user.tfaBackupCodes) : [];
const isBackupCode = backupCodes.includes(token);
// Check if it's a backup code
const backupCodes = user.tfa_backup_codes
? JSON.parse(user.tfa_backup_codes)
: [];
const isBackupCode = backupCodes.includes(token);
let verified = false;
let verified = false;
if (isBackupCode) {
// Remove the used backup code
const updatedBackupCodes = backupCodes.filter(code => code !== token);
await prisma.user.update({
where: { id: user.id },
data: {
tfaBackupCodes: JSON.stringify(updatedBackupCodes)
}
});
verified = true;
} else {
// Verify TOTP token
verified = speakeasy.totp.verify({
secret: user.tfaSecret,
encoding: 'base32',
token: token,
window: 2
});
}
if (isBackupCode) {
// Remove the used backup code
const updatedBackupCodes = backupCodes.filter((code) => code !== token);
await prisma.users.update({
where: { id: user.id },
data: {
tfa_backup_codes: JSON.stringify(updatedBackupCodes),
},
});
verified = true;
} else {
// Verify TOTP token
verified = speakeasy.totp.verify({
secret: user.tfa_secret,
encoding: "base32",
token: token,
window: 2,
});
}
if (!verified) {
return res.status(400).json({
error: 'Invalid verification code'
});
}
if (!verified) {
return res.status(400).json({
error: "Invalid verification code",
});
}
res.json({
message: 'Two-factor authentication verified successfully',
userId: user.id
});
} catch (error) {
console.error('TFA verification error:', error);
res.status(500).json({ error: 'Failed to verify two-factor authentication' });
}
});
res.json({
message: "Two-factor authentication verified successfully",
userId: user.id,
});
} catch (error) {
console.error("TFA verification error:", error);
res
.status(500)
.json({ error: "Failed to verify two-factor authentication" });
}
},
);
module.exports = router;

View File

@@ -1,195 +1,358 @@
const express = require('express');
const { authenticateToken } = require('../middleware/auth');
const { requireManageSettings } = require('../middleware/permissions');
const { PrismaClient } = require('@prisma/client');
const { exec } = require('child_process');
const { promisify } = require('util');
const express = require("express");
const { authenticateToken } = require("../middleware/auth");
const { requireManageSettings } = require("../middleware/permissions");
const { getPrismaClient } = require("../config/prisma");
const prisma = new PrismaClient();
const execAsync = promisify(exec);
const prisma = getPrismaClient();
// Default GitHub repository URL
const DEFAULT_GITHUB_REPO = "https://github.com/PatchMon/PatchMon.git";
const router = express.Router();
// Helper function to get current version from package.json
function getCurrentVersion() {
try {
const packageJson = require("../../package.json");
return packageJson?.version || "1.3.0";
} catch (packageError) {
console.warn(
"Could not read version from package.json, using fallback:",
packageError.message,
);
return "1.3.0";
}
}
// Helper function to parse GitHub repository URL
function parseGitHubRepo(repoUrl) {
let owner, repo;
if (repoUrl.includes("git@github.com:")) {
const match = repoUrl.match(/git@github\.com:([^/]+)\/([^/]+)\.git/);
if (match) {
[, owner, repo] = match;
}
} else if (repoUrl.includes("github.com/")) {
const match = repoUrl.match(/github\.com\/([^/]+)\/([^/]+?)(?:\.git)?$/);
if (match) {
[, owner, repo] = match;
}
}
return { owner, repo };
}
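// Expected behaviour of parseGitHubRepo() for the two supported remote styles:
//   parseGitHubRepo("git@github.com:PatchMon/PatchMon.git")
//     -> { owner: "PatchMon", repo: "PatchMon" }
//   parseGitHubRepo("https://github.com/PatchMon/PatchMon.git")
//     -> { owner: "PatchMon", repo: "PatchMon" }   (trailing .git is optional)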
// Helper function to get latest release from GitHub API
async function getLatestRelease(owner, repo) {
try {
const currentVersion = getCurrentVersion();
const apiUrl = `https://api.github.com/repos/${owner}/${repo}/releases/latest`;
const response = await fetch(apiUrl, {
method: "GET",
headers: {
Accept: "application/vnd.github.v3+json",
"User-Agent": `PatchMon-Server/${currentVersion}`,
},
});
if (!response.ok) {
const errorText = await response.text();
if (
errorText.includes("rate limit") ||
errorText.includes("API rate limit")
) {
throw new Error("GitHub API rate limit exceeded");
}
throw new Error(
`GitHub API error: ${response.status} ${response.statusText}`,
);
}
const releaseData = await response.json();
return {
tagName: releaseData.tag_name,
version: releaseData.tag_name.replace("v", ""),
publishedAt: releaseData.published_at,
htmlUrl: releaseData.html_url,
};
} catch (error) {
console.error("Error fetching latest release:", error.message);
throw error; // Re-throw to be caught by the calling function
}
}
// Helper function to get latest commit from main branch
async function getLatestCommit(owner, repo) {
try {
const currentVersion = getCurrentVersion();
const apiUrl = `https://api.github.com/repos/${owner}/${repo}/commits/main`;
const response = await fetch(apiUrl, {
method: "GET",
headers: {
Accept: "application/vnd.github.v3+json",
"User-Agent": `PatchMon-Server/${currentVersion}`,
},
});
if (!response.ok) {
const errorText = await response.text();
if (
errorText.includes("rate limit") ||
errorText.includes("API rate limit")
) {
throw new Error("GitHub API rate limit exceeded");
}
throw new Error(
`GitHub API error: ${response.status} ${response.statusText}`,
);
}
const commitData = await response.json();
return {
sha: commitData.sha,
message: commitData.commit.message,
author: commitData.commit.author.name,
date: commitData.commit.author.date,
htmlUrl: commitData.html_url,
};
} catch (error) {
console.error("Error fetching latest commit:", error.message);
throw error; // Re-throw to be caught by the calling function
}
}
// Helper function to get commit count difference
async function getCommitDifference(owner, repo, currentVersion) {
// Try both with and without 'v' prefix for compatibility
const versionTags = [
currentVersion, // Try without 'v' first (new format)
`v${currentVersion}`, // Try with 'v' prefix (old format)
];
for (const versionTag of versionTags) {
try {
// Compare main branch with the released version tag
const apiUrl = `https://api.github.com/repos/${owner}/${repo}/compare/${versionTag}...main`;
const response = await fetch(apiUrl, {
method: "GET",
headers: {
Accept: "application/vnd.github.v3+json",
"User-Agent": `PatchMon-Server/${getCurrentVersion()}`,
},
});
if (!response.ok) {
const errorText = await response.text();
if (
errorText.includes("rate limit") ||
errorText.includes("API rate limit")
) {
throw new Error("GitHub API rate limit exceeded");
}
// If 404, try next tag format
if (response.status === 404) {
continue;
}
throw new Error(
`GitHub API error: ${response.status} ${response.statusText}`,
);
}
const compareData = await response.json();
return {
commitsBehind: compareData.behind_by || 0, // How many commits main is behind release
commitsAhead: compareData.ahead_by || 0, // How many commits main is ahead of release
totalCommits: compareData.total_commits || 0,
branchInfo: "main branch vs release",
};
} catch (error) {
// If rate limit, throw immediately
if (error.message.includes("rate limit")) {
throw error;
}
}
}
// If all attempts failed, throw error
throw new Error(
`Could not find tag '${currentVersion}' or 'v${currentVersion}' in repository`,
);
}
// Helper function to compare version strings (semantic versioning)
function compareVersions(version1, version2) {
const v1parts = version1.split(".").map(Number);
const v2parts = version2.split(".").map(Number);
const maxLength = Math.max(v1parts.length, v2parts.length);
for (let i = 0; i < maxLength; i++) {
const v1part = v1parts[i] || 0;
const v2part = v2parts[i] || 0;
if (v1part > v2part) return 1;
if (v1part < v2part) return -1;
}
return 0;
}
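// Expected behaviour of compareVersions():
//   compareVersions("1.3.0", "1.2.8") -> 1    (first argument is newer)
//   compareVersions("1.2.8", "1.3.0") -> -1
//   compareVersions("1.3", "1.3.0")   -> 0    (missing parts count as 0)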
// Get current version info
router.get('/current', authenticateToken, async (req, res) => {
try {
// For now, return hardcoded version - this should match your agent version
const currentVersion = '1.2.5';
res.json({
version: currentVersion,
buildDate: new Date().toISOString(),
environment: process.env.NODE_ENV || 'development'
});
} catch (error) {
console.error('Error getting current version:', error);
res.status(500).json({ error: 'Failed to get current version' });
}
router.get("/current", authenticateToken, async (_req, res) => {
try {
const currentVersion = getCurrentVersion();
// Get settings with cached update info (no GitHub API calls)
const settings = await prisma.settings.findFirst();
const githubRepoUrl = settings?.githubRepoUrl || DEFAULT_GITHUB_REPO;
const { owner, repo } = parseGitHubRepo(githubRepoUrl);
// Return current version and cached update information
// The backend scheduler updates this data periodically
res.json({
version: currentVersion,
latest_version: settings?.latest_version || null,
is_update_available: settings?.is_update_available || false,
last_update_check: settings?.last_update_check || null,
buildDate: new Date().toISOString(),
environment: process.env.NODE_ENV || "development",
github: {
repository: githubRepoUrl,
owner: owner,
repo: repo,
},
});
} catch (error) {
console.error("Error getting current version:", error);
res.status(500).json({ error: "Failed to get current version" });
}
});
// Test SSH key permissions and GitHub access
router.post('/test-ssh-key', authenticateToken, requireManageSettings, async (req, res) => {
try {
const { sshKeyPath, githubRepoUrl } = req.body;
if (!sshKeyPath || !githubRepoUrl) {
return res.status(400).json({
error: 'SSH key path and GitHub repo URL are required'
});
}
// Parse repository info
let owner, repo;
if (githubRepoUrl.includes('git@github.com:')) {
const match = githubRepoUrl.match(/git@github\.com:([^\/]+)\/([^\/]+)\.git/);
if (match) {
[, owner, repo] = match;
}
} else if (githubRepoUrl.includes('github.com/')) {
const match = githubRepoUrl.match(/github\.com\/([^\/]+)\/([^\/]+)/);
if (match) {
[, owner, repo] = match;
}
}
if (!owner || !repo) {
return res.status(400).json({
error: 'Invalid GitHub repository URL format'
});
}
// Check if SSH key file exists and is readable
try {
require('fs').accessSync(sshKeyPath);
} catch (e) {
return res.status(400).json({
error: 'SSH key file not found or not accessible',
details: `Cannot access: ${sshKeyPath}`,
suggestion: 'Check the file path and ensure the application has read permissions'
});
}
// Test SSH connection to GitHub
const sshRepoUrl = `git@github.com:${owner}/${repo}.git`;
const env = {
...process.env,
GIT_SSH_COMMAND: `ssh -i ${sshKeyPath} -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o ConnectTimeout=10`
};
try {
// Test with a simple git command
const { stdout } = await execAsync(
`git ls-remote --heads ${sshRepoUrl} | head -n 1`,
{
timeout: 15000,
env: env
}
);
if (stdout.trim()) {
return res.json({
success: true,
message: 'SSH key is working correctly',
details: {
sshKeyPath,
repository: `${owner}/${repo}`,
testResult: 'Successfully connected to GitHub'
}
});
} else {
return res.status(400).json({
error: 'SSH connection succeeded but no data returned',
suggestion: 'Check repository access permissions'
});
}
} catch (sshError) {
console.error('SSH test error:', sshError.message);
if (sshError.message.includes('Permission denied')) {
return res.status(403).json({
error: 'SSH key permission denied',
details: 'The SSH key exists but GitHub rejected the connection',
suggestion: 'Verify the SSH key is added to the repository as a deploy key with read access'
});
} else if (sshError.message.includes('Host key verification failed')) {
return res.status(403).json({
error: 'Host key verification failed',
suggestion: 'This is normal for first-time connections. The key will be added to known_hosts automatically.'
});
} else if (sshError.message.includes('Connection timed out')) {
return res.status(408).json({
error: 'Connection timed out',
suggestion: 'Check your internet connection and GitHub status'
});
} else {
return res.status(500).json({
error: 'SSH connection failed',
details: sshError.message,
suggestion: 'Check the SSH key format and repository URL'
});
}
}
} catch (error) {
console.error('SSH key test error:', error);
res.status(500).json({
error: 'Failed to test SSH key',
details: error.message
});
}
});
router.post(
"/test-ssh-key",
authenticateToken,
requireManageSettings,
async (_req, res) => {
res.status(410).json({
error:
"SSH key testing has been removed. Using default public repository.",
});
},
);
// Check for updates from GitHub
router.get('/check-updates', authenticateToken, requireManageSettings, async (req, res) => {
try {
// Get cached update information from settings
const settings = await prisma.settings.findFirst();
if (!settings) {
return res.status(400).json({ error: 'Settings not found' });
}
router.get(
"/check-updates",
authenticateToken,
requireManageSettings,
async (_req, res) => {
try {
// Get cached update information from settings
const settings = await prisma.settings.findFirst();
const currentVersion = '1.2.5';
const latestVersion = settings.latestVersion || currentVersion;
const isUpdateAvailable = settings.updateAvailable || false;
const lastUpdateCheck = settings.lastUpdateCheck;
if (!settings) {
return res.status(400).json({ error: "Settings not found" });
}
res.json({
currentVersion,
latestVersion,
isUpdateAvailable,
lastUpdateCheck,
repositoryType: settings.repositoryType || 'public',
latestRelease: {
tagName: latestVersion ? `v${latestVersion}` : null,
version: latestVersion,
repository: settings.githubRepoUrl ? settings.githubRepoUrl.split('/').slice(-2).join('/') : null,
accessMethod: settings.repositoryType === 'private' ? 'ssh' : 'api'
}
});
const currentVersion = getCurrentVersion();
const githubRepoUrl = settings.githubRepoUrl || DEFAULT_GITHUB_REPO;
const { owner, repo } = parseGitHubRepo(githubRepoUrl);
} catch (error) {
console.error('Error getting update information:', error);
res.status(500).json({ error: 'Failed to get update information' });
}
});
let latestRelease = null;
let latestCommit = null;
let commitDifference = null;
// Simple version comparison function
function compareVersions(version1, version2) {
const v1Parts = version1.split('.').map(Number);
const v2Parts = version2.split('.').map(Number);
const maxLength = Math.max(v1Parts.length, v2Parts.length);
for (let i = 0; i < maxLength; i++) {
const v1Part = v1Parts[i] || 0;
const v2Part = v2Parts[i] || 0;
if (v1Part > v2Part) return 1;
if (v1Part < v2Part) return -1;
}
return 0;
}
// Fetch fresh GitHub data if we have valid owner/repo
if (owner && repo) {
try {
const [releaseData, commitData, differenceData] = await Promise.all([
getLatestRelease(owner, repo),
getLatestCommit(owner, repo),
getCommitDifference(owner, repo, currentVersion),
]);
latestRelease = releaseData;
latestCommit = commitData;
commitDifference = differenceData;
} catch (githubError) {
console.warn(
"Failed to fetch fresh GitHub data:",
githubError.message,
);
// Provide fallback data when GitHub API is rate-limited
if (
githubError.message.includes("rate limit") ||
githubError.message.includes("API rate limit")
) {
console.log("GitHub API rate limited, providing fallback data");
latestRelease = {
tagName: "v1.2.8",
version: "1.2.8",
publishedAt: "2025-10-02T17:12:53Z",
htmlUrl:
"https://github.com/PatchMon/PatchMon/releases/tag/v1.2.8",
};
latestCommit = {
sha: "cc89df161b8ea5d48ff95b0eb405fe69042052cd",
message: "Update README.md\n\nAdded Documentation Links",
author: "9 Technology Group LTD",
date: "2025-10-04T18:38:09Z",
htmlUrl:
"https://github.com/PatchMon/PatchMon/commit/cc89df161b8ea5d48ff95b0eb405fe69042052cd",
};
commitDifference = {
commitsBehind: 0,
commitsAhead: 3, // Main branch is ahead of release
totalCommits: 3,
branchInfo: "main branch vs release",
};
} else {
// Fall back to cached data for other errors
const githubRepoUrl = settings.githubRepoUrl || DEFAULT_GITHUB_REPO;
latestRelease = settings.latest_version
? {
version: settings.latest_version,
tagName: `v${settings.latest_version}`,
publishedAt: null, // Only use date from GitHub API, not cached data
htmlUrl: `${githubRepoUrl.replace(/\.git$/, "")}/releases/tag/v${settings.latest_version}`,
}
: null;
}
}
}
const latestVersion =
latestRelease?.version || settings.latest_version || currentVersion;
const isUpdateAvailable = latestRelease
? compareVersions(latestVersion, currentVersion) > 0
: settings.update_available || false;
res.json({
currentVersion,
latestVersion,
isUpdateAvailable,
lastUpdateCheck: settings.last_update_check || null,
repositoryType: settings.repository_type || "public",
github: {
repository: githubRepoUrl,
owner: owner,
repo: repo,
latestRelease: latestRelease,
latestCommit: latestCommit,
commitDifference: commitDifference,
},
});
} catch (error) {
console.error("Error getting update information:", error);
res.status(500).json({ error: "Failed to get update information" });
}
},
);
module.exports = router;

View File

@@ -0,0 +1,139 @@
const express = require("express");
const { authenticateToken } = require("../middleware/auth");
const {
getConnectionInfo,
subscribeToConnectionChanges,
} = require("../services/agentWs");
const {
validate_session,
update_session_activity,
} = require("../utils/session_manager");
const router = express.Router();
// Get WebSocket connection status by api_id (no database access - pure memory lookup)
router.get("/status/:apiId", authenticateToken, async (req, res) => {
try {
const { apiId } = req.params;
// Direct in-memory check - no database query needed
const connectionInfo = getConnectionInfo(apiId);
// Minimal response for maximum speed
res.json({
success: true,
data: connectionInfo,
});
} catch (error) {
console.error("Error fetching WebSocket status:", error);
res.status(500).json({
success: false,
error: "Failed to fetch WebSocket status",
});
}
});
// Server-Sent Events endpoint for real-time status updates (no polling needed!)
router.get("/status/:apiId/stream", async (req, res) => {
try {
const { apiId } = req.params;
// Manual authentication for SSE (EventSource doesn't support custom headers)
const token =
req.query.token || req.headers.authorization?.replace("Bearer ", "");
if (!token) {
return res.status(401).json({ error: "Authentication required" });
}
// Verify token manually with session validation
const jwt = require("jsonwebtoken");
try {
const decoded = jwt.verify(token, process.env.JWT_SECRET);
// Validate session (same as regular auth middleware)
const validation = await validate_session(decoded.sessionId, token);
if (!validation.valid) {
return res.status(401).json({ error: "Invalid or expired session" });
}
// Update session activity to prevent inactivity timeout
await update_session_activity(decoded.sessionId);
req.user = validation.user;
} catch (_err) {
return res.status(401).json({ error: "Invalid or expired token" });
}
console.log("[SSE] Client connected for api_id:", apiId);
// Set headers for SSE
res.setHeader("Content-Type", "text/event-stream");
res.setHeader("Cache-Control", "no-cache");
res.setHeader("Connection", "keep-alive");
res.setHeader("X-Accel-Buffering", "no"); // Disable nginx buffering
// Send initial status immediately
const initialInfo = getConnectionInfo(apiId);
res.write(`data: ${JSON.stringify(initialInfo)}\n\n`);
res.flushHeaders(); // Ensure headers are sent immediately
// Subscribe to connection changes for this specific api_id
const unsubscribe = subscribeToConnectionChanges(apiId, (_connected) => {
try {
// Push update to client instantly when status changes
const connectionInfo = getConnectionInfo(apiId);
console.log(
`[SSE] Pushing status change for ${apiId}: connected=${connectionInfo.connected} secure=${connectionInfo.secure}`,
);
res.write(`data: ${JSON.stringify(connectionInfo)}\n\n`);
} catch (err) {
console.error("[SSE] Error writing to stream:", err);
}
});
// Heartbeat to keep connection alive (every 30 seconds)
const heartbeat = setInterval(() => {
try {
res.write(": heartbeat\n\n");
} catch (err) {
console.error("[SSE] Error writing heartbeat:", err);
clearInterval(heartbeat);
}
}, 30000);
// Cleanup on client disconnect
req.on("close", () => {
console.log("[SSE] Client disconnected for api_id:", apiId);
clearInterval(heartbeat);
unsubscribe();
});
// Handle errors - distinguish between different error types
req.on("error", (err) => {
// Only log non-connection-reset errors to reduce noise
if (err.code !== "ECONNRESET" && err.code !== "EPIPE") {
console.error("[SSE] Request error:", err);
} else {
console.log("[SSE] Client connection reset for api_id:", apiId);
}
clearInterval(heartbeat);
unsubscribe();
});
// Handle response errors
res.on("error", (err) => {
if (err.code !== "ECONNRESET" && err.code !== "EPIPE") {
console.error("[SSE] Response error:", err);
}
clearInterval(heartbeat);
unsubscribe();
});
} catch (error) {
console.error("[SSE] Unexpected error:", error);
if (!res.headersSent) {
res.status(500).json({ error: "Internal server error" });
}
}
});
module.exports = router;
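// Browser-side sketch of consuming the SSE stream above; the mount path
// ("/api/v1/ws") and token storage are assumptions. EventSource cannot set
// custom headers, which is why the route accepts ?token= instead:
function subscribeToAgentStatus(apiId, token) {
const source = new EventSource(
`/api/v1/ws/status/${apiId}/stream?token=${encodeURIComponent(token)}`,
);
source.onmessage = (event) => {
const info = JSON.parse(event.data); // e.g. { connected, secure, ... }
console.log("agent connected:", info.connected);
};
source.onerror = () => {
// EventSource auto-reconnects; close explicitly when leaving the page
source.close();
};
return source;
}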

File diff suppressed because it is too large

View File

@@ -0,0 +1,743 @@
const axios = require("axios");
const fs = require("node:fs").promises;
const path = require("node:path");
const { exec, spawn } = require("node:child_process");
const { promisify } = require("node:util");
const _execAsync = promisify(exec);
// Simple semver comparison function
function compareVersions(version1, version2) {
const v1parts = version1.split(".").map(Number);
const v2parts = version2.split(".").map(Number);
// Ensure both arrays have the same length
while (v1parts.length < 3) v1parts.push(0);
while (v2parts.length < 3) v2parts.push(0);
for (let i = 0; i < 3; i++) {
if (v1parts[i] > v2parts[i]) return 1;
if (v1parts[i] < v2parts[i]) return -1;
}
return 0;
}
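// The download methods in the service below wire up stream 'finish'/'error'
// handlers by hand. An equivalent illustrative helper using stream/promises,
// which also destroys both streams if either side fails:
async function downloadToFileSketch(url, destPath) {
const { pipeline } = require("node:stream/promises");
const { createWriteStream } = require("node:fs");
const response = await axios.get(url, {
responseType: "stream",
timeout: 60000,
});
await pipeline(response.data, createWriteStream(destPath));
}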
const crypto = require("node:crypto");
class AgentVersionService {
constructor() {
this.githubApiUrl =
"https://api.github.com/repos/PatchMon/PatchMon-agent/releases";
this.agentsDir = path.resolve(__dirname, "../../../agents");
this.supportedArchitectures = [
"linux-amd64",
"linux-arm64",
"linux-386",
"linux-arm",
];
this.currentVersion = null;
this.latestVersion = null;
this.lastChecked = null;
this.checkInterval = 30 * 60 * 1000; // 30 minutes
}
async initialize() {
try {
// Ensure agents directory exists
await fs.mkdir(this.agentsDir, { recursive: true });
console.log("🔍 Testing GitHub API connectivity...");
try {
const testResponse = await axios.get(
"https://api.github.com/repos/PatchMon/PatchMon-agent/releases",
{
timeout: 5000,
headers: {
"User-Agent": "PatchMon-Server/1.0",
Accept: "application/vnd.github.v3+json",
},
},
);
console.log(
`✅ GitHub API accessible - found ${testResponse.data.length} releases`,
);
} catch (testError) {
console.error("❌ GitHub API not accessible:", testError.message);
if (testError.response) {
console.error(
"❌ Status:",
testError.response.status,
testError.response.statusText,
);
if (testError.response.status === 403) {
console.log("⚠️ GitHub API rate limit exceeded - will retry later");
}
}
}
// Get current agent version by executing the binary
await this.getCurrentAgentVersion();
// Try to check for updates, but don't fail initialization if GitHub API is unavailable
try {
await this.checkForUpdates();
} catch (updateError) {
console.log(
"⚠️ Failed to check for updates on startup, will retry later:",
updateError.message,
);
}
// Set up periodic checking
setInterval(() => {
this.checkForUpdates().catch((error) => {
console.log("⚠️ Periodic update check failed:", error.message);
});
}, this.checkInterval);
console.log("✅ Agent Version Service initialized");
} catch (error) {
console.error(
"❌ Failed to initialize Agent Version Service:",
error.message,
);
}
}
async getCurrentAgentVersion() {
try {
console.log("🔍 Getting current agent version...");
// Try to find the agent binary in agents/ folder only (what gets distributed)
const possiblePaths = [
path.join(this.agentsDir, "patchmon-agent-linux-amd64"),
path.join(this.agentsDir, "patchmon-agent"),
];
let agentPath = null;
for (const testPath of possiblePaths) {
try {
await fs.access(testPath);
agentPath = testPath;
console.log(`✅ Found agent binary at: ${testPath}`);
break;
} catch {
// Path doesn't exist, continue to next
}
}
if (!agentPath) {
console.log(
"⚠️ No agent binary found in agents/ folder, current version will be unknown",
);
console.log("💡 Use the Download Updates button to get agent binaries");
this.currentVersion = null;
return;
}
// Execute the agent binary with help flag to get version info
try {
const child = spawn(agentPath, ["--help"], {
timeout: 10000,
});
let stdout = "";
let stderr = "";
child.stdout.on("data", (data) => {
stdout += data.toString();
});
child.stderr.on("data", (data) => {
stderr += data.toString();
});
const result = await new Promise((resolve, reject) => {
child.on("close", (code) => {
resolve({ stdout, stderr, code });
});
child.on("error", reject);
});
if (result.stderr) {
console.log("⚠️ Agent help stderr:", result.stderr);
}
// Parse version from help output (e.g., "PatchMon Agent v1.3.0")
const versionMatch = result.stdout.match(
/PatchMon Agent v([0-9]+\.[0-9]+\.[0-9]+)/i,
);
if (versionMatch) {
this.currentVersion = versionMatch[1];
console.log(`✅ Current agent version: ${this.currentVersion}`);
} else {
console.log(
"⚠️ Could not parse version from agent help output:",
result.stdout,
);
this.currentVersion = null;
}
} catch (execError) {
console.error("❌ Failed to execute agent binary:", execError.message);
this.currentVersion = null;
}
} catch (error) {
console.error("❌ Failed to get current agent version:", error.message);
this.currentVersion = null;
}
}
async checkForUpdates() {
try {
console.log("🔍 Checking for agent updates...");
const response = await axios.get(this.githubApiUrl, {
timeout: 10000,
headers: {
"User-Agent": "PatchMon-Server/1.0",
Accept: "application/vnd.github.v3+json",
},
});
console.log(`📡 GitHub API response status: ${response.status}`);
console.log(`📦 Found ${response.data.length} releases`);
const releases = response.data;
if (releases.length === 0) {
console.log(" No releases found");
this.latestVersion = null;
this.lastChecked = new Date();
return {
latestVersion: null,
currentVersion: this.currentVersion,
hasUpdate: false,
lastChecked: this.lastChecked,
};
}
const latestRelease = releases[0];
this.latestVersion = latestRelease.tag_name.replace("v", ""); // Remove 'v' prefix
this.lastChecked = new Date();
console.log(`📦 Latest agent version: ${this.latestVersion}`);
// Don't download binaries automatically - only when explicitly requested
console.log(
" Skipping automatic binary download - binaries will be downloaded on demand",
);
return {
latestVersion: this.latestVersion,
currentVersion: this.currentVersion,
hasUpdate: this.currentVersion !== this.latestVersion,
lastChecked: this.lastChecked,
};
} catch (error) {
console.error("❌ Failed to check for updates:", error.message);
if (error.response) {
console.error(
"❌ GitHub API error:",
error.response.status,
error.response.statusText,
);
console.error(
"❌ Rate limit info:",
error.response.headers["x-ratelimit-remaining"],
"/",
error.response.headers["x-ratelimit-limit"],
);
}
throw error;
}
}
async downloadBinariesToAgentsFolder(release) {
try {
console.log(
`⬇️ Downloading binaries for version ${release.tag_name} to agents folder...`,
);
for (const arch of this.supportedArchitectures) {
const assetName = `patchmon-agent-${arch}`;
const asset = release.assets.find((a) => a.name === assetName);
if (!asset) {
console.warn(`⚠️ Binary not found for architecture: ${arch}`);
continue;
}
const binaryPath = path.join(this.agentsDir, assetName);
console.log(`⬇️ Downloading ${assetName}...`);
const response = await axios.get(asset.browser_download_url, {
responseType: "stream",
timeout: 60000,
});
const writer = require("node:fs").createWriteStream(binaryPath);
response.data.pipe(writer);
await new Promise((resolve, reject) => {
writer.on("finish", resolve);
writer.on("error", reject);
});
// Make executable
await fs.chmod(binaryPath, "755");
console.log(`✅ Downloaded: ${assetName} to agents folder`);
}
} catch (error) {
console.error(
"❌ Failed to download binaries to agents folder:",
error.message,
);
throw error;
}
}
async downloadBinaryForVersion(version, architecture) {
try {
console.log(
`⬇️ Downloading binary for version ${version} architecture ${architecture}...`,
);
// Get the release info from GitHub
const response = await axios.get(this.githubApiUrl, {
timeout: 10000,
headers: {
"User-Agent": "PatchMon-Server/1.0",
Accept: "application/vnd.github.v3+json",
},
});
const releases = response.data;
const release = releases.find(
(r) => r.tag_name.replace("v", "") === version,
);
if (!release) {
throw new Error(`Release ${version} not found`);
}
const assetName = `patchmon-agent-${architecture}`;
const asset = release.assets.find((a) => a.name === assetName);
if (!asset) {
throw new Error(`Binary not found for architecture: ${architecture}`);
}
// Save under the plain asset name in agentsDir so getBinaryPath() can
// locate it afterwards (this.agentBinariesDir is never defined on this class)
const binaryPath = path.join(this.agentsDir, assetName);
console.log(`⬇️ Downloading ${assetName}...`);
const downloadResponse = await axios.get(asset.browser_download_url, {
responseType: "stream",
timeout: 60000,
});
const writer = require("node:fs").createWriteStream(binaryPath);
downloadResponse.data.pipe(writer);
await new Promise((resolve, reject) => {
writer.on("finish", resolve);
writer.on("error", reject);
});
// Make executable
await fs.chmod(binaryPath, "755");
console.log(`✅ Downloaded: ${assetName}`);
return binaryPath;
} catch (error) {
console.error(
`❌ Failed to download binary ${version}-${architecture}:`,
error.message,
);
throw error;
}
}
async getBinaryPath(version, architecture) {
const binaryName = `patchmon-agent-${architecture}`;
const binaryPath = path.join(this.agentsDir, binaryName);
try {
await fs.access(binaryPath);
return binaryPath;
} catch {
throw new Error(`Binary not found: ${binaryName} version ${version}`);
}
}
async serveBinary(version, architecture, res) {
try {
// Check if binary exists, if not download it
const binaryPath = await this.getBinaryPath(version, architecture);
const stats = await fs.stat(binaryPath);
res.setHeader("Content-Type", "application/octet-stream");
res.setHeader(
"Content-Disposition",
`attachment; filename="patchmon-agent-${architecture}"`,
);
res.setHeader("Content-Length", stats.size);
// Add cache headers
res.setHeader("Cache-Control", "public, max-age=3600");
res.setHeader("ETag", `"${version}-${architecture}"`);
const stream = require("node:fs").createReadStream(binaryPath);
stream.pipe(res);
} catch (_error) {
// Binary doesn't exist, try to download it
console.log(
`⬇️ Binary not found locally, attempting to download ${version}-${architecture}...`,
);
try {
await this.downloadBinaryForVersion(version, architecture);
// Retry serving the binary
const binaryPath = await this.getBinaryPath(version, architecture);
const stats = await fs.stat(binaryPath);
res.setHeader("Content-Type", "application/octet-stream");
res.setHeader(
"Content-Disposition",
`attachment; filename="patchmon-agent-${architecture}"`,
);
res.setHeader("Content-Length", stats.size);
res.setHeader("Cache-Control", "public, max-age=3600");
res.setHeader("ETag", `"${version}-${architecture}"`);
const stream = require("node:fs").createReadStream(binaryPath);
stream.pipe(res);
} catch (downloadError) {
console.error(
`❌ Failed to download binary ${version}-${architecture}:`,
downloadError.message,
);
res
.status(404)
.json({ error: "Binary not found and could not be downloaded" });
}
}
}
async getVersionInfo() {
let hasUpdate = false;
let updateStatus = "unknown";
let effectiveLatestVersion = this.currentVersion; // Always use local version if available
// If we have a local version, use it as the latest regardless of GitHub
if (this.currentVersion) {
effectiveLatestVersion = this.currentVersion;
console.log(
`🔄 Using local agent version ${this.currentVersion} as latest`,
);
} else if (this.latestVersion) {
// Fallback to GitHub version only if no local version
effectiveLatestVersion = this.latestVersion;
console.log(
`🔄 No local version found, using GitHub version ${this.latestVersion}`,
);
}
if (this.currentVersion && effectiveLatestVersion) {
const comparison = compareVersions(
this.currentVersion,
effectiveLatestVersion,
);
if (comparison < 0) {
hasUpdate = true;
updateStatus = "update-available";
} else if (comparison > 0) {
hasUpdate = false;
updateStatus = "newer-version";
} else {
hasUpdate = false;
updateStatus = "up-to-date";
}
} else if (effectiveLatestVersion && !this.currentVersion) {
hasUpdate = true;
updateStatus = "no-agent";
} else if (this.currentVersion && !effectiveLatestVersion) {
// We have a current version but no latest version (GitHub API unavailable)
hasUpdate = false;
updateStatus = "github-unavailable";
} else if (!this.currentVersion && !effectiveLatestVersion) {
updateStatus = "no-data";
}
return {
currentVersion: this.currentVersion,
latestVersion: effectiveLatestVersion,
hasUpdate: hasUpdate,
updateStatus: updateStatus,
lastChecked: this.lastChecked,
supportedArchitectures: this.supportedArchitectures,
status: effectiveLatestVersion ? "ready" : "no-releases",
};
}
async refreshCurrentVersion() {
await this.getCurrentAgentVersion();
return this.currentVersion;
}
async downloadLatestUpdate() {
try {
console.log("⬇️ Downloading latest agent update...");
// First check for updates to get the latest release info
const _updateInfo = await this.checkForUpdates();
if (!this.latestVersion) {
throw new Error("No latest version available to download");
}
// Get the release info from GitHub
const response = await axios.get(this.githubApiUrl, {
timeout: 10000,
headers: {
"User-Agent": "PatchMon-Server/1.0",
Accept: "application/vnd.github.v3+json",
},
});
const releases = response.data;
const latestRelease = releases[0];
if (!latestRelease) {
throw new Error("No releases found");
}
console.log(
`⬇️ Downloading binaries for version ${latestRelease.tag_name}...`,
);
// Download binaries for all architectures directly to agents folder
await this.downloadBinariesToAgentsFolder(latestRelease);
console.log("✅ Latest update downloaded successfully");
return {
success: true,
version: this.latestVersion,
downloadedArchitectures: this.supportedArchitectures,
message: `Successfully downloaded version ${this.latestVersion}`,
};
} catch (error) {
console.error("❌ Failed to download latest update:", error.message);
throw error;
}
}
async getAvailableVersions() {
// No local caching - only return latest from GitHub
if (this.latestVersion) {
return [this.latestVersion];
}
return [];
}
async getBinaryInfo(version, architecture) {
try {
// Always use local version if it matches the requested version
if (version === this.currentVersion && this.currentVersion) {
const binaryPath = await this.getBinaryPath(
this.currentVersion,
architecture,
);
const stats = await fs.stat(binaryPath);
// Calculate file hash
const fileBuffer = await fs.readFile(binaryPath);
const hash = crypto
.createHash("sha256")
.update(fileBuffer)
.digest("hex");
return {
version: this.currentVersion,
architecture,
size: stats.size,
hash,
lastModified: stats.mtime,
path: binaryPath,
};
}
// For other versions, try to find them in the agents folder
const binaryPath = await this.getBinaryPath(version, architecture);
const stats = await fs.stat(binaryPath);
// Calculate file hash
const fileBuffer = await fs.readFile(binaryPath);
const hash = crypto.createHash("sha256").update(fileBuffer).digest("hex");
return {
version,
architecture,
size: stats.size,
hash,
lastModified: stats.mtime,
path: binaryPath,
};
} catch (error) {
throw new Error(`Failed to get binary info: ${error.message}`);
}
}
/**
* Check if an agent needs an update and push notification if needed
* @param {string} agentApiId - The agent's API ID
* @param {string} agentVersion - The agent's current version
* @param {boolean} force - Force update regardless of version
* @returns {Object} Update check result
*/
async checkAndPushAgentUpdate(agentApiId, agentVersion, force = false) {
try {
console.log(
`🔍 Checking update for agent ${agentApiId} (version: ${agentVersion})`,
);
// Get current server version info
const versionInfo = await this.getVersionInfo();
if (!versionInfo.latestVersion) {
console.log(`⚠️ No latest version available for agent ${agentApiId}`);
return {
needsUpdate: false,
reason: "no-latest-version",
message: "No latest version available on server",
};
}
// Compare versions
const comparison = compareVersions(
agentVersion,
versionInfo.latestVersion,
);
const needsUpdate = force || comparison < 0;
if (needsUpdate) {
console.log(
`📤 Agent ${agentApiId} needs update: ${agentVersion} → ${versionInfo.latestVersion}`,
);
// Import agentWs service to push notification
const { pushUpdateNotification } = require("./agentWs");
const updateInfo = {
version: versionInfo.latestVersion,
force: force,
downloadUrl: `/api/v1/agent/binary/${versionInfo.latestVersion}/linux-amd64`,
message: force
? "Force update requested"
: `Update available: ${versionInfo.latestVersion}`,
};
const pushed = pushUpdateNotification(agentApiId, updateInfo);
if (pushed) {
console.log(`✅ Update notification pushed to agent ${agentApiId}`);
return {
needsUpdate: true,
reason: force ? "force-update" : "version-outdated",
message: `Update notification sent: ${agentVersion} → ${versionInfo.latestVersion}`,
targetVersion: versionInfo.latestVersion,
};
} else {
console.log(
`⚠️ Failed to push update notification to agent ${agentApiId} (not connected)`,
);
return {
needsUpdate: true,
reason: "agent-offline",
message: "Agent needs update but is not connected",
targetVersion: versionInfo.latestVersion,
};
}
} else {
console.log(`✅ Agent ${agentApiId} is up to date: ${agentVersion}`);
return {
needsUpdate: false,
reason: "up-to-date",
message: `Agent is up to date: ${agentVersion}`,
};
}
} catch (error) {
console.error(
`❌ Failed to check update for agent ${agentApiId}:`,
error.message,
);
return {
needsUpdate: false,
reason: "error",
message: `Error checking update: ${error.message}`,
};
}
}
/**
* Check and push updates to all connected agents
* @param {boolean} force - Force update regardless of version
* @returns {Object} Bulk update result
*/
async checkAndPushUpdatesToAll(force = false) {
try {
console.log(
`🔍 Checking updates for all connected agents (force: ${force})`,
);
// Import agentWs service to get connected agents
const { pushUpdateNotificationToAll } = require("./agentWs");
const versionInfo = await this.getVersionInfo();
if (!versionInfo.latestVersion) {
return {
success: false,
message: "No latest version available on server",
updatedAgents: 0,
totalAgents: 0,
};
}
const updateInfo = {
version: versionInfo.latestVersion,
force: force,
downloadUrl: `/api/v1/agent/binary/${versionInfo.latestVersion}/linux-amd64`,
message: force
? "Force update requested for all agents"
: `Update available: ${versionInfo.latestVersion}`,
};
const result = await pushUpdateNotificationToAll(updateInfo);
console.log(
`✅ Bulk update notification sent to ${result.notifiedCount} agents`,
);
return {
success: true,
message: `Update notifications sent to ${result.notifiedCount} agents`,
updatedAgents: result.notifiedCount,
totalAgents: result.totalAgents,
targetVersion: versionInfo.latestVersion,
};
} catch (error) {
console.error("❌ Failed to push updates to all agents:", error.message);
return {
success: false,
message: `Error pushing updates: ${error.message}`,
updatedAgents: 0,
totalAgents: 0,
};
}
}
}
module.exports = new AgentVersionService();
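
For orientation, a minimal sketch of how this service could be wired into an Express router; the router file, mount path, and filename are assumptions, but the method calls match the service above and the route shape mirrors the downloadUrl strings it emits (/api/v1/agent/binary/{version}/{arch}):

// Hypothetical route wiring (sketch, not the project's actual router file)
const express = require("express");
const agentVersionService = require("./agentVersion"); // assumed filename

const router = express.Router();

// Stream an agent binary, downloading it on demand if missing locally
router.get("/binary/:version/:arch", (req, res) => {
  agentVersionService.serveBinary(req.params.version, req.params.arch, res);
});

// Report current/latest version status for the UI
router.get("/version", async (_req, res) => {
  res.json(await agentVersionService.getVersionInfo());
});

module.exports = router;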

View File

@@ -0,0 +1,282 @@
// Lightweight WebSocket hub for agent connections
// Auth: X-API-ID / X-API-KEY headers on the upgrade request
const WebSocket = require("ws");
const url = require("node:url");
// Connection registry by api_id
const apiIdToSocket = new Map();
// Connection metadata (secure/insecure)
// Map<api_id, { ws: WebSocket, secure: boolean }>
const connectionMetadata = new Map();
// Subscribers for connection status changes (for SSE)
// Map<api_id, Set<callback>>
const connectionChangeSubscribers = new Map();
let wss;
let prisma;
function init(server, prismaClient) {
prisma = prismaClient;
wss = new WebSocket.Server({ noServer: true });
// Handle HTTP upgrade events and authenticate before accepting WS
server.on("upgrade", async (request, socket, head) => {
try {
const { pathname } = url.parse(request.url);
if (!pathname) {
socket.destroy();
return;
}
// Handle Bull Board WebSocket connections
if (pathname.startsWith("/bullboard")) {
// For Bull Board, we need to check if the user is authenticated
// Check for session cookie or authorization header
const sessionCookie = request.headers.cookie?.match(
/bull-board-session=([^;]+)/,
)?.[1];
const authHeader = request.headers.authorization;
if (!sessionCookie && !authHeader) {
socket.destroy();
return;
}
// Accept the WebSocket connection for Bull Board
wss.handleUpgrade(request, socket, head, (ws) => {
ws.on("message", (message) => {
// Echo back for Bull Board WebSocket
ws.send(message);
});
});
return;
}
// Handle agent WebSocket connections
if (!pathname.startsWith("/api/")) {
socket.destroy();
return;
}
// Expected path: /api/{v}/agents/ws
const parts = pathname.split("/").filter(Boolean); // [api, v1, agents, ws]
if (parts.length !== 4 || parts[2] !== "agents" || parts[3] !== "ws") {
socket.destroy();
return;
}
const apiId = request.headers["x-api-id"];
const apiKey = request.headers["x-api-key"];
if (!apiId || !apiKey) {
socket.destroy();
return;
}
// Validate credentials
const host = await prisma.hosts.findUnique({ where: { api_id: apiId } });
if (!host || host.api_key !== apiKey) {
socket.destroy();
return;
}
wss.handleUpgrade(request, socket, head, (ws) => {
ws.apiId = apiId;
// Detect if connection is secure (wss://) or not (ws://)
const isSecure =
socket.encrypted || request.headers["x-forwarded-proto"] === "https";
apiIdToSocket.set(apiId, ws);
connectionMetadata.set(apiId, { ws, secure: isSecure });
console.log(
`[agent-ws] connected api_id=${apiId} protocol=${isSecure ? "wss" : "ws"} total=${apiIdToSocket.size}`,
);
// Notify subscribers of connection
notifyConnectionChange(apiId, true);
ws.on("message", () => {
// Currently we don't need to handle agent->server messages
});
ws.on("close", () => {
const existing = apiIdToSocket.get(apiId);
if (existing === ws) {
apiIdToSocket.delete(apiId);
connectionMetadata.delete(apiId);
// Notify subscribers of disconnection
notifyConnectionChange(apiId, false);
}
console.log(
`[agent-ws] disconnected api_id=${apiId} total=${apiIdToSocket.size}`,
);
});
// Optional: greet/ack
safeSend(ws, JSON.stringify({ type: "connected" }));
});
} catch (_err) {
try {
socket.destroy();
} catch {
/* ignore */
}
}
});
}
function safeSend(ws, data) {
if (ws && ws.readyState === WebSocket.OPEN) {
try {
ws.send(data);
} catch {
/* ignore */
}
}
}
function broadcastSettingsUpdate(newInterval) {
const payload = JSON.stringify({
type: "settings_update",
update_interval: newInterval,
});
for (const [, ws] of apiIdToSocket) {
safeSend(ws, payload);
}
}
function pushReportNow(apiId) {
const ws = apiIdToSocket.get(apiId);
safeSend(ws, JSON.stringify({ type: "report_now" }));
}
function pushSettingsUpdate(apiId, newInterval) {
const ws = apiIdToSocket.get(apiId);
safeSend(
ws,
JSON.stringify({ type: "settings_update", update_interval: newInterval }),
);
}
function pushUpdateNotification(apiId, updateInfo) {
const ws = apiIdToSocket.get(apiId);
if (ws && ws.readyState === WebSocket.OPEN) {
safeSend(
ws,
JSON.stringify({
type: "update_notification",
version: updateInfo.version,
force: updateInfo.force || false,
downloadUrl: updateInfo.downloadUrl,
message: updateInfo.message,
}),
);
console.log(
`📤 Pushed update notification to agent ${apiId}: version ${updateInfo.version}`,
);
return true;
} else {
console.log(
`⚠️ Agent ${apiId} not connected, cannot push update notification`,
);
return false;
}
}
async function pushUpdateNotificationToAll(updateInfo) {
let notifiedCount = 0;
let failedCount = 0;
for (const [apiId, ws] of apiIdToSocket) {
if (ws && ws.readyState === WebSocket.OPEN) {
try {
safeSend(
ws,
JSON.stringify({
type: "update_notification",
version: updateInfo.version,
force: updateInfo.force || false,
message: updateInfo.message,
}),
);
notifiedCount++;
console.log(
`📤 Pushed update notification to agent ${apiId}: version ${updateInfo.version}`,
);
} catch (error) {
failedCount++;
console.error(`❌ Failed to notify agent ${apiId}:`, error.message);
}
} else {
failedCount++;
}
}
console.log(
`📤 Update notification sent to ${notifiedCount} agents, ${failedCount} failed`,
);
return { notifiedCount, failedCount };
}
// Notify all subscribers when connection status changes
function notifyConnectionChange(apiId, connected) {
const subscribers = connectionChangeSubscribers.get(apiId);
if (subscribers) {
for (const callback of subscribers) {
try {
callback(connected);
} catch (err) {
console.error(`[agent-ws] error notifying subscriber:`, err);
}
}
}
}
// Subscribe to connection status changes for a specific api_id
function subscribeToConnectionChanges(apiId, callback) {
if (!connectionChangeSubscribers.has(apiId)) {
connectionChangeSubscribers.set(apiId, new Set());
}
connectionChangeSubscribers.get(apiId).add(callback);
// Return unsubscribe function
return () => {
const subscribers = connectionChangeSubscribers.get(apiId);
if (subscribers) {
subscribers.delete(callback);
if (subscribers.size === 0) {
connectionChangeSubscribers.delete(apiId);
}
}
};
}
module.exports = {
init,
broadcastSettingsUpdate,
pushReportNow,
pushSettingsUpdate,
pushUpdateNotification,
pushUpdateNotificationToAll,
// Expose read-only view of connected agents
getConnectedApiIds: () => Array.from(apiIdToSocket.keys()),
isConnected: (apiId) => {
const ws = apiIdToSocket.get(apiId);
return !!ws && ws.readyState === WebSocket.OPEN;
},
// Get connection info including protocol (ws/wss)
getConnectionInfo: (apiId) => {
const metadata = connectionMetadata.get(apiId);
if (!metadata) {
return { connected: false, secure: false };
}
const connected = metadata.ws.readyState === WebSocket.OPEN;
return { connected, secure: metadata.secure };
},
// Subscribe to connection status changes (for SSE)
subscribeToConnectionChanges,
};
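
A minimal wiring sketch for this hub, assuming an Express app and the shared Prisma client; init() and the exported helpers are real, while the SSE route is illustrative only:

const http = require("node:http");
const express = require("express");
const agentWs = require("./agentWs");
const { getPrismaClient } = require("../config/prisma");

const app = express();
const server = http.createServer(app);
agentWs.init(server, getPrismaClient()); // attaches the upgrade handler

// Illustrative SSE endpoint streaming connection changes for one agent
app.get("/api/v1/hosts/:apiId/ws-status", (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  const send = (connected) =>
    res.write(`data: ${JSON.stringify({ connected })}\n\n`);
  send(agentWs.isConnected(req.params.apiId)); // initial state
  const unsubscribe = agentWs.subscribeToConnectionChanges(
    req.params.apiId,
    send,
  );
  req.on("close", unsubscribe); // avoid leaking subscribers
});

server.listen(3001);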

View File

@@ -0,0 +1,153 @@
const { prisma } = require("./shared/prisma");
const { compareVersions, checkPublicRepo } = require("./shared/utils");
/**
* GitHub Update Check Automation
* Checks for new releases on GitHub using HTTPS API
*/
class GitHubUpdateCheck {
constructor(queueManager) {
this.queueManager = queueManager;
this.queueName = "github-update-check";
}
/**
* Process GitHub update check job
*/
async process(_job) {
const startTime = Date.now();
console.log("🔍 Starting GitHub update check...");
try {
// Get settings
const settings = await prisma.settings.findFirst();
const DEFAULT_GITHUB_REPO = "https://github.com/PatchMon/PatchMon.git";
const repoUrl = settings?.githubRepoUrl || DEFAULT_GITHUB_REPO;
let owner, repo;
// Parse GitHub repository URL (supports both HTTPS and SSH formats)
if (repoUrl.includes("git@github.com:")) {
const match = repoUrl.match(/git@github\.com:([^/]+)\/([^/]+)\.git/);
if (match) {
[, owner, repo] = match;
}
} else if (repoUrl.includes("github.com/")) {
const match = repoUrl.match(
/github\.com\/([^/]+)\/([^/]+?)(?:\.git)?$/,
);
if (match) {
[, owner, repo] = match;
}
}
if (!owner || !repo) {
throw new Error("Could not parse GitHub repository URL");
}
// Always use HTTPS GitHub API (simpler and more reliable)
const latestVersion = await checkPublicRepo(owner, repo);
if (!latestVersion) {
throw new Error("Could not determine latest version");
}
// Read version from package.json
let currentVersion = "1.3.0"; // fallback
try {
const packageJson = require("../../../package.json");
if (packageJson?.version) {
currentVersion = packageJson.version;
}
} catch (packageError) {
console.warn(
"Could not read version from package.json:",
packageError.message,
);
}
const isUpdateAvailable =
compareVersions(latestVersion, currentVersion) > 0;
// Update settings with check results (guarded - settings may be null
// when the default repository URL fallback was used above)
if (settings) {
await prisma.settings.update({
where: { id: settings.id },
data: {
last_update_check: new Date(),
update_available: isUpdateAvailable,
latest_version: latestVersion,
},
});
}
const executionTime = Date.now() - startTime;
console.log(
`✅ GitHub update check completed in ${executionTime}ms - Current: ${currentVersion}, Latest: ${latestVersion}, Update Available: ${isUpdateAvailable}`,
);
return {
success: true,
currentVersion,
latestVersion,
isUpdateAvailable,
executionTime,
};
} catch (error) {
const executionTime = Date.now() - startTime;
console.error(
`❌ GitHub update check failed after ${executionTime}ms:`,
error.message,
);
// Update last check time even on error
try {
const settings = await prisma.settings.findFirst();
if (settings) {
await prisma.settings.update({
where: { id: settings.id },
data: {
last_update_check: new Date(),
update_available: false,
},
});
}
} catch (updateError) {
console.error(
"❌ Error updating last check time:",
updateError.message,
);
}
throw error;
}
}
/**
* Schedule recurring GitHub update check (daily at midnight)
*/
async schedule() {
const job = await this.queueManager.queues[this.queueName].add(
"github-update-check",
{},
{
repeat: { cron: "0 0 * * *" }, // Daily at midnight
jobId: "github-update-check-recurring",
},
);
console.log("✅ GitHub update check scheduled");
return job;
}
/**
* Trigger manual GitHub update check
*/
async triggerManual() {
const job = await this.queueManager.queues[this.queueName].add(
"github-update-check-manual",
{},
{ priority: 1 },
);
console.log("✅ Manual GitHub update check triggered");
return job;
}
}
module.exports = GitHubUpdateCheck;
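
If a caller needs the result of a manual check rather than just the queued job, BullMQ's QueueEvents can be used to wait for completion; a sketch, assuming the queue manager below is already initialized and the require paths match the actual file layout:

const { QueueEvents } = require("bullmq");
const { queueManager } = require("./index"); // assumed entry point
const { redisConnection } = require("./shared/redis");

async function runUpdateCheckNow() {
  // QueueEvents must watch the same queue name over the same Redis
  const events = new QueueEvents("github-update-check", {
    connection: redisConnection,
  });
  await events.waitUntilReady();
  const job = await queueManager.triggerGitHubUpdateCheck();
  const result = await job.waitUntilFinished(events);
  await events.close();
  return result; // { success, currentVersion, latestVersion, isUpdateAvailable, ... }
}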

View File

@@ -0,0 +1,388 @@
const { Queue, Worker } = require("bullmq");
const { redis, redisConnection } = require("./shared/redis");
const { prisma } = require("./shared/prisma");
const agentWs = require("../agentWs");
// Import automation classes
const GitHubUpdateCheck = require("./githubUpdateCheck");
const SessionCleanup = require("./sessionCleanup");
const OrphanedRepoCleanup = require("./orphanedRepoCleanup");
const OrphanedPackageCleanup = require("./orphanedPackageCleanup");
// Queue names
const QUEUE_NAMES = {
GITHUB_UPDATE_CHECK: "github-update-check",
SESSION_CLEANUP: "session-cleanup",
ORPHANED_REPO_CLEANUP: "orphaned-repo-cleanup",
ORPHANED_PACKAGE_CLEANUP: "orphaned-package-cleanup",
AGENT_COMMANDS: "agent-commands",
};
/**
* Main Queue Manager
* Manages all BullMQ queues and workers
*/
class QueueManager {
constructor() {
this.queues = {};
this.workers = {};
this.automations = {};
this.isInitialized = false;
}
/**
* Initialize all queues, workers, and automations
*/
async initialize() {
try {
console.log("✅ Redis connection successful");
// Initialize queues
await this.initializeQueues();
// Initialize automation classes
await this.initializeAutomations();
// Initialize workers
await this.initializeWorkers();
// Setup event listeners
this.setupEventListeners();
this.isInitialized = true;
console.log("✅ Queue manager initialized successfully");
} catch (error) {
console.error("❌ Failed to initialize queue manager:", error.message);
throw error;
}
}
/**
* Initialize all queues
*/
async initializeQueues() {
for (const [_key, queueName] of Object.entries(QUEUE_NAMES)) {
this.queues[queueName] = new Queue(queueName, {
connection: redisConnection,
defaultJobOptions: {
removeOnComplete: 50, // Keep last 50 completed jobs
removeOnFail: 20, // Keep last 20 failed jobs
attempts: 3, // Retry failed jobs 3 times
backoff: {
type: "exponential",
delay: 2000,
},
},
});
console.log(`✅ Queue '${queueName}' initialized`);
}
}
/**
* Initialize automation classes
*/
async initializeAutomations() {
this.automations[QUEUE_NAMES.GITHUB_UPDATE_CHECK] = new GitHubUpdateCheck(
this,
);
this.automations[QUEUE_NAMES.SESSION_CLEANUP] = new SessionCleanup(this);
this.automations[QUEUE_NAMES.ORPHANED_REPO_CLEANUP] =
new OrphanedRepoCleanup(this);
this.automations[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP] =
new OrphanedPackageCleanup(this);
console.log("✅ All automation classes initialized");
}
/**
* Initialize all workers
*/
async initializeWorkers() {
// Optimized worker options to reduce Redis connections
const workerOptions = {
connection: redisConnection,
concurrency: 1, // Keep concurrency low to reduce connections
// Connection optimization
maxStalledCount: 1,
stalledInterval: 30000,
// Reduce connection churn
settings: {
stalledInterval: 30000,
maxStalledCount: 1,
},
};
// GitHub Update Check Worker
this.workers[QUEUE_NAMES.GITHUB_UPDATE_CHECK] = new Worker(
QUEUE_NAMES.GITHUB_UPDATE_CHECK,
this.automations[QUEUE_NAMES.GITHUB_UPDATE_CHECK].process.bind(
this.automations[QUEUE_NAMES.GITHUB_UPDATE_CHECK],
),
workerOptions,
);
// Session Cleanup Worker
this.workers[QUEUE_NAMES.SESSION_CLEANUP] = new Worker(
QUEUE_NAMES.SESSION_CLEANUP,
this.automations[QUEUE_NAMES.SESSION_CLEANUP].process.bind(
this.automations[QUEUE_NAMES.SESSION_CLEANUP],
),
workerOptions,
);
// Orphaned Repo Cleanup Worker
this.workers[QUEUE_NAMES.ORPHANED_REPO_CLEANUP] = new Worker(
QUEUE_NAMES.ORPHANED_REPO_CLEANUP,
this.automations[QUEUE_NAMES.ORPHANED_REPO_CLEANUP].process.bind(
this.automations[QUEUE_NAMES.ORPHANED_REPO_CLEANUP],
),
workerOptions,
);
// Orphaned Package Cleanup Worker
this.workers[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP] = new Worker(
QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP,
this.automations[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP].process.bind(
this.automations[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP],
),
workerOptions,
);
// Agent Commands Worker
this.workers[QUEUE_NAMES.AGENT_COMMANDS] = new Worker(
QUEUE_NAMES.AGENT_COMMANDS,
async (job) => {
const { api_id, type } = job.data;
console.log(`Processing agent command: ${type} for ${api_id}`);
// Send command via WebSocket based on type
if (type === "report_now") {
agentWs.pushReportNow(api_id);
} else if (type === "settings_update") {
// For settings update, we need additional data
const { update_interval } = job.data;
agentWs.pushSettingsUpdate(api_id, update_interval);
} else {
console.error(`Unknown agent command type: ${type}`);
}
},
workerOptions,
);
console.log(
"✅ All workers initialized with optimized connection settings",
);
}
/**
* Setup event listeners for all queues
*/
setupEventListeners() {
for (const queueName of Object.values(QUEUE_NAMES)) {
const queue = this.queues[queueName];
queue.on("error", (error) => {
console.error(`❌ Queue '${queueName}' experienced an error:`, error);
});
queue.on("failed", (job, err) => {
console.error(
`❌ Job '${job.id}' in queue '${queueName}' failed:`,
err,
);
});
queue.on("completed", (job) => {
console.log(`✅ Job '${job.id}' in queue '${queueName}' completed.`);
});
}
console.log("✅ Queue events initialized");
}
/**
* Schedule all recurring jobs
*/
async scheduleAllJobs() {
await this.automations[QUEUE_NAMES.GITHUB_UPDATE_CHECK].schedule();
await this.automations[QUEUE_NAMES.SESSION_CLEANUP].schedule();
await this.automations[QUEUE_NAMES.ORPHANED_REPO_CLEANUP].schedule();
await this.automations[QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP].schedule();
}
/**
* Manual job triggers
*/
async triggerGitHubUpdateCheck() {
return this.automations[QUEUE_NAMES.GITHUB_UPDATE_CHECK].triggerManual();
}
async triggerSessionCleanup() {
return this.automations[QUEUE_NAMES.SESSION_CLEANUP].triggerManual();
}
async triggerOrphanedRepoCleanup() {
return this.automations[QUEUE_NAMES.ORPHANED_REPO_CLEANUP].triggerManual();
}
async triggerOrphanedPackageCleanup() {
return this.automations[
QUEUE_NAMES.ORPHANED_PACKAGE_CLEANUP
].triggerManual();
}
/**
* Get queue statistics
*/
async getQueueStats(queueName) {
const queue = this.queues[queueName];
if (!queue) {
throw new Error(`Queue ${queueName} not found`);
}
const [waiting, active, completed, failed, delayed] = await Promise.all([
queue.getWaiting(),
queue.getActive(),
queue.getCompleted(),
queue.getFailed(),
queue.getDelayed(),
]);
return {
waiting: waiting.length,
active: active.length,
completed: completed.length,
failed: failed.length,
delayed: delayed.length,
};
}
/**
* Get all queue statistics
*/
async getAllQueueStats() {
const stats = {};
for (const queueName of Object.values(QUEUE_NAMES)) {
stats[queueName] = await this.getQueueStats(queueName);
}
return stats;
}
/**
* Get recent jobs for a queue
*/
async getRecentJobs(queueName, limit = 10) {
const queue = this.queues[queueName];
if (!queue) {
throw new Error(`Queue ${queueName} not found`);
}
const [completed, failed] = await Promise.all([
queue.getCompleted(0, limit - 1),
queue.getFailed(0, limit - 1),
]);
return [...completed, ...failed]
.sort((a, b) => new Date(b.finishedOn) - new Date(a.finishedOn))
.slice(0, limit);
}
/**
* Get jobs for a specific host (by API ID)
*/
async getHostJobs(apiId, limit = 20) {
const queue = this.queues[QUEUE_NAMES.AGENT_COMMANDS];
if (!queue) {
throw new Error(`Queue ${QUEUE_NAMES.AGENT_COMMANDS} not found`);
}
console.log(`[getHostJobs] Looking for jobs with api_id: ${apiId}`);
// Get active queue status (waiting, active, delayed, failed)
const [waiting, active, delayed, failed] = await Promise.all([
queue.getWaiting(),
queue.getActive(),
queue.getDelayed(),
queue.getFailed(),
]);
// Filter by API ID
const filterByApiId = (jobs) =>
jobs.filter((job) => job.data && job.data.api_id === apiId);
const waitingCount = filterByApiId(waiting).length;
const activeCount = filterByApiId(active).length;
const delayedCount = filterByApiId(delayed).length;
const failedCount = filterByApiId(failed).length;
console.log(
`[getHostJobs] Queue status - Waiting: ${waitingCount}, Active: ${activeCount}, Delayed: ${delayedCount}, Failed: ${failedCount}`,
);
// Get job history from database (shows all attempts and status changes)
const jobHistory = await prisma.job_history.findMany({
where: {
api_id: apiId,
},
orderBy: {
created_at: "desc",
},
take: limit,
});
console.log(
`[getHostJobs] Found ${jobHistory.length} job history records for api_id: ${apiId}`,
);
return {
waiting: waitingCount,
active: activeCount,
delayed: delayedCount,
failed: failedCount,
jobHistory: jobHistory.map((job) => ({
id: job.id,
job_id: job.job_id,
job_name: job.job_name,
status: job.status,
attempt_number: job.attempt_number,
error_message: job.error_message,
output: job.output,
created_at: job.created_at,
updated_at: job.updated_at,
completed_at: job.completed_at,
})),
};
}
/**
* Graceful shutdown
*/
async shutdown() {
console.log("🛑 Shutting down queue manager...");
for (const queueName of Object.keys(this.queues)) {
try {
await this.queues[queueName].close();
} catch (e) {
console.warn(
`⚠️ Failed to close queue '${queueName}':`,
e?.message || e,
);
}
if (this.workers?.[queueName]) {
try {
await this.workers[queueName].close();
} catch (e) {
console.warn(
`⚠️ Failed to close worker for '${queueName}':`,
e?.message || e,
);
}
}
}
await redis.quit();
console.log("✅ Queue manager shutdown complete");
}
}
const queueManager = new QueueManager();
module.exports = { queueManager, QUEUE_NAMES };
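
Typical boot and shutdown wiring for the singleton, as a sketch (the require path is an assumption):

const { queueManager } = require("./services/automation"); // assumed path

async function start() {
  await queueManager.initialize(); // queues, automations, workers, listeners
  await queueManager.scheduleAllJobs(); // recurring jobs with stable jobIds
}

process.on("SIGTERM", async () => {
  await queueManager.shutdown(); // closes queues and workers, then Redis
  process.exit(0);
});

start().catch((err) => {
  console.error("Failed to start queue manager:", err);
  process.exit(1);
});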

View File

@@ -0,0 +1,116 @@
const { prisma } = require("./shared/prisma");
/**
* Orphaned Package Cleanup Automation
* Removes packages with no associated hosts
*/
class OrphanedPackageCleanup {
constructor(queueManager) {
this.queueManager = queueManager;
this.queueName = "orphaned-package-cleanup";
}
/**
* Process orphaned package cleanup job
*/
async process(_job) {
const startTime = Date.now();
console.log("🧹 Starting orphaned package cleanup...");
try {
// Find packages with 0 hosts
const orphanedPackages = await prisma.packages.findMany({
where: {
host_packages: {
none: {},
},
},
include: {
_count: {
select: {
host_packages: true,
},
},
},
});
let deletedCount = 0;
const deletedPackages = [];
// Delete orphaned packages
for (const pkg of orphanedPackages) {
try {
await prisma.packages.delete({
where: { id: pkg.id },
});
deletedCount++;
deletedPackages.push({
id: pkg.id,
name: pkg.name,
description: pkg.description,
category: pkg.category,
latest_version: pkg.latest_version,
});
console.log(
`🗑️ Deleted orphaned package: ${pkg.name} (${pkg.latest_version})`,
);
} catch (deleteError) {
console.error(
`❌ Failed to delete package ${pkg.id}:`,
deleteError.message,
);
}
}
const executionTime = Date.now() - startTime;
console.log(
`✅ Orphaned package cleanup completed in ${executionTime}ms - Deleted ${deletedCount} packages`,
);
return {
success: true,
deletedCount,
deletedPackages,
executionTime,
};
} catch (error) {
const executionTime = Date.now() - startTime;
console.error(
`❌ Orphaned package cleanup failed after ${executionTime}ms:`,
error.message,
);
throw error;
}
}
/**
* Schedule recurring orphaned package cleanup (daily at 3 AM)
*/
async schedule() {
const job = await this.queueManager.queues[this.queueName].add(
"orphaned-package-cleanup",
{},
{
repeat: { cron: "0 3 * * *" }, // Daily at 3 AM
jobId: "orphaned-package-cleanup-recurring",
},
);
console.log("✅ Orphaned package cleanup scheduled");
return job;
}
/**
* Trigger manual orphaned package cleanup
*/
async triggerManual() {
const job = await this.queueManager.queues[this.queueName].add(
"orphaned-package-cleanup-manual",
{},
{ priority: 1 },
);
console.log("✅ Manual orphaned package cleanup triggered");
return job;
}
}
module.exports = OrphanedPackageCleanup;
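
The host_packages: { none: {} } relation filter is what identifies orphans; a dry-run variant that only counts them, using the same shared client, might look like this:

const { prisma } = require("./shared/prisma");

// Dry run: count packages with no associated hosts, deleting nothing
async function countOrphanedPackages() {
  return prisma.packages.count({
    where: { host_packages: { none: {} } },
  });
}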

View File

@@ -0,0 +1,114 @@
const { prisma } = require("./shared/prisma");
/**
* Orphaned Repository Cleanup Automation
* Removes repositories with no associated hosts
*/
class OrphanedRepoCleanup {
constructor(queueManager) {
this.queueManager = queueManager;
this.queueName = "orphaned-repo-cleanup";
}
/**
* Process orphaned repository cleanup job
*/
async process(_job) {
const startTime = Date.now();
console.log("🧹 Starting orphaned repository cleanup...");
try {
// Find repositories with 0 hosts
const orphanedRepos = await prisma.repositories.findMany({
where: {
host_repositories: {
none: {},
},
},
include: {
_count: {
select: {
host_repositories: true,
},
},
},
});
let deletedCount = 0;
const deletedRepos = [];
// Delete orphaned repositories
for (const repo of orphanedRepos) {
try {
await prisma.repositories.delete({
where: { id: repo.id },
});
deletedCount++;
deletedRepos.push({
id: repo.id,
name: repo.name,
url: repo.url,
});
console.log(
`🗑️ Deleted orphaned repository: ${repo.name} (${repo.url})`,
);
} catch (deleteError) {
console.error(
`❌ Failed to delete repository ${repo.id}:`,
deleteError.message,
);
}
}
const executionTime = Date.now() - startTime;
console.log(
`✅ Orphaned repository cleanup completed in ${executionTime}ms - Deleted ${deletedCount} repositories`,
);
return {
success: true,
deletedCount,
deletedRepos,
executionTime,
};
} catch (error) {
const executionTime = Date.now() - startTime;
console.error(
`❌ Orphaned repository cleanup failed after ${executionTime}ms:`,
error.message,
);
throw error;
}
}
/**
* Schedule recurring orphaned repository cleanup (daily at 2 AM)
*/
async schedule() {
const job = await this.queueManager.queues[this.queueName].add(
"orphaned-repo-cleanup",
{},
{
repeat: { cron: "0 2 * * *" }, // Daily at 2 AM
jobId: "orphaned-repo-cleanup-recurring",
},
);
console.log("✅ Orphaned repository cleanup scheduled");
return job;
}
/**
* Trigger manual orphaned repository cleanup
*/
async triggerManual() {
const job = await this.queueManager.queues[this.queueName].add(
"orphaned-repo-cleanup-manual",
{},
{ priority: 1 },
);
console.log("✅ Manual orphaned repository cleanup triggered");
return job;
}
}
module.exports = OrphanedRepoCleanup;

View File

@@ -0,0 +1,77 @@
const { prisma } = require("./shared/prisma");
/**
* Session Cleanup Automation
* Cleans up expired user sessions
*/
class SessionCleanup {
constructor(queueManager) {
this.queueManager = queueManager;
this.queueName = "session-cleanup";
}
/**
* Process session cleanup job
*/
async process(_job) {
const startTime = Date.now();
console.log("🧹 Starting session cleanup...");
try {
const result = await prisma.user_sessions.deleteMany({
where: {
OR: [{ expires_at: { lt: new Date() } }, { is_revoked: true }],
},
});
const executionTime = Date.now() - startTime;
console.log(
`✅ Session cleanup completed in ${executionTime}ms - Cleaned up ${result.count} expired sessions`,
);
return {
success: true,
sessionsCleaned: result.count,
executionTime,
};
} catch (error) {
const executionTime = Date.now() - startTime;
console.error(
`❌ Session cleanup failed after ${executionTime}ms:`,
error.message,
);
throw error;
}
}
/**
* Schedule recurring session cleanup (every hour)
*/
async schedule() {
const job = await this.queueManager.queues[this.queueName].add(
"session-cleanup",
{},
{
repeat: { cron: "0 * * * *" }, // Every hour
jobId: "session-cleanup-recurring",
},
);
console.log("✅ Session cleanup scheduled");
return job;
}
/**
* Trigger manual session cleanup
*/
async triggerManual() {
const job = await this.queueManager.queues[this.queueName].add(
"session-cleanup-manual",
{},
{ priority: 1 },
);
console.log("✅ Manual session cleanup triggered");
return job;
}
}
module.exports = SessionCleanup;

View File

@@ -0,0 +1,5 @@
const { getPrismaClient } = require("../../../config/prisma");
const prisma = getPrismaClient();
module.exports = { prisma };

View File

@@ -0,0 +1,56 @@
const IORedis = require("ioredis");
// Redis connection configuration with connection pooling
const redisConnection = {
host: process.env.REDIS_HOST || "localhost",
port: parseInt(process.env.REDIS_PORT, 10) || 6379,
password: process.env.REDIS_PASSWORD || undefined,
username: process.env.REDIS_USER || undefined,
db: parseInt(process.env.REDIS_DB, 10) || 0,
// Connection pooling settings
lazyConnect: true,
keepAlive: 30000,
connectTimeout: 30000, // Increased from 10s to 30s
commandTimeout: 30000, // Increased from 5s to 30s
enableReadyCheck: false,
// Reduce connection churn
family: 4, // Force IPv4
// Retry settings
retryDelayOnClusterDown: 300,
retryDelayOnFailover: 100,
maxRetriesPerRequest: null, // BullMQ requires this to be null
// Connection pool settings
maxLoadingTimeout: 30000,
};
// Create Redis connection with singleton pattern
let redisInstance = null;
function getRedisConnection() {
if (!redisInstance) {
redisInstance = new IORedis(redisConnection);
// Handle graceful shutdown
process.on("beforeExit", async () => {
await redisInstance.quit();
});
process.on("SIGINT", async () => {
await redisInstance.quit();
process.exit(0);
});
process.on("SIGTERM", async () => {
await redisInstance.quit();
process.exit(0);
});
}
return redisInstance;
}
module.exports = {
redis: getRedisConnection(),
redisConnection,
getRedisConnection,
};
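
Since the module eagerly creates the singleton, every consumer shares one pooled connection; a small sketch of ad-hoc use (the key name is illustrative only):

const { redis } = require("./shared/redis");

async function cacheLatestVersion(version) {
  // "EX" sets a TTL in seconds on the key
  await redis.set("patchmon:latest_version", version, "EX", 3600);
  return redis.get("patchmon:latest_version");
}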

View File

@@ -0,0 +1,82 @@
// Common utilities for automation jobs
/**
* Compare two semantic versions
* @param {string} version1 - First version
* @param {string} version2 - Second version
* @returns {number} - 1 if version1 > version2, -1 if version1 < version2, 0 if equal
*/
function compareVersions(version1, version2) {
const v1parts = version1.split(".").map(Number);
const v2parts = version2.split(".").map(Number);
const maxLength = Math.max(v1parts.length, v2parts.length);
for (let i = 0; i < maxLength; i++) {
const v1part = v1parts[i] || 0;
const v2part = v2parts[i] || 0;
if (v1part > v2part) return 1;
if (v1part < v2part) return -1;
}
return 0;
}
/**
* Check public GitHub repository for latest release
* @param {string} owner - Repository owner
* @param {string} repo - Repository name
* @returns {Promise<string|null>} - Latest version or null
*/
async function checkPublicRepo(owner, repo) {
try {
const httpsRepoUrl = `https://api.github.com/repos/${owner}/${repo}/releases/latest`;
let currentVersion = "1.3.0"; // fallback
try {
const packageJson = require("../../../package.json");
if (packageJson?.version) {
currentVersion = packageJson.version;
}
} catch (packageError) {
console.warn(
"Could not read version from package.json for User-Agent, using fallback:",
packageError.message,
);
}
const response = await fetch(httpsRepoUrl, {
method: "GET",
headers: {
Accept: "application/vnd.github.v3+json",
"User-Agent": `PatchMon-Server/${currentVersion}`,
},
});
if (!response.ok) {
const errorText = await response.text();
if (
errorText.includes("rate limit") ||
errorText.includes("API rate limit")
) {
console.log("⚠️ GitHub API rate limit exceeded, skipping update check");
return null;
}
throw new Error(
`GitHub API error: ${response.status} ${response.statusText}`,
);
}
const releaseData = await response.json();
// Strip only a leading "v" so a tag like "v1.3.0" becomes "1.3.0"
return releaseData.tag_name.replace(/^v/, "");
} catch (error) {
console.error("GitHub API error:", error.message);
throw error;
}
}
module.exports = {
compareVersions,
checkPublicRepo,
};
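
A quick illustration of compareVersions semantics; missing components are treated as 0:

const { compareVersions } = require("./shared/utils");

compareVersions("1.3.0", "1.2.9"); //  1 - first argument is newer
compareVersions("1.3", "1.3.0");   //  0 - missing parts default to 0
compareVersions("1.2.9", "1.3.0"); // -1 - update available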

View File

@@ -0,0 +1,198 @@
const { getPrismaClient } = require("../config/prisma");
const { v4: uuidv4 } = require("uuid");
const prisma = getPrismaClient();
// Cached settings instance
let cachedSettings = null;
// Environment variable to settings field mapping
const ENV_TO_SETTINGS_MAP = {
SERVER_PROTOCOL: "server_protocol",
SERVER_HOST: "server_host",
SERVER_PORT: "server_port",
};
// Helper function to construct server URL without default ports
function constructServerUrl(protocol, host, port) {
const isHttps = protocol.toLowerCase() === "https";
const isHttp = protocol.toLowerCase() === "http";
// Don't append port if it's the default port for the protocol
if ((isHttps && port === 443) || (isHttp && port === 80)) {
return `${protocol}://${host}`.toLowerCase();
}
return `${protocol}://${host}:${port}`.toLowerCase();
}
// Create settings from environment variables and/or defaults
async function createSettingsFromEnvironment() {
const protocol = process.env.SERVER_PROTOCOL || "http";
const host = process.env.SERVER_HOST || "localhost";
const port = parseInt(process.env.SERVER_PORT, 10) || 3001;
const serverUrl = constructServerUrl(protocol, host, port);
const settings = await prisma.settings.create({
data: {
id: uuidv4(),
server_url: serverUrl,
server_protocol: protocol,
server_host: host,
server_port: port,
update_interval: 60,
auto_update: false,
signup_enabled: false,
ignore_ssl_self_signed: false,
updated_at: new Date(),
},
});
console.log("Created settings");
return settings;
}
// Sync environment variables with existing settings
async function syncEnvironmentToSettings(currentSettings) {
const updates = {};
let hasChanges = false;
// Check each environment variable mapping
for (const [envVar, settingsField] of Object.entries(ENV_TO_SETTINGS_MAP)) {
if (process.env[envVar]) {
const envValue = process.env[envVar];
const currentValue = currentSettings[settingsField];
// Convert environment value to appropriate type
let convertedValue = envValue;
if (settingsField === "server_port") {
convertedValue = parseInt(envValue, 10);
}
// Only update if values differ
if (currentValue !== convertedValue) {
updates[settingsField] = convertedValue;
hasChanges = true;
if (process.env.ENABLE_LOGGING === "true") {
console.log(
`Environment variable ${envVar} (${envValue}) differs from settings ${settingsField} (${currentValue}), updating...`,
);
}
}
}
}
// Construct server_url from components if any components were updated
const protocol = updates.server_protocol || currentSettings.server_protocol;
const host = updates.server_host || currentSettings.server_host;
const port = updates.server_port || currentSettings.server_port;
const constructedServerUrl = constructServerUrl(protocol, host, port);
// Update server_url if it differs from the constructed value
if (currentSettings.server_url !== constructedServerUrl) {
updates.server_url = constructedServerUrl;
hasChanges = true;
if (process.env.ENABLE_LOGGING === "true") {
console.log(`Updating server_url to: ${constructedServerUrl}`);
}
}
// Update settings if there are changes
if (hasChanges) {
const updatedSettings = await prisma.settings.update({
where: { id: currentSettings.id },
data: {
...updates,
updated_at: new Date(),
},
});
if (process.env.ENABLE_LOGGING === "true") {
console.log(
`Synced ${Object.keys(updates).length} environment variables to settings`,
);
}
return updatedSettings;
}
return currentSettings;
}
// Initialise settings - create from environment or sync existing
async function initSettings() {
if (cachedSettings) {
return cachedSettings;
}
try {
let settings = await prisma.settings.findFirst({
orderBy: { updated_at: "desc" },
});
if (!settings) {
// No settings exist, create from environment variables and defaults
settings = await createSettingsFromEnvironment();
} else {
// Settings exist, sync with environment variables
settings = await syncEnvironmentToSettings(settings);
}
// Cache the initialised settings
cachedSettings = settings;
return settings;
} catch (error) {
console.error("Failed to initialise settings:", error);
throw error;
}
}
// Get current settings (returns cached if available)
async function getSettings() {
return cachedSettings || (await initSettings());
}
// Update settings and refresh cache
async function updateSettings(id, updateData) {
try {
const updatedSettings = await prisma.settings.update({
where: { id },
data: {
...updateData,
updated_at: new Date(),
},
});
// Reconstruct server_url from components
const serverUrl = constructServerUrl(
updatedSettings.server_protocol,
updatedSettings.server_host,
updatedSettings.server_port,
);
if (updatedSettings.server_url !== serverUrl) {
updatedSettings.server_url = serverUrl;
await prisma.settings.update({
where: { id },
data: { server_url: serverUrl },
});
}
// Update cache
cachedSettings = updatedSettings;
return updatedSettings;
} catch (error) {
console.error("Failed to update settings:", error);
throw error;
}
}
// Invalidate cache (useful for testing or manual refresh)
function invalidateCache() {
cachedSettings = null;
}
module.exports = {
initSettings,
getSettings,
updateSettings,
invalidateCache,
syncEnvironmentToSettings, // Export for startup use
};
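
Boot-time and request-time usage, as a sketch; the require path and route handler are assumptions, the exported functions are real:

const { initSettings, getSettings } = require("./settings"); // assumed filename

async function boot() {
  // Creates settings from env on first run, otherwise syncs env overrides
  // into the existing row, then caches the result
  const settings = await initSettings();
  console.log(`Server URL resolved to ${settings.server_url}`);
}

// Later, e.g. inside a route handler - served from cache after init
async function handleGetInterval(_req, res) {
  const settings = await getSettings();
  res.json({ update_interval: settings.update_interval });
}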

View File

@@ -1,247 +0,0 @@
const { PrismaClient } = require('@prisma/client');
const { exec } = require('child_process');
const { promisify } = require('util');
const prisma = new PrismaClient();
const execAsync = promisify(exec);
class UpdateScheduler {
constructor() {
this.isRunning = false;
this.intervalId = null;
this.checkInterval = 24 * 60 * 60 * 1000; // 24 hours in milliseconds
}
// Start the scheduler
start() {
if (this.isRunning) {
console.log('Update scheduler is already running');
return;
}
console.log('🔄 Starting update scheduler...');
this.isRunning = true;
// Run initial check
this.checkForUpdates();
// Schedule regular checks
this.intervalId = setInterval(() => {
this.checkForUpdates();
}, this.checkInterval);
console.log(`✅ Update scheduler started - checking every ${this.checkInterval / (60 * 60 * 1000)} hours`);
}
// Stop the scheduler
stop() {
if (!this.isRunning) {
console.log('Update scheduler is not running');
return;
}
console.log('🛑 Stopping update scheduler...');
this.isRunning = false;
if (this.intervalId) {
clearInterval(this.intervalId);
this.intervalId = null;
}
console.log('✅ Update scheduler stopped');
}
// Check for updates
async checkForUpdates() {
try {
console.log('🔍 Checking for updates...');
// Get settings
const settings = await prisma.settings.findFirst();
if (!settings || !settings.githubRepoUrl) {
console.log('⚠️ No GitHub repository configured, skipping update check');
return;
}
// Extract owner and repo from GitHub URL
const repoUrl = settings.githubRepoUrl;
let owner, repo;
if (repoUrl.includes('git@github.com:')) {
const match = repoUrl.match(/git@github\.com:([^\/]+)\/([^\/]+)\.git/);
if (match) {
[, owner, repo] = match;
}
} else if (repoUrl.includes('github.com/')) {
const match = repoUrl.match(/github\.com\/([^\/]+)\/([^\/]+?)(?:\.git)?$/);
if (match) {
[, owner, repo] = match;
}
}
if (!owner || !repo) {
console.log('⚠️ Could not parse GitHub repository URL, skipping update check');
return;
}
let latestVersion;
const isPrivate = settings.repositoryType === 'private';
if (isPrivate) {
// Use SSH for private repositories
latestVersion = await this.checkPrivateRepo(settings, owner, repo);
} else {
// Use GitHub API for public repositories
latestVersion = await this.checkPublicRepo(owner, repo);
}
if (!latestVersion) {
console.log('⚠️ Could not determine latest version, skipping update check');
return;
}
const currentVersion = '1.2.5';
const isUpdateAvailable = this.compareVersions(latestVersion, currentVersion) > 0;
// Update settings with check results
await prisma.settings.update({
where: { id: settings.id },
data: {
lastUpdateCheck: new Date(),
updateAvailable: isUpdateAvailable,
latestVersion: latestVersion
}
});
console.log(`✅ Update check completed - Current: ${currentVersion}, Latest: ${latestVersion}, Update Available: ${isUpdateAvailable}`);
} catch (error) {
console.error('❌ Error checking for updates:', error.message);
// Update last check time even on error
try {
const settings = await prisma.settings.findFirst();
if (settings) {
await prisma.settings.update({
where: { id: settings.id },
data: {
lastUpdateCheck: new Date(),
updateAvailable: false
}
});
}
} catch (updateError) {
console.error('❌ Error updating last check time:', updateError.message);
}
}
}
// Check private repository using SSH
async checkPrivateRepo(settings, owner, repo) {
try {
let sshKeyPath = settings.sshKeyPath;
// Try to find SSH key if not configured
if (!sshKeyPath) {
const possibleKeyPaths = [
'/root/.ssh/id_ed25519',
'/root/.ssh/id_rsa',
'/home/patchmon/.ssh/id_ed25519',
'/home/patchmon/.ssh/id_rsa',
'/var/www/.ssh/id_ed25519',
'/var/www/.ssh/id_rsa'
];
for (const path of possibleKeyPaths) {
try {
require('fs').accessSync(path);
sshKeyPath = path;
break;
} catch (e) {
// Key not found at this path, try next
}
}
}
if (!sshKeyPath) {
throw new Error('No SSH deploy key found');
}
const sshRepoUrl = `git@github.com:${owner}/${repo}.git`;
const env = {
...process.env,
GIT_SSH_COMMAND: `ssh -i ${sshKeyPath} -o StrictHostKeyChecking=no -o IdentitiesOnly=yes`
};
const { stdout: sshLatestTag } = await execAsync(
`git ls-remote --tags --sort=-version:refname ${sshRepoUrl} | head -n 1 | sed 's/.*refs\\/tags\\///' | sed 's/\\^{}//'`,
{
timeout: 10000,
env: env
}
);
return sshLatestTag.trim().replace('v', '');
} catch (error) {
console.error('SSH Git error:', error.message);
throw error;
}
}
// Check public repository using GitHub API
async checkPublicRepo(owner, repo) {
try {
const httpsRepoUrl = `https://api.github.com/repos/${owner}/${repo}/releases/latest`;
const response = await fetch(httpsRepoUrl, {
method: 'GET',
headers: {
'Accept': 'application/vnd.github.v3+json',
'User-Agent': 'PatchMon-Server/1.2.4'
}
});
if (!response.ok) {
throw new Error(`GitHub API error: ${response.status} ${response.statusText}`);
}
const releaseData = await response.json();
return releaseData.tag_name.replace('v', '');
} catch (error) {
console.error('GitHub API error:', error.message);
throw error;
}
}
// Compare version strings (semantic versioning)
compareVersions(version1, version2) {
const v1parts = version1.split('.').map(Number);
const v2parts = version2.split('.').map(Number);
const maxLength = Math.max(v1parts.length, v2parts.length);
for (let i = 0; i < maxLength; i++) {
const v1part = v1parts[i] || 0;
const v2part = v2parts[i] || 0;
if (v1part > v2part) return 1;
if (v1part < v2part) return -1;
}
return 0;
}
// Get scheduler status
getStatus() {
return {
isRunning: this.isRunning,
checkInterval: this.checkInterval,
nextCheck: this.isRunning ? new Date(Date.now() + this.checkInterval) : null
};
}
}
// Create singleton instance
const updateScheduler = new UpdateScheduler();
module.exports = updateScheduler;

View File

@@ -0,0 +1,499 @@
const jwt = require("jsonwebtoken");
const crypto = require("node:crypto");
const { getPrismaClient } = require("../config/prisma");
const prisma = getPrismaClient();
/**
* Session Manager - Handles secure session management with inactivity timeout
*/
// Configuration
if (!process.env.JWT_SECRET) {
throw new Error("JWT_SECRET environment variable is required");
}
const JWT_SECRET = process.env.JWT_SECRET;
const JWT_EXPIRES_IN = process.env.JWT_EXPIRES_IN || "1h";
const JWT_REFRESH_EXPIRES_IN = process.env.JWT_REFRESH_EXPIRES_IN || "7d";
const TFA_REMEMBER_ME_EXPIRES_IN =
process.env.TFA_REMEMBER_ME_EXPIRES_IN || "30d";
const TFA_MAX_REMEMBER_SESSIONS = parseInt(
process.env.TFA_MAX_REMEMBER_SESSIONS || "5",
10,
);
const TFA_SUSPICIOUS_ACTIVITY_THRESHOLD = parseInt(
process.env.TFA_SUSPICIOUS_ACTIVITY_THRESHOLD || "3",
10,
);
const INACTIVITY_TIMEOUT_MINUTES = parseInt(
process.env.SESSION_INACTIVITY_TIMEOUT_MINUTES || "30",
10,
);
/**
* Generate access token (short-lived)
*/
function generate_access_token(user_id, session_id) {
return jwt.sign({ userId: user_id, sessionId: session_id }, JWT_SECRET, {
expiresIn: JWT_EXPIRES_IN,
});
}
/**
* Generate refresh token (long-lived)
*/
function generate_refresh_token() {
return crypto.randomBytes(64).toString("hex");
}
/**
* Hash token for storage
*/
function hash_token(token) {
return crypto.createHash("sha256").update(token).digest("hex");
}
/**
* Parse expiration string to Date
*/
function parse_expiration(expiration_string) {
const match = expiration_string.match(/^(\d+)([smhd])$/);
if (!match) {
throw new Error("Invalid expiration format");
}
const value = parseInt(match[1], 10);
const unit = match[2];
const now = new Date();
switch (unit) {
case "s":
return new Date(now.getTime() + value * 1000);
case "m":
return new Date(now.getTime() + value * 60 * 1000);
case "h":
return new Date(now.getTime() + value * 60 * 60 * 1000);
case "d":
return new Date(now.getTime() + value * 24 * 60 * 60 * 1000);
default:
throw new Error("Invalid time unit");
}
}
/**
* Generate device fingerprint from request data
*/
function generate_device_fingerprint(req) {
const components = [
req.get("user-agent") || "",
req.get("accept-language") || "",
req.get("accept-encoding") || "",
req.ip || "",
];
// Create a simple hash of device characteristics
const fingerprint = crypto
.createHash("sha256")
.update(components.join("|"))
.digest("hex")
.substring(0, 32); // Use first 32 chars for storage efficiency
return fingerprint;
}
/**
* Check for suspicious activity patterns
*/
async function check_suspicious_activity(
user_id,
_ip_address,
_device_fingerprint,
) {
try {
// Check for multiple sessions from different IPs in short time
const recent_sessions = await prisma.user_sessions.findMany({
where: {
user_id: user_id,
created_at: {
gte: new Date(Date.now() - 24 * 60 * 60 * 1000), // Last 24 hours
},
is_revoked: false,
},
select: {
ip_address: true,
device_fingerprint: true,
created_at: true,
},
});
// Count unique IPs and devices
const unique_ips = new Set(recent_sessions.map((s) => s.ip_address));
const unique_devices = new Set(
recent_sessions.map((s) => s.device_fingerprint),
);
// Flag as suspicious if more than threshold different IPs or devices in 24h
if (
unique_ips.size > TFA_SUSPICIOUS_ACTIVITY_THRESHOLD ||
unique_devices.size > TFA_SUSPICIOUS_ACTIVITY_THRESHOLD
) {
console.warn(
`Suspicious activity detected for user ${user_id}: ${unique_ips.size} IPs, ${unique_devices.size} devices`,
);
return true;
}
return false;
} catch (error) {
console.error("Error checking suspicious activity:", error);
return false;
}
}
/**
* Create a new session for user
*/
async function create_session(
user_id,
ip_address,
user_agent,
remember_me = false,
req = null,
) {
try {
const session_id = crypto.randomUUID();
const refresh_token = generate_refresh_token();
const access_token = generate_access_token(user_id, session_id);
// Generate device fingerprint if request is available
const device_fingerprint = req ? generate_device_fingerprint(req) : null;
// Check for suspicious activity
if (device_fingerprint) {
const is_suspicious = await check_suspicious_activity(
user_id,
ip_address,
device_fingerprint,
);
if (is_suspicious) {
console.warn(
`Suspicious activity detected for user ${user_id}, session creation may be restricted`,
);
}
}
// Check session limits for remember me
if (remember_me) {
const existing_remember_sessions = await prisma.user_sessions.count({
where: {
user_id: user_id,
tfa_remember_me: true,
is_revoked: false,
expires_at: { gt: new Date() },
},
});
// Limit remember me sessions per user
if (existing_remember_sessions >= TFA_MAX_REMEMBER_SESSIONS) {
throw new Error(
"Maximum number of remembered devices reached. Please revoke an existing session first.",
);
}
}
// Use longer expiration for remember me sessions
const expires_at = remember_me
? parse_expiration(TFA_REMEMBER_ME_EXPIRES_IN)
: parse_expiration(JWT_REFRESH_EXPIRES_IN);
// Calculate TFA bypass until date for remember me sessions
const tfa_bypass_until = remember_me
? parse_expiration(TFA_REMEMBER_ME_EXPIRES_IN)
: null;
// Store session in database
await prisma.user_sessions.create({
data: {
id: session_id,
user_id: user_id,
refresh_token: hash_token(refresh_token),
access_token_hash: hash_token(access_token),
ip_address: ip_address || null,
user_agent: user_agent || null,
device_fingerprint: device_fingerprint,
last_login_ip: ip_address || null,
last_activity: new Date(),
expires_at: expires_at,
tfa_remember_me: remember_me,
tfa_bypass_until: tfa_bypass_until,
login_count: 1,
},
});
return {
session_id,
access_token,
refresh_token,
expires_at,
tfa_bypass_until,
};
} catch (error) {
console.error("Error creating session:", error);
throw error;
}
}
/**
* Validate session and check for inactivity timeout
*/
async function validate_session(session_id, access_token) {
try {
const session = await prisma.user_sessions.findUnique({
where: { id: session_id },
include: { users: true },
});
if (!session) {
return { valid: false, reason: "Session not found" };
}
// Check if session is revoked
if (session.is_revoked) {
return { valid: false, reason: "Session revoked" };
}
// Check if session has expired
if (new Date() > session.expires_at) {
await revoke_session(session_id);
return { valid: false, reason: "Session expired" };
}
// Check for inactivity timeout
const inactivity_threshold = new Date(
Date.now() - INACTIVITY_TIMEOUT_MINUTES * 60 * 1000,
);
if (session.last_activity < inactivity_threshold) {
await revoke_session(session_id);
return {
valid: false,
reason: "Session inactive",
message: `Session timed out after ${INACTIVITY_TIMEOUT_MINUTES} minutes of inactivity`,
};
}
// Validate access token hash (optional security check). Skip when no
// token was supplied - refresh_access_token calls validate_session with
// an empty string and must not fail on a hash mismatch
if (session.access_token_hash && access_token) {
const provided_hash = hash_token(access_token);
if (session.access_token_hash !== provided_hash) {
return { valid: false, reason: "Token mismatch" };
}
}
// Check if user is still active
if (!session.users.is_active) {
await revoke_session(session_id);
return { valid: false, reason: "User inactive" };
}
return {
valid: true,
session,
user: session.users,
};
} catch (error) {
console.error("Error validating session:", error);
return { valid: false, reason: "Validation error" };
}
}
/**
* Update session activity timestamp
*/
async function update_session_activity(session_id) {
try {
await prisma.user_sessions.update({
where: { id: session_id },
data: { last_activity: new Date() },
});
return true;
} catch (error) {
console.error("Error updating session activity:", error);
return false;
}
}
/**
* Refresh access token using refresh token
*/
async function refresh_access_token(refresh_token) {
try {
const hashed_token = hash_token(refresh_token);
const session = await prisma.user_sessions.findUnique({
where: { refresh_token: hashed_token },
include: { users: true },
});
if (!session) {
return { success: false, error: "Invalid refresh token" };
}
// Validate session state; an empty access token skips the hash check in validate_session
const validation = await validate_session(session.id, "");
if (!validation.valid) {
return { success: false, error: validation.reason };
}
// Generate new access token
const new_access_token = generate_access_token(session.user_id, session.id);
// Update access token hash
await prisma.user_sessions.update({
where: { id: session.id },
data: {
access_token_hash: hash_token(new_access_token),
last_activity: new Date(),
},
});
return {
success: true,
access_token: new_access_token,
user: session.users,
};
} catch (error) {
console.error("Error refreshing access token:", error);
return { success: false, error: "Token refresh failed" };
}
}
/**
* Revoke a session
*/
async function revoke_session(session_id) {
try {
await prisma.user_sessions.update({
where: { id: session_id },
data: { is_revoked: true },
});
return true;
} catch (error) {
console.error("Error revoking session:", error);
return false;
}
}
/**
* Revoke all sessions for a user
*/
async function revoke_all_user_sessions(user_id) {
try {
await prisma.user_sessions.updateMany({
where: { user_id: user_id },
data: { is_revoked: true },
});
return true;
} catch (error) {
console.error("Error revoking user sessions:", error);
return false;
}
}
/**
* Clean up expired sessions (should be run periodically)
*/
async function cleanup_expired_sessions() {
try {
const result = await prisma.user_sessions.deleteMany({
where: {
OR: [{ expires_at: { lt: new Date() } }, { is_revoked: true }],
},
});
console.log(`Cleaned up ${result.count} expired or revoked sessions`);
return result.count;
} catch (error) {
console.error("Error cleaning up sessions:", error);
return 0;
}
}
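// Example (hypothetical wiring, not part of this module): invoke the cleanup
// periodically from app startup, e.g.
//   setInterval(() => cleanup_expired_sessions().catch(console.error), 60 * 60 * 1000);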
/**
* Get active sessions for a user
*/
async function get_user_sessions(user_id) {
try {
return await prisma.user_sessions.findMany({
where: {
user_id: user_id,
is_revoked: false,
expires_at: { gt: new Date() },
},
select: {
id: true,
ip_address: true,
user_agent: true,
last_activity: true,
created_at: true,
expires_at: true,
tfa_remember_me: true,
tfa_bypass_until: true,
},
orderBy: { last_activity: "desc" },
});
} catch (error) {
console.error("Error getting user sessions:", error);
return [];
}
}
/**
* Check if TFA is bypassed for a session
*/
async function is_tfa_bypassed(session_id) {
try {
const session = await prisma.user_sessions.findUnique({
where: { id: session_id },
select: {
tfa_remember_me: true,
tfa_bypass_until: true,
is_revoked: true,
expires_at: true,
},
});
if (!session) {
return false;
}
// Check if session is still valid
if (session.is_revoked || new Date() > session.expires_at) {
return false;
}
// Check if TFA is bypassed and still within bypass period
if (session.tfa_remember_me && session.tfa_bypass_until) {
return new Date() < session.tfa_bypass_until;
}
return false;
} catch (error) {
console.error("Error checking TFA bypass:", error);
return false;
}
}
module.exports = {
create_session,
validate_session,
update_session_activity,
refresh_access_token,
revoke_session,
revoke_all_user_sessions,
cleanup_expired_sessions,
get_user_sessions,
is_tfa_bypassed,
generate_device_fingerprint,
check_suspicious_activity,
generate_access_token,
INACTIVITY_TIMEOUT_MINUTES,
};

17
biome.json Normal file
View File

@@ -0,0 +1,17 @@
{
"$schema": "https://biomejs.dev/schemas/2.2.4/schema.json",
"vcs": {
"enabled": true,
"clientKind": "git",
"useIgnoreFile": true
},
"formatter": {
"enabled": true
},
"linter": {
"enabled": true
},
"assist": {
"enabled": true
}
}

BIN
dashboard.jpeg Normal file

Binary file not shown (new image, 104 KiB).

325
docker/README.md Normal file
View File

@@ -0,0 +1,325 @@
# PatchMon Docker
## Overview
PatchMon is a containerised application that monitors system patches and updates. The application consists of four main services:
- **Database**: PostgreSQL 17
- **Redis**: Redis 7 for BullMQ job queues and caching
- **Backend**: Node.js API server
- **Frontend**: React application served via NGINX
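A compose deployment of these services has roughly this shape (an abridged sketch; the full file is linked under Quick Start below):
```yaml
services:
  database:
    image: postgres:17
  redis:
    image: redis:7
  backend:
    image: ghcr.io/patchmon/patchmon-backend:latest
  frontend:
    image: ghcr.io/patchmon/patchmon-frontend:latest
```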
## Images
- **Backend**: [ghcr.io/patchmon/patchmon-backend](https://github.com/patchmon/patchmon.net/pkgs/container/patchmon-backend)
- **Frontend**: [ghcr.io/patchmon/patchmon-frontend](https://github.com/patchmon/patchmon.net/pkgs/container/patchmon-frontend)
### Tags
- `latest`: The latest stable release of PatchMon
- `x.y.z`: Full version tags (e.g. `1.2.3`) - Use this for exact version pinning.
- `x.y`: Minor version tags (e.g. `1.2`) - Use this to get the latest patch release in a minor version series.
- `x`: Major version tags (e.g. `1`) - Use this to get the latest minor and patch release in a major version series.
- `edge`: The latest development build with the most recent features and fixes. This tag may be unstable and is intended only for testing and development.
These tags are available for both backend and frontend images as they are versioned together.
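For example, pinning both images to a minor series (illustrative tag) keeps you on its latest patch release:
```yaml
services:
  backend:
    image: ghcr.io/patchmon/patchmon-backend:1.2
  frontend:
    image: ghcr.io/patchmon/patchmon-frontend:1.2
```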
## Quick Start
### Production Deployment
1. Download the [Docker Compose file](docker-compose.yml)
2. Set a database password in the file where it says:
```yaml
environment:
POSTGRES_PASSWORD: # CREATE A STRONG PASSWORD AND PUT IT HERE
```
3. Update the corresponding `DATABASE_URL` with your password in the backend service where it says:
```yaml
environment:
DATABASE_URL: postgresql://patchmon_user:REPLACE_YOUR_POSTGRES_PASSWORD_HERE@database:5432/patchmon_db
```
4. Set a Redis password in the Redis service command where it says:
```yaml
command: redis-server --requirepass your-redis-password-here
```
Note: The Redis service uses a hardcoded password in the command line for better reliability and to avoid environment variable parsing issues.
5. Update the corresponding `REDIS_PASSWORD` in the backend service where it says:
```yaml
environment:
REDIS_PASSWORD: your-redis-password-here
```
6. Generate a strong JWT secret. You can do this like so:
```bash
openssl rand -hex 64
```
7. Set a JWT secret in the backend service where it says:
```yaml
environment:
JWT_SECRET: # CREATE A STRONG SECRET AND PUT IT HERE
```
8. Configure environment variables (see [Configuration](#configuration) section)
9. Start the application:
```bash
docker compose up -d
```
10. Access the application at `http://localhost:3000`
## Updating
By default, the compose file uses the `latest` tag for both backend and frontend images.
This means you can update PatchMon to the latest version as easily as:
```bash
docker compose up -d --pull always
```
This command will:
- Pull the latest images from the registry
- Recreate containers with updated images
- Maintain your data and configuration
### Version-Specific Updates
If you'd like to pin your Docker deployment of PatchMon to a specific version, set explicit image tags in the compose file.
Updating to a new version then requires changing those tags manually:
1. Update the image tags in `docker-compose.yml`. For example:
```yaml
services:
backend:
image: ghcr.io/patchmon/patchmon-backend:1.2.3 # Update version here
...
frontend:
image: ghcr.io/patchmon/patchmon-frontend:1.2.3 # Update version here
...
```
2. Then run the update command:
```bash
docker compose up -d --pull always
```
> [!TIP]
> Check the [releases page](https://github.com/PatchMon/PatchMon/releases) for version-specific changes and migration notes.
## Configuration
### Environment Variables
#### Database Service
| Variable | Description | Default |
| ------------------- | ----------------- | ---------------- |
| `POSTGRES_DB` | Database name | `patchmon_db` |
| `POSTGRES_USER` | Database user | `patchmon_user` |
| `POSTGRES_PASSWORD` | Database password | **MUST BE SET!** |
#### Redis Service
| Variable | Description | Default |
| -------------- | ------------------ | ---------------- |
| `REDIS_PASSWORD` | Redis password | **MUST BE SET!** |
> [!NOTE]
> The Redis service uses a hardcoded password in the command line (`redis-server --requirepass your-password`) instead of environment variables or configuration files. This approach eliminates parsing issues and provides better reliability. The password must be set in both the Redis command and the backend service environment variables.
#### Backend Service
##### Database Configuration
| Variable | Description | Default |
| -------------------------- | ---------------------------------------------------- | ------------------------------------------------ |
| `DATABASE_URL` | PostgreSQL connection string | **MUST BE UPDATED WITH YOUR POSTGRES_PASSWORD!** |
| `PM_DB_CONN_MAX_ATTEMPTS` | Maximum database connection attempts | `30` |
| `PM_DB_CONN_WAIT_INTERVAL` | Wait interval between connection attempts in seconds | `2` |
##### Redis Configuration
| Variable | Description | Default |
| --------------- | ------------------------------ | ------- |
| `REDIS_HOST` | Redis server hostname | `redis` |
| `REDIS_PORT` | Redis server port | `6379` |
| `REDIS_PASSWORD` | Redis authentication password | **MUST BE UPDATED WITH YOUR REDIS_PASSWORD!** |
| `REDIS_DB` | Redis database number | `0` |
##### Authentication & Security
| Variable | Description | Default |
| ------------------------------------ | --------------------------------------------------------- | ---------------- |
| `JWT_SECRET` | JWT signing secret - Generate with `openssl rand -hex 64` | **MUST BE SET!** |
| `JWT_EXPIRES_IN` | JWT token expiration time | `1h` |
| `JWT_REFRESH_EXPIRES_IN` | JWT refresh token expiration time | `7d` |
| `SESSION_INACTIVITY_TIMEOUT_MINUTES` | Session inactivity timeout in minutes | `30` |
| `DEFAULT_USER_ROLE` | Default role for new users | `user` |
##### Server & Network Configuration
| Variable | Description | Default |
| ----------------- | ----------------------------------------------------------------------------------------------- | ----------------------- |
| `PORT` | Backend API port | `3001` |
| `SERVER_PROTOCOL` | Frontend server protocol (`http` or `https`) | `http` |
| `SERVER_HOST` | Frontend server host | `localhost` |
| `SERVER_PORT` | Frontend server port | `3000` |
| `CORS_ORIGIN` | CORS origin URL | `http://localhost:3000` |
| `ENABLE_HSTS` | Enable HTTP Strict Transport Security | `true` |
| `TRUST_PROXY` | Trust proxy headers - See [Express.js docs](https://expressjs.com/en/guide/behind-proxies.html) | `true` |
##### Rate Limiting
| Variable | Description | Default |
| ---------------------------- | --------------------------------------------------- | -------- |
| `RATE_LIMIT_WINDOW_MS` | Rate limiting window in milliseconds | `900000` |
| `RATE_LIMIT_MAX` | Maximum requests per window | `5000` |
| `AUTH_RATE_LIMIT_WINDOW_MS` | Authentication rate limiting window in milliseconds | `600000` |
| `AUTH_RATE_LIMIT_MAX` | Maximum authentication requests per window | `500` |
| `AGENT_RATE_LIMIT_WINDOW_MS` | Agent API rate limiting window in milliseconds | `60000` |
| `AGENT_RATE_LIMIT_MAX` | Maximum agent requests per window | `1000` |
##### Logging
| Variable | Description | Default |
| ---------------- | ------------------------------------------------ | ------- |
| `LOG_LEVEL` | Logging level (`debug`, `info`, `warn`, `error`) | `info` |
| `ENABLE_LOGGING` | Enable application logging | `true` |
#### Frontend Service
| Variable | Description | Default |
| -------------- | ------------------------ | --------- |
| `BACKEND_HOST` | Backend service hostname | `backend` |
| `BACKEND_PORT` | Backend service port | `3001` |
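Pulling the key settings together, a minimal backend `environment` block might look like this sketch (angle-bracket values are placeholders you must replace):
```yaml
services:
  backend:
    environment:
      DATABASE_URL: postgresql://patchmon_user:<postgres-password>@database:5432/patchmon_db
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_PASSWORD: <redis-password>
      JWT_SECRET: <output of openssl rand -hex 64>
      SERVER_PROTOCOL: http
      SERVER_HOST: localhost
      SERVER_PORT: 3000
      CORS_ORIGIN: http://localhost:3000
```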
### Volumes
The compose file creates three Docker volumes:
* `postgres_data`: PostgreSQL's data directory.
* `redis_data`: Redis's data directory.
* `agent_files`: PatchMon's agent files.
If you wish to bind any of these container paths to a host path rather than a Docker volume, you can do so in the Docker Compose file.
> [!TIP]
> The backend container runs as user & group ID 1000. If you plan to re-bind the agent files directory, ensure that the same user and/or group ID has permission to write to the host path to which it's bound.
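For example, a sketch of re-binding the agent files path (`/app/agents` in the backend container) to a hypothetical host directory:
```yaml
services:
  backend:
    volumes:
      - /srv/patchmon/agents:/app/agents # host path must be writable by UID/GID 1000
```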
---
# Development
This section is for developers who want to contribute to PatchMon or run it in development mode.
## Development Setup
For development with live reload and source code mounting:
1. Clone the repository:
```bash
git clone https://github.com/PatchMon/PatchMon.git
cd PatchMon
```
2. Start development environment:
```bash
docker compose -f docker/docker-compose.dev.yml up
```
_See [Development Commands](#development-commands) for more options._
3. Access the application:
- Frontend: `http://localhost:3000`
- Backend API: `http://localhost:3001`
- Database: `localhost:5432`
- Redis: `localhost:6379`
## Development Docker Compose
The development compose file (`docker/docker-compose.dev.yml`):
- Builds images locally from source using development targets
- Enables hot reload with Docker Compose watch functionality
- Exposes database and backend ports for testing and development
- Mounts source code directly into containers for live development
- Supports debugging with enhanced logging
## Building Images Locally
Both Dockerfiles use multi-stage builds with separate development and production targets:
```bash
# Build development images
docker build -f docker/backend.Dockerfile --target development -t patchmon-backend:dev .
docker build -f docker/frontend.Dockerfile --target development -t patchmon-frontend:dev .
# Build production images (default target)
docker build -f docker/backend.Dockerfile -t patchmon-backend:latest .
docker build -f docker/frontend.Dockerfile -t patchmon-frontend:latest .
```
## Development Commands
### Hot Reload Development
```bash
# Attached, live log output, services stopped on Ctrl+C
docker compose -f docker/docker-compose.dev.yml up
# Attached with Docker Compose watch for hot reload
docker compose -f docker/docker-compose.dev.yml up --watch
# Detached
docker compose -f docker/docker-compose.dev.yml up -d
# Quiet, no log output, with Docker Compose watch for hot reload
docker compose -f docker/docker-compose.dev.yml watch
```
### Rebuild Services
```bash
# Rebuild specific service
docker compose -f docker/docker-compose.dev.yml up -d --build backend
# Rebuild all services
docker compose -f docker/docker-compose.dev.yml up -d --build
```
### Development Ports
The development setup exposes additional ports for debugging:
- **Database**: `5432` - Direct PostgreSQL access
- **Redis**: `6379` - Direct Redis access
- **Backend**: `3001` - API server with development features
- **Frontend**: `3000` - React development server with hot reload
## Development Workflow
1. **Initial Setup**: Clone repository and start development environment
```bash
git clone https://github.com/PatchMon/PatchMon.git
cd PatchMon
docker compose -f docker/docker-compose.dev.yml up -d --build
```
2. **Hot Reload Development**: Use Docker Compose watch for automatic reload
```bash
docker compose -f docker/docker-compose.dev.yml up --watch --build
```
3. **Code Changes**:
- **Frontend/Backend Source**: Files are synced automatically with watch mode
- **Package.json Changes**: Triggers automatic service rebuild
- **Prisma Schema Changes**: Backend service restarts automatically
4. **Database Access**: Connect database client directly to `localhost:5432`
5. **Redis Access**: Connect Redis client directly to `localhost:6379`
6. **Debug**: If started with `docker compose [...] up -d` or `docker compose [...] watch`, check logs manually:
```bash
docker compose -f docker/docker-compose.dev.yml logs -f
```
Otherwise logs are shown automatically in attached modes (`up`, `up --watch`).
### Features in Development Mode
- **Hot Reload**: Automatic code synchronization and service restarts
- **Enhanced Logging**: Detailed logs for debugging
- **Direct Access**: Exposed ports for database, Redis, and API debugging
- **Health Checks**: Built-in health monitoring for services
- **Volume Persistence**: Development data persists between restarts

89
docker/backend.Dockerfile Normal file
View File

@@ -0,0 +1,89 @@
# Development target
FROM node:lts-alpine AS development
ENV NODE_ENV=development \
NPM_CONFIG_UPDATE_NOTIFIER=false \
ENABLE_LOGGING=true \
LOG_LEVEL=info \
PM_LOG_TO_CONSOLE=true \
PORT=3001
RUN apk add --no-cache openssl tini curl libc6-compat
USER node
WORKDIR /app
COPY --chown=node:node package*.json ./
COPY --chown=node:node backend/ ./backend/
COPY --chown=node:node agents ./agents_backup
COPY --chown=node:node agents ./agents
COPY --chmod=755 docker/backend.docker-entrypoint.sh ./entrypoint.sh
WORKDIR /app/backend
RUN npm ci --ignore-scripts && npx prisma generate
EXPOSE 3001
VOLUME [ "/app/agents" ]
HEALTHCHECK --interval=10s --timeout=5s --start-period=30s --retries=5 \
CMD curl -f http://localhost:3001/health || exit 1
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["/app/entrypoint.sh"]
# Builder stage for production
FROM node:lts-alpine AS builder
RUN apk add --no-cache openssl
WORKDIR /app
COPY --chown=node:node package*.json ./
COPY --chown=node:node backend/ ./backend/
WORKDIR /app/backend
RUN npm ci --ignore-scripts &&\
npx prisma generate &&\
npm prune --omit=dev &&\
npm cache clean --force
# Production stage
FROM node:lts-alpine
ENV NODE_ENV=production \
NPM_CONFIG_UPDATE_NOTIFIER=false \
ENABLE_LOGGING=true \
LOG_LEVEL=info \
PM_LOG_TO_CONSOLE=true \
PORT=3001 \
JWT_EXPIRES_IN=1h \
JWT_REFRESH_EXPIRES_IN=7d \
SESSION_INACTIVITY_TIMEOUT_MINUTES=30
RUN apk add --no-cache openssl tini curl libc6-compat
USER node
WORKDIR /app
COPY --from=builder /app/backend ./backend
COPY --from=builder /app/node_modules ./node_modules
COPY --chown=node:node agents ./agents_backup
COPY --chown=node:node agents ./agents
COPY --chmod=755 docker/backend.docker-entrypoint.sh ./entrypoint.sh
WORKDIR /app/backend
EXPOSE 3001
VOLUME [ "/app/agents" ]
HEALTHCHECK --interval=10s --timeout=5s --start-period=30s --retries=5 \
CMD curl -f http://localhost:3001/health || exit 1
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["/app/entrypoint.sh"]

Some files were not shown because too many files have changed in this diff.