Mirror of https://github.com/DumbWareio/DumbDrop.git (synced 2025-10-22 23:31:57 +00:00)
feat: Complete Application Infrastructure and Security Overhaul (#28)
Chores & Configuration
• Enhanced development setup: optimized Dockerfile, refined scripts, and improved .gitignore.
• Updated docker-compose for better dev/prod separation.
• Improved documentation in README and source files.

Features & Enhancements
• Refactored project structure with modular architecture.
• Improved testing infrastructure and integration tests.
• Enhanced file upload logic, client-side handling, and API routes.
• Implemented robust server shutdown, rate limiting, and cleanup mechanisms.
• Improved upload progress tracking with UI enhancements.
• Strengthened security in PIN authentication and cookie handling.

Refactors & Fixes
• Cleaned up test infrastructure, logging, and error handling.
• Simplified API route paths and improved middleware.
• Fixed incorrect total storage size reporting.
• Optimized logging verbosity based on environment.

Documentation
• Expanded project documentation and comments for clarity.
Committed via GitHub. Parent: 2ec69ba26e. Commit: 22f79f830b.
.cursorrules (new file, 130 lines)
@@ -0,0 +1,130 @@
/**
 * Cursor rules for maintaining code quality and consistency
 */

{
  "rules": {
    "file-header-docs": {
      "description": "All source files must have a header comment explaining their purpose",
      "pattern": "src/**/*.js",
      "check": {
        "type": "regex",
        "value": "^/\\*\\*\\n \\* [^\\n]+\\n \\* [^\\n]+\\n \\* [^\\n]+\\n \\*/\\n",
        "message": "File must start with a header comment block (3 lines) explaining its purpose"
      }
    }
  }
}

# Project Principles

# Code Philosophy
- Keep code simple, smart, and follow best practices
- Don't over-engineer for the sake of engineering
- Use standard conventions and patterns
- Write human-readable code
- Keep it simple so the app just works
- Follow the principle: "Make it work, make it right, make it fast"
- Comments should explain "why" behind the code in more complex functions
- Overcommented code is better than undercommented code

# Commit Conventions
- Use Conventional Commits format:
  - feat: new features
  - fix: bug fixes
  - docs: documentation changes
  - style: formatting, missing semi colons, etc.
  - refactor: code changes that neither fix bugs nor add features
  - test: adding or modifying tests
  - chore: updating build tasks, package manager configs, etc.
- Each commit should be atomic and focused
- Write clear, descriptive commit messages

# Project Structure

# Root Directory
- Keep root directory clean with only essential files
- Production configuration files in root:
  - docker-compose.yml
  - Dockerfile
  - .env.example
  - package.json
  - README.md

# Source Code (/src)
- All application source code in /src directory
- app.js: Application setup and configuration
- server.js: Server entry point
- routes/: Route handlers
- middleware/: Custom middleware
- utils/: Helper functions and utilities
- models/: Data models (if applicable)
- services/: Business logic

# Development
- All development configurations in /dev directory
- Development specific files:
  - /dev/docker-compose.dev.yml
  - /dev/.env.dev.example
  - /dev/README.md (development setup instructions)

# Static Assets and Uploads
- Static assets in /public directory
- Upload directories:
  - /uploads (production)
  - /local_uploads (local development)

# Testing
- Tests are mandatory for all new features
- Test files location:
  - Unit tests: __tests__/unit/
  - Integration tests: __tests__/integration/
  - E2E tests: __tests__/e2e/
- Test naming convention:
  - Unit tests: [feature].test.js
  - Integration tests: [feature].integration.test.js
  - E2E tests: [feature].e2e.test.js
- Test coverage requirements:
  - Minimum 80% coverage for new features
  - Must include happy path and error cases
  - API endpoints must have integration tests

# Documentation
- Main README.md in root focuses on production deployment
- Development documentation in /dev/README.md
- Code must be self-documenting with clear naming
- Complex logic must include comments explaining "why" not "what"
- JSDoc comments for public functions and APIs

# Docker Configuration
- Use environment-specific .dockerignore files:
  - .dockerignore: Production defaults (most restrictive)
  - dev/.dockerignore: Development-specific (allows test/dev files)
- Production .dockerignore should exclude:
  - All test files and configurations
  - Development-only dependencies
  - Documentation and non-essential files
  - Local development configurations
- Development .dockerignore should:
  - Allow test files and configurations
  - Allow development dependencies
  - Still exclude node_modules and sensitive files
  - Keep Docker-specific files excluded
- Docker Compose configurations:
  - Production: docker-compose.yml in root
  - Development: docker-compose.dev.yml in /dev
- Use BuildKit features when needed
- Document any special build arguments
- Multi-stage builds:
  - Use appropriate base images
  - Minimize final image size
  - Separate development and production stages
  - Use specific version tags for base images

# Code Style
- Follow ESLint and Prettier configurations
- Use meaningful variable and function names
- Keep functions small and focused
- Maximum line length: 100 characters
- Use modern JavaScript features appropriately
- Prefer clarity over cleverness
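For illustration, a `src/` module header that satisfies the `file-header-docs` regex above would look like this (a hypothetical example, not part of the commit):

```js
/**
 * Manages chunked upload sessions.
 * Tracks bytes received per upload ID.
 * Cleans up stale sessions on a timer.
 */
```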
.dockerignore
@@ -1,7 +1,56 @@
node_modules
npm-debug.log
uploads/*
.env
# Version control
.git
.gitignore

# Dependencies
node_modules
npm-debug.log
yarn-debug.log
yarn-error.log

# Environment variables
.env
.env.*
!.env.example

# Development
.vscode
.idea
*.swp
*.swo

# Build outputs
dist
build
coverage

# Local uploads (development only)
local_uploads

# Logs
logs
*.log

# System files
.DS_Store
Thumbs.db

# Docker
.docker
docker-compose*.yml
Dockerfile*

# Documentation
README.md
CHANGELOG.md
docs

# Keep test files and configs for development builds
# __tests__
# jest.config.js
# *.test.js
# *.spec.js
# .eslintrc*
# .prettierrc*
.editorconfig
nodemon.json
.eslintignore (new file, 14 lines)
@@ -0,0 +1,14 @@
# Dependencies
node_modules/

# Upload directories
local_uploads/
uploads/
test_uploads/

# Build directories
dist/
build/

# Coverage directory
coverage/
.eslintrc.json (new file, 25 lines)
@@ -0,0 +1,25 @@
{
  "env": {
    "node": true,
    "es2022": true
  },
  "extends": [
    "eslint:recommended",
    "plugin:node/recommended",
    "prettier"
  ],
  "parserOptions": {
    "ecmaVersion": 2022
  },
  "rules": {
    "node/exports-style": ["error", "module.exports"],
    "node/file-extension-in-import": ["error", "always"],
    "node/prefer-global/buffer": ["error", "always"],
    "node/prefer-global/console": ["error", "always"],
    "node/prefer-global/process": ["error", "always"],
    "node/prefer-global/url-search-params": ["error", "always"],
    "node/prefer-global/url": ["error", "always"],
    "node/prefer-promises/dns": "error",
    "node/prefer-promises/fs": "error"
  }
}
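As a quick illustration of two of these rules (a hypothetical snippet, not from the commit): `node/prefer-promises/fs` flags callback-style `fs` calls, and `node/exports-style` requires `module.exports` over bare `exports`:

```js
const fs = require('fs');

// node/prefer-promises/fs would flag the callback API:
//   fs.readFile('config.json', 'utf8', (err, data) => { /* ... */ });
// and prefers the promise-based API instead:
async function readConfig() {
  return fs.promises.readFile('config.json', 'utf8');
}

// node/exports-style requires module.exports over `exports.readConfig = ...`
module.exports = { readConfig };
```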
.gitignore (vendored, 52 lines changed)
@@ -149,5 +149,55 @@ Thumbs.db
# Development
dev/*
!dev/docker-compose.dev.yml
!dev/Dockerfile.dev
!dev/.dockerignore
!dev/dev.sh
!dev/README.md
!dev/README.md

# Dependencies
node_modules/
/.pnp
.pnp.js

# Testing
/coverage
.nyc_output

# Production
/build
/dist

# Development
.env
.env.local
.env.development.local
.env.test.local
.env.production.local
dev/.env.dev

# Debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# IDE
.idea/
.vscode/
*.swp
*.swo

# OS
.DS_Store
Thumbs.db

# Application specific
/uploads/*
/local_uploads/*
!uploads/.gitkeep
!local_uploads/.gitkeep

# Misc
*.log
.env.*
!.env.example
!dev/.env.dev.example
.prettierrc (new file, 9 lines)
@@ -0,0 +1,9 @@
{
  "semi": true,
  "trailingComma": "es5",
  "singleQuote": true,
  "printWidth": 100,
  "tabWidth": 2,
  "useTabs": false,
  "endOfLine": "lf"
}
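Concretely, these settings produce output like the following (a hypothetical before/after, not from the commit):

```js
// Before: const greet = (name) => { return "Hi, " + name }
// After `prettier --write` with the config above:
const greet = (name) => {
  return 'Hi, ' + name; // singleQuote, semi: true, tabWidth: 2
};
```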
Dockerfile (56 lines changed)
@@ -1,26 +1,64 @@
FROM node:18-alpine
# Base stage for shared configurations
FROM node:20-alpine as base

# Install python and create virtual environment
# Install python and create virtual environment with minimal dependencies
RUN apk add --no-cache python3 py3-pip && \
    python3 -m venv /opt/venv
    python3 -m venv /opt/venv && \
    rm -rf /var/cache/apk/*

# Activate virtual environment and install apprise
RUN . /opt/venv/bin/activate && \
    pip install --no-cache-dir apprise
    pip install --no-cache-dir apprise && \
    find /opt/venv -type d -name "__pycache__" -exec rm -r {} +

# Add virtual environment to PATH
ENV PATH="/opt/venv/bin:$PATH"

WORKDIR /app
WORKDIR /usr/src/app

# Dependencies stage
FROM base as deps

COPY package*.json ./
RUN npm ci --only=production && \
    # Remove npm cache
    npm cache clean --force

RUN npm install
# Development stage
FROM deps as development
ENV NODE_ENV=development

COPY . .
# Install dev dependencies
RUN npm install && \
    npm cache clean --force

RUN mkdir -p uploads
# Create upload directories
RUN mkdir -p uploads local_uploads

# Copy source with specific paths to avoid unnecessary files
COPY src/ ./src/
COPY public/ ./public/
COPY __tests__/ ./__tests__/
COPY dev/ ./dev/
COPY .eslintrc.json .eslintignore ./

# Expose port
EXPOSE 3000

CMD ["node", "server.js"]
CMD ["npm", "run", "dev"]

# Production stage
FROM deps as production
ENV NODE_ENV=production

# Create upload directory
RUN mkdir -p uploads

# Copy only necessary source files
COPY src/ ./src/
COPY public/ ./public/

# Expose port
EXPOSE 3000

CMD ["npm", "start"]
README.md (93 lines changed)
@@ -2,12 +2,26 @@

A stupid simple file upload application that provides a clean, modern interface for dragging and dropping files. Built with Node.js and vanilla JavaScript.

No auth (unless you want it now!), no storage, no nothing. Just a simple file uploader to drop dumb files into a dumb folder.

## Table of Contents
- [Quick Start](#quick-start)
- [Features](#features)
- [Configuration](#configuration)
- [Security](#security)
- [Development](#development)
- [Technical Details](#technical-details)
- [Contributing](#contributing)
- [License](#license)

## Quick Start

### Prerequisites
- Docker (recommended)
- Node.js >=20.0.0 (for local development)

### Option 1: Docker (For Dummies)
```bash
# Pull and run with one command
```
@@ -77,16 +91,18 @@ docker run -p 3000:3000 -v "${PWD}\local_uploads:/app/uploads" dumbwareio/dumbdr

## Features

- Drag and drop file uploads
- Multiple file selection
- Clean, responsive UI
- File size display
- Docker support
- Dark Mode toggle
- Configurable file size limits
- Drag and Drop Directory Support (Maintains file structure in upload)
- Optional PIN protection (4-10 digits) with secure validation
- Configurable notifications via Apprise
- 🚀 Drag and drop file uploads
- 📁 Multiple file selection
- 🎨 Clean, responsive UI with Dark Mode
- 📦 Docker support with easy configuration
- 📂 Directory upload support (maintains structure)
- 🔒 Optional PIN protection
- 📱 Mobile-friendly interface
- 🔔 Configurable notifications via Apprise
- ⚡ Zero dependencies on client-side
- 🛡️ Built-in security features
- 💾 Configurable file size limits
- 🎯 File extension filtering

## Configuration
@@ -136,27 +152,50 @@ Both {size} and {storage} use the same formatting rules based on APPRISE_SIZE_UN

- Customizable notification messages with filename templating
- Optional - disabled if no APPRISE_URL is set

## Security Features
## Security

### Features
- Variable-length PIN support (4-10 digits)
- Constant-time PIN comparison to prevent timing attacks
- Automatic input sanitization
- Secure PIN validation middleware
- No PIN storage in browser (memory only)
- Rate Limiting to prevent brute force attacks
- Optional file extension filtering

## Development

Want to contribute or develop locally? Check out our [Development Guide](dev/README.md) - it's stupid simple, just the way we like it! If you're writing complex code to solve a simple problem, you're probably doing it wrong. Keep it dumb, keep it simple.
- Constant-time PIN comparison
- Input sanitization
- Rate limiting
- File extension filtering
- No client-side PIN storage
- Secure file handling
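The PIN flow is cookie-based: the client submits the PIN once and the server answers by setting an httpOnly cookie, so nothing PIN-related persists in client storage. A minimal client-side sketch (hypothetical; the shipped public/ scripts are suppressed in this diff, but the endpoint matches the `/api/verify-pin` route in the removed server.js shown later):

```js
// Submit the PIN once; on success the server sets an httpOnly cookie,
// so later requests are authorized without storing the PIN in the browser.
async function verifyPin(pin) {
  const res = await fetch('/api/verify-pin', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'same-origin', // send/receive the DUMBDROP_PIN cookie
    body: JSON.stringify({ pin }),
  });
  if (!res.ok) {
    const body = await res.json();
    throw new Error(body.error || 'PIN rejected');
  }
}
```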
## Technical Details

- Backend: Node.js with Express
- Frontend: Vanilla JavaScript with modern drag-and-drop API
- File handling: Chunked file uploads with configurable size limits
- Security: Optional PIN protection for uploads
- Containerization: Docker with automated builds via GitHub Actions
### Stack
- **Backend**: Node.js (>=20.0.0) with Express
- **Frontend**: Vanilla JavaScript (ES6+)
- **Container**: Docker with multi-stage builds
- **Security**: Express security middleware
- **Upload**: Chunked file handling via Multer
- **Notifications**: Apprise integration

### Dependencies
- express: Web framework
- multer: File upload handling
- apprise: Notification system
- cors: Cross-origin resource sharing
- dotenv: Environment configuration
- express-rate-limit: Rate limiting
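The chunked upload mentioned above is a two-step protocol: `POST /upload/init` registers the file and returns an `uploadId`, then the client streams 5 MB slices to `POST /upload/chunk/:uploadId`. A minimal client sketch against the routes in the removed server.js below (hypothetical; the shipped public/ client is suppressed in this diff, and the refactor moves these routes under `/api/upload`):

```js
// Upload a browser File object in 5 MB chunks: init once, then stream slices.
async function uploadFile(file) {
  const CHUNK_SIZE = 5 * 1024 * 1024;

  // 1) Register the upload and get an ID back.
  const init = await fetch('/upload/init', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filename: file.name, fileSize: file.size }),
  });
  const { uploadId } = await init.json();

  // 2) Send the file slice by slice as raw bytes.
  for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
    const res = await fetch(`/upload/chunk/${uploadId}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/octet-stream' },
      body: file.slice(offset, offset + CHUNK_SIZE),
    });
    const { progress } = await res.json();
    console.log(`${file.name}: ${progress}%`);
  }
}
```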
## Contributing

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes using conventional commits
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

See [Development Guide](dev/README.md) for local setup and guidelines.

---
Made with ❤️ by [DumbWare.io](https://dumbware.io)

## Future Features
- Camera Upload for Mobile
dev/.dockerignore (new file, 50 lines)
@@ -0,0 +1,50 @@
# Version control
.git
.gitignore

# Dependencies
node_modules
npm-debug.log
yarn-debug.log
yarn-error.log

# Environment variables
.env
.env.*
!.env.example

# Development
.vscode
.idea
*.swp
*.swo

# Build outputs
dist
build
coverage

# Local uploads (development only)
local_uploads

# Logs
logs
*.log

# System files
.DS_Store
Thumbs.db

# Docker
.docker
docker-compose*.yml
Dockerfile*

# Documentation
README.md
CHANGELOG.md
docs

# Development configurations
.editorconfig
nodemon.json
dev/.env.dev.example (new file, 22 lines)
@@ -0,0 +1,22 @@
# Development Environment Settings

# Server Configuration
PORT=3000                   # Development server port

# Upload Settings
MAX_FILE_SIZE=1024          # Maximum file size in MB for development
AUTO_UPLOAD=false           # Disable auto-upload by default in development
UPLOAD_DIR=../local_uploads # Local development upload directory

# Development Specific
DUMBDROP_TITLE=DumbDrop-Dev # Development environment indicator
DUMBDROP_PIN=123456         # Default development PIN (change in production)

# Optional Development Features
NODE_ENV=development        # Ensures development mode
DEBUG=dumbdrop:*            # Enable debug logging (if implemented)

# Development Notifications (Optional)
APPRISE_URL=                # Test notification endpoint
APPRISE_MESSAGE=[DEV] New file uploaded - {filename} ({size}), Storage used {storage}
APPRISE_SIZE_UNIT=auto
dev/Dockerfile.dev (new file, 46 lines)
@@ -0,0 +1,46 @@
# Base stage for shared configurations
FROM node:20-alpine as base

# Install python and create virtual environment with minimal dependencies
RUN apk add --no-cache python3 py3-pip && \
    python3 -m venv /opt/venv && \
    rm -rf /var/cache/apk/*

# Activate virtual environment and install apprise
RUN . /opt/venv/bin/activate && \
    pip install --no-cache-dir apprise && \
    find /opt/venv -type d -name "__pycache__" -exec rm -r {} +

# Add virtual environment to PATH
ENV PATH="/opt/venv/bin:$PATH"

WORKDIR /usr/src/app

# Dependencies stage
FROM base as deps

COPY package*.json ./
RUN npm ci --only=production && \
    npm cache clean --force

# Development stage
FROM deps as development
ENV NODE_ENV=development

# Install dev dependencies
RUN npm install && \
    npm cache clean --force

# Create upload directories
RUN mkdir -p uploads local_uploads

# Copy source with specific paths to avoid unnecessary files
COPY src/ ./src/
COPY public/ ./public/
COPY dev/ ./dev/
COPY .eslintrc.json .eslintignore ./

# Expose port
EXPOSE 3000

CMD ["npm", "run", "dev"]
dev/README.md (113 lines changed)
@@ -1,70 +1,73 @@
# DumbDrop Development

Because we're too dumb for complexity, development is super simple!
# DumbDrop Development Guide

## Quick Start

1. Clone this repo
2. Navigate to the `dev` directory
3. Use our dumb-simple development script:
1. Clone the repository:
   ```bash
   git clone https://github.com/yourusername/DumbDrop.git
   cd DumbDrop
   ```

```bash
# Start development environment
./dev.sh up
2. Set up development environment:
   ```bash
   cd dev
   cp .env.dev.example .env.dev
   ```

# Stop development environment
./dev.sh down
3. Start development server:
   ```bash
   docker-compose -f docker-compose.dev.yml up
   ```

# View logs
./dev.sh logs

# Rebuild without cache
./dev.sh rebuild

# Clean everything up
./dev.sh clean
```
The application will be available at http://localhost:3000 with hot-reloading enabled.

## Development Environment Features

Our development setup is sophisticatedly simple:

- Builds from local Dockerfile instead of pulling image
- Mounts local directory for live code changes
- Uses development-specific settings
- Adds helpful labels for container identification
- Hot-reloading for faster development

## Development-specific Settings

The `docker-compose.dev.yml` includes:
- Local volume mounts for live code updates
- Hot-reloading with nodemon
- Development-specific environment variables
- Container labels for easy identification
- Automatic container restart for development
- Local file storage in `../local_uploads`
- Debug logging enabled
- Development-specific notifications

### Node Modules Handling

Our volume setup uses a technique called "volume masking" for handling node_modules:
```yaml
volumes:
  - ../:/app          # Mount local code
  - /app/node_modules # Mask node_modules directory
```

This setup:
- Prevents local node_modules from interfering with container modules
- Preserves container's node_modules installed during build
- Avoids platform-specific module issues
- Keeps development simple and consistent across environments

## Directory Structure
## Project Structure

```
dev/
├── README.md              # You are here!
├── docker-compose.dev.yml # Development-specific Docker setup
└── dev.sh                 # Simple development helper script
DumbDrop/
├── dev/                   # Development configurations
│   ├── docker-compose.dev.yml
│   ├── .env.dev.example
│   └── README.md
├── src/                   # Application source code
├── public/                # Static assets
├── local_uploads/         # Development file storage
└── [Production files in root]
```

That's it! We told you it was dumb simple! If you need more complexity, you're probably in the wrong place!
## Development Workflow

1. Create feature branches from `main`:
   ```bash
   git checkout -b feature/your-feature-name
   ```

2. Make changes and test locally
3. Commit using conventional commits:
   ```bash
   feat: add new feature
   fix: resolve bug
   docs: update documentation
   ```

4. Push and create pull request

## Debugging

- Use `DEBUG=dumbdrop:*` for detailed logs
- Container shell access: `docker-compose -f docker-compose.dev.yml exec app sh`
- Logs: `docker-compose -f docker-compose.dev.yml logs -f app`

## Common Issues

1. Port conflicts: Change port in `.env.dev`
2. File permissions: Ensure proper ownership of `local_uploads`
3. Node modules: Remove and rebuild with `docker-compose -f docker-compose.dev.yml build --no-cache`
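The `DEBUG=dumbdrop:*` convention in the Debugging section above matches the npm `debug` package's namespace filtering. The env example marks it "(if implemented)", so the following is a sketch of what that would look like, not shipped behavior:

```js
// Namespaced debug logging; silent unless DEBUG=dumbdrop:* is set.
const createDebug = require('debug');
const debugUpload = createDebug('dumbdrop:upload');

debugUpload('received chunk (%d bytes)', 5 * 1024 * 1024);
```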
dev/dev.sh (61 lines changed)
@@ -1,37 +1,74 @@
#!/bin/bash

# Simple development helper script because we're too dumb for complexity
# Set script to exit on error
set -e

# Enable Docker BuildKit
export DOCKER_BUILDKIT=1

# Colors for pretty output
GREEN='\033[0;32m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m' # No Color

# Helper function for pretty printing
print_message() {
    echo -e "${BLUE}🔧 ${1}${NC}"
}

# Ensure we're in the right directory
cd "$(dirname "$0")"

case "$1" in
    "up")
        echo "🚀 Starting DumbDrop in development mode..."
        docker compose -f docker-compose.dev.yml up --build
        print_message "Starting DumbDrop in development mode..."
        if [ ! -f .env.dev ]; then
            print_message "No .env.dev found. Creating from example..."
            cp .env.dev.example .env.dev
        fi
        docker compose -f docker-compose.dev.yml up -d --build
        print_message "Container logs:"
        docker compose -f docker-compose.dev.yml logs
        ;;
    "down")
        echo "👋 Stopping DumbDrop development environment..."
        print_message "Stopping DumbDrop development environment..."
        docker compose -f docker-compose.dev.yml down
        ;;
    "logs")
        echo "📝 Showing DumbDrop logs..."
        print_message "Showing DumbDrop logs..."
        docker compose -f docker-compose.dev.yml logs -f
        ;;
    "rebuild")
        echo "🔨 Rebuilding DumbDrop..."
        print_message "Rebuilding DumbDrop..."
        docker compose -f docker-compose.dev.yml build --no-cache
        docker compose -f docker-compose.dev.yml up
        ;;
    "clean")
        echo "🧹 Cleaning up development environment..."
        docker compose -f docker-compose.dev.yml down -v
        print_message "Cleaning up development environment..."
        docker compose -f docker-compose.dev.yml down -v --remove-orphans
        rm -f .env.dev
        print_message "Cleaned up containers, volumes, and env file"
        ;;
    "shell")
        print_message "Opening shell in container..."
        docker compose -f docker-compose.dev.yml exec app sh
        ;;
    "lint")
        print_message "Running linter..."
        docker compose -f docker-compose.dev.yml exec app npm run lint
        ;;
    *)
        echo "DumbDrop Development Helper"
        echo -e "${GREEN}DumbDrop Development Helper${NC}"
        echo "Usage: ./dev.sh [command]"
        echo ""
        echo "Commands:"
        echo "  up      - Start development environment"
        echo "  up      - Start development environment (creates .env.dev if missing)"
        echo "  down    - Stop development environment"
        echo "  logs    - Show container logs"
        echo "  rebuild - Rebuild container without cache"
        echo "  clean   - Clean up everything"
        echo "  rebuild - Rebuild container without cache and start"
        echo "  clean   - Clean up everything (containers, volumes, env)"
        echo "  shell   - Open shell in container"
        echo "  lint    - Run linter"
        ;;
esac
dev/docker-compose.dev.yml
@@ -1,25 +1,30 @@
version: '3.8'

services:
  dumbdrop:
  app:
    build:
      context: ..
      dockerfile: Dockerfile
      dockerfile: dev/Dockerfile.dev
      target: development
      args:
        DOCKER_BUILDKIT: 1
      x-bake:
        options:
          dockerignore: dev/.dockerignore
    volumes:
      - ..:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - "3000:3000"
    volumes:
      - ../:/app
      - /app/node_modules
      - ../local_uploads:/app/uploads
    environment:
      NODE_ENV: development
      DUMBDROP_TITLE: DumbDrop-Dev
      MAX_FILE_SIZE: 1024
      DUMBDROP_PIN: 123456
      APPRISE_MESSAGE: "[DEV] New file uploaded - {filename} ({size}), Storage used {storage}"
      APPRISE_SIZE_UNIT: auto
      # Enable container restart during development
      - NODE_ENV=development
      - PORT=3000
      - MAX_FILE_SIZE=1024
      - AUTO_UPLOAD=false
      - DUMBDROP_TITLE=DumbDrop-Dev
    command: npm run dev
    restart: unless-stopped
    # Enable container debugging if needed
    # stdin_open: true
    # tty: true
    # Add development labels
    labels:
      - "dev.dumbware.environment=development"
docker-compose.yml
@@ -6,8 +6,17 @@ services:
    volumes:
      # Replace "./local_uploads" ( before the colon ) with the path where the files land
      - ./local_uploads:/app/uploads
    environment:
      DUMBDROP_TITLE: DumbDrop # Replace "DumbDrop" with the title you want to display
      MAX_FILE_SIZE: 1024 # Replace "1024" with the maximum file size you want to allow in MB
      DUMBDROP_PIN: 123456 # Replace "123456" with the pin you want to use
      AUTO_UPLOAD: false # Set to true if you don't want to have to click the upload button
    environment: # Environment variables for the DumbDrop service
      DUMBDROP_TITLE: DumbDrop # The title shown in the web interface
      MAX_FILE_SIZE: 1024 # Maximum file size in MB
      DUMBDROP_PIN: 123456 # Optional PIN protection (4-10 digits, leave empty to disable)
      AUTO_UPLOAD: true # Upload without clicking button

      # Additional available environment variables (commented out with defaults)
      # PORT: 3000 # Server port (default: 3000)
      # NODE_ENV: production # Node environment (development/production)
      # DEBUG: false # Debug mode for verbose logging (default: false in production, true in development)
      # APPRISE_URL: "" # Apprise notification URL for upload notifications (default: none)
      # APPRISE_MESSAGE: "New file uploaded - {filename} ({size}), Storage used {storage}" # Notification message template with placeholders: {filename}, {size}, {storage}
      # APPRISE_SIZE_UNIT: "Auto" # Size unit for notifications (B, KB, MB, GB, TB, or Auto)
      # ALLOWED_EXTENSIONS: ".jpg,.jpeg,.png,.pdf,.doc,.docx,.txt" # Comma-separated list of allowed file extensions (default: all allowed)
package-lock.json (generated, 5721 lines changed): diff suppressed because it is too large.
package.json (18 lines changed)
@@ -1,11 +1,13 @@
{
  "name": "dumbdrop",
  "version": "1.0.0",
  "main": "server.js",
  "main": "src/server.js",
  "scripts": {
    "start": "node server.js",
    "dev": "nodemon server.js",
    "test": "echo \"Error: no test specified\" && exit 1"
    "start": "node src/server.js",
    "dev": "nodemon --legacy-watch src/server.js",
    "lint": "eslint .",
    "lint:fix": "eslint . --fix",
    "format": "prettier --write ."
  },
  "keywords": [],
  "author": "",
@@ -21,7 +23,13 @@
    "multer": "^1.4.5-lts.1"
  },
  "devDependencies": {
    "eslint": "^8.56.0",
    "eslint-config-prettier": "^9.1.0",
    "eslint-plugin-node": "^11.1.0",
    "nodemon": "^3.1.9",
    "svg2img": "^1.0.0-beta.2"
    "prettier": "^3.2.5"
  },
  "engines": {
    "node": ">=20.0.0"
  }
}
File diff suppressed because it is too large.
@@ -38,7 +38,7 @@ body {
  background: var(--bg-color);
  display: flex;
  justify-content: center;
  align-items: center;
  padding-top: 2rem;
  color: var(--text-color);
  transition: background-color 0.3s ease, color 0.3s ease;
}
@@ -187,17 +187,22 @@ button:disabled {
  font-size: 0.9rem;
}

.progress-info {
.progress-status {
  display: flex;
  justify-content: space-between;
  align-items: center;
  font-size: 0.8rem;
  color: var(--text-color);
  opacity: 0.8;
  margin-top: 8px;
}

.progress-path {
  color: var(--text-color);
  opacity: 0.9;
  font-weight: 500;
  word-break: break-all;
.progress-info {
  text-align: left;
}

.progress-details {
  text-align: right;
}

.progress {
@@ -215,19 +220,6 @@ button:disabled {
  transition: width 0.3s ease;
}

.progress-status {
  display: flex;
  justify-content: space-between;
  align-items: center;
  font-size: 0.8rem;
  color: var(--text-color);
  opacity: 0.8;
}

.progress-details {
  text-align: right;
}

/* Modal Styles */
.modal {
  position: fixed;
server.js (deleted, 629 lines)
@@ -1,629 +0,0 @@
const express = require('express');
const multer = require('multer');
const path = require('path');
const cors = require('cors');
const fs = require('fs');
const crypto = require('crypto');
const cookieParser = require('cookie-parser');
const { exec } = require('child_process');
const util = require('util');
const execAsync = util.promisify(exec);
require('dotenv').config();

// Rate limiting setup
const rateLimit = require('express-rate-limit');

const app = express();
// Add this line to trust the first proxy
app.set('trust proxy', 1);
const port = process.env.PORT || 3000;
const uploadDir = './uploads'; // Local development
const maxFileSize = parseInt(process.env.MAX_FILE_SIZE || '1024') * 1024 * 1024; // Convert MB to bytes
const APPRISE_URL = process.env.APPRISE_URL;
const APPRISE_MESSAGE = process.env.APPRISE_MESSAGE || 'New file uploaded - {filename} ({size}), Storage used {storage}';
const siteTitle = process.env.DUMBDROP_TITLE || 'DumbDrop';
const APPRISE_SIZE_UNIT = process.env.APPRISE_SIZE_UNIT;
const AUTO_UPLOAD = process.env.AUTO_UPLOAD === 'true';

// Update the chunk size and rate limits
const CHUNK_SIZE = 5 * 1024 * 1024; // Increase to 5MB chunks

// Update rate limiters for large files
const initUploadLimiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute window
  max: 30, // 30 new upload initializations per minute
  message: { error: 'Too many upload attempts. Please wait before starting new uploads.' },
  standardHeaders: true,
  legacyHeaders: false
});

// Brute force protection setup
const loginAttempts = new Map(); // Stores IP addresses and their attempt counts
const MAX_ATTEMPTS = 5; // Maximum allowed attempts
const LOCKOUT_TIME = 15 * 60 * 1000; // 15 minutes in milliseconds

// Reset attempts for an IP
function resetAttempts(ip) {
  loginAttempts.delete(ip);
}

// Check if an IP is locked out
function isLockedOut(ip) {
  const attempts = loginAttempts.get(ip);
  if (!attempts) return false;

  if (attempts.count >= MAX_ATTEMPTS) {
    const timeElapsed = Date.now() - attempts.lastAttempt;
    if (timeElapsed < LOCKOUT_TIME) {
      return true;
    }
    resetAttempts(ip);
  }
  return false;
}

// Record an attempt for an IP
function recordAttempt(ip) {
  const attempts = loginAttempts.get(ip) || { count: 0, lastAttempt: 0 };
  attempts.count += 1;
  attempts.lastAttempt = Date.now();
  loginAttempts.set(ip, attempts);
  return attempts;
}

// Cleanup old lockouts every minute
setInterval(() => {
  const now = Date.now();
  for (const [ip, attempts] of loginAttempts.entries()) {
    if (now - attempts.lastAttempt >= LOCKOUT_TIME) {
      loginAttempts.delete(ip);
    }
  }
}, 60000);

// Validate and set PIN
const validatePin = (pin) => {
  if (!pin) return null;
  const cleanPin = pin.replace(/\D/g, ''); // Remove non-digits
  return cleanPin.length >= 4 && cleanPin.length <= 10 ? cleanPin : null;
};
const PIN = validatePin(process.env.DUMBDROP_PIN);

// Logging helper
const log = {
  info: (msg) => console.log(`[INFO] ${new Date().toISOString()} - ${msg}`),
  error: (msg) => console.error(`[ERROR] ${new Date().toISOString()} - ${msg}`),
  success: (msg) => console.log(`[SUCCESS] ${new Date().toISOString()} - ${msg}`)
};

// Helper function to ensure directory exists
async function ensureDirectoryExists(filePath) {
  const dir = path.dirname(filePath);
  try {
    await fs.promises.mkdir(dir, { recursive: true });
  } catch (err) {
    log.error(`Failed to create directory ${dir}: ${err.message}`);
    throw err;
  }
}

// Ensure upload directory exists
try {
  if (!fs.existsSync(uploadDir)) {
    fs.mkdirSync(uploadDir, { recursive: true });
    log.info(`Created upload directory: ${uploadDir}`);
  }
  fs.accessSync(uploadDir, fs.constants.W_OK);
  log.success(`Upload directory is writable: ${uploadDir}`);
  log.info(`Maximum file size set to: ${maxFileSize / (1024 * 1024)}MB`);
  if (PIN) {
    log.info('PIN protection enabled');
  }
} catch (err) {
  log.error(`Directory error: ${err.message}`);
  log.error(`Failed to access or create upload directory: ${uploadDir}`);
  log.error('Please check directory permissions and mounting');
  process.exit(1);
}

// Middleware
app.use(cors());
app.use(cookieParser());
app.use(express.json());

// Security headers middleware
app.use((req, res, next) => {
  // Content Security Policy
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; " +
    "style-src 'self' 'unsafe-inline' cdn.jsdelivr.net; " +
    "script-src 'self' 'unsafe-inline' cdn.jsdelivr.net; " +
    "img-src 'self' data: blob:;"
  );
  // X-Content-Type-Options
  res.setHeader('X-Content-Type-Options', 'nosniff');
  // X-Frame-Options
  res.setHeader('X-Frame-Options', 'SAMEORIGIN');
  // X-XSS-Protection
  res.setHeader('X-XSS-Protection', '1; mode=block');
  // Strict Transport Security (when in production)
  if (process.env.NODE_ENV === 'production') {
    res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  }
  next();
});

// Helper function for constant-time string comparison
function safeCompare(a, b) {
  if (typeof a !== 'string' || typeof b !== 'string') {
    return false;
  }

  // Use Node's built-in constant-time comparison
  return crypto.timingSafeEqual(
    Buffer.from(a.padEnd(32)),
    Buffer.from(b.padEnd(32))
  );
}

// Pin verification endpoint
app.post('/api/verify-pin', (req, res) => {
  const { pin } = req.body;
  const ip = req.ip;

  // If no PIN is set in env, always return success
  if (!PIN) {
    return res.json({ success: true });
  }

  // Check for lockout
  if (isLockedOut(ip)) {
    const attempts = loginAttempts.get(ip);
    const timeLeft = Math.ceil((LOCKOUT_TIME - (Date.now() - attempts.lastAttempt)) / 1000 / 60);
    return res.status(429).json({
      error: `Too many attempts. Please try again in ${timeLeft} minutes.`
    });
  }

  // Verify the PIN using constant-time comparison
  if (safeCompare(pin, PIN)) {
    // Reset attempts on successful login
    resetAttempts(ip);

    // Set secure cookie
    res.cookie('DUMBDROP_PIN', pin, {
      httpOnly: true,
      secure: process.env.NODE_ENV === 'production',
      sameSite: 'strict',
      path: '/'
    });
    res.json({ success: true });
  } else {
    // Record failed attempt
    const attempts = recordAttempt(ip);
    const attemptsLeft = MAX_ATTEMPTS - attempts.count;

    res.status(401).json({
      success: false,
      error: attemptsLeft > 0 ?
        `Invalid PIN. ${attemptsLeft} attempts remaining.` :
        'Too many attempts. Account locked for 15 minutes.'
    });
  }
});

// Check if PIN is required
app.get('/api/pin-required', (req, res) => {
  res.json({
    required: !!PIN,
    length: PIN ? PIN.length : 0
  });
});

// Pin protection middleware
const requirePin = (req, res, next) => {
  if (!PIN) {
    return next();
  }

  const providedPin = req.headers['x-pin'] || req.cookies.DUMBDROP_PIN;
  if (!safeCompare(providedPin, PIN)) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  next();
};

// Move the root and login routes before static file serving
app.get('/', (req, res) => {
  if (PIN && !safeCompare(req.cookies.DUMBDROP_PIN, PIN)) {
    return res.redirect('/login.html');
  }
  // Read the file and replace the title
  let html = fs.readFileSync(path.join(__dirname, 'public', 'index.html'), 'utf8');
  html = html.replace(/{{SITE_TITLE}}/g, siteTitle);
  html = html.replace('{{AUTO_UPLOAD}}', AUTO_UPLOAD.toString());
  res.send(html);
});

app.get('/login.html', (req, res) => {
  let html = fs.readFileSync(path.join(__dirname, 'public', 'login.html'), 'utf8');
  html = html.replace(/{{SITE_TITLE}}/g, siteTitle); // Use global replace
  res.send(html);
});

// Move static file serving after our dynamic routes
app.use(express.static('public'));

// PIN protection middleware should be before the routes that need protection
app.use('/upload', requirePin);

// Store ongoing uploads
const uploads = new Map();
// Store folder name mappings for batch uploads with timestamps
const folderMappings = new Map();
// Store batch IDs for folder uploads
const batchUploads = new Map();
// Store batch activity timestamps
const batchActivity = new Map();

// Add cleanup interval for inactive batches
setInterval(() => {
  const now = Date.now();
  for (const [batchId, lastActivity] of batchActivity.entries()) {
    if (now - lastActivity >= 5 * 60 * 1000) { // 5 minutes of inactivity
      // Clean up all folder mappings for this batch
      for (const key of folderMappings.keys()) {
        if (key.endsWith(`-${batchId}`)) {
          folderMappings.delete(key);
        }
      }
      batchActivity.delete(batchId);
      log.info(`Cleaned up folder mappings for inactive batch: ${batchId}`);
    }
  }
}, 60000); // Check every minute

// Add these helper functions before the routes
async function getUniqueFilePath(filePath) {
  const dir = path.dirname(filePath);
  const ext = path.extname(filePath);
  const baseName = path.basename(filePath, ext);
  let counter = 1;
  let finalPath = filePath;

  while (true) {
    try {
      // Try to create the file exclusively - will fail if file exists
      const fileHandle = await fs.promises.open(finalPath, 'wx');
      // Return both the path and handle instead of closing it
      return { path: finalPath, handle: fileHandle };
    } catch (err) {
      if (err.code === 'EEXIST') {
        // File exists, try next number
        finalPath = path.join(dir, `${baseName} (${counter})${ext}`);
        counter++;
      } else {
        throw err; // Other errors should be handled by caller
      }
    }
  }
}

async function getUniqueFolderPath(folderPath) {
  let counter = 1;
  let finalPath = folderPath;

  while (true) {
    try {
      // Try to create the directory - mkdir with recursive:false is atomic
      await fs.promises.mkdir(finalPath, { recursive: false });
      return finalPath;
    } catch (err) {
      if (err.code === 'EEXIST') {
        // Folder exists, try next number
        finalPath = `${folderPath} (${counter})`;
        counter++;
      } else if (err.code === 'ENOENT') {
        // Parent directory doesn't exist, create it first
        await fs.promises.mkdir(path.dirname(finalPath), { recursive: true });
        // Then try again with the same path
        continue;
      } else {
        throw err; // Other errors should be handled by caller
      }
    }
  }
}

// Validate batch ID format
function isValidBatchId(batchId) {
  // Batch ID should be in format: timestamp-randomstring
  return /^\d+-[a-z0-9]{9}$/.test(batchId);
}

// Routes
app.post('/upload/init', initUploadLimiter, async (req, res) => {
  const { filename, fileSize } = req.body;
  let batchId = req.headers['x-batch-id'];

  // For single file uploads without a batch ID, generate one
  if (!batchId) {
    const timestamp = Date.now();
    const randomStr = crypto.randomBytes(4).toString('hex').substring(0, 9);
    batchId = `${timestamp}-${randomStr}`;
  } else if (!isValidBatchId(batchId)) {
    log.error('Invalid batch ID format');
    return res.status(400).json({ error: 'Invalid batch ID format' });
  }

  // Always update batch activity timestamp for any upload
  batchActivity.set(batchId, Date.now());

  const safeFilename = path.normalize(filename).replace(/^(\.\.(\/|\\|$))+/, '');

  // Validate file extension
  const allowedExtensions = process.env.ALLOWED_EXTENSIONS ?
    process.env.ALLOWED_EXTENSIONS.split(',').map(ext => ext.trim().toLowerCase()) :
    null;

  if (allowedExtensions) {
    const fileExt = path.extname(safeFilename).toLowerCase();
    if (!allowedExtensions.includes(fileExt)) {
      log.error(`File type ${fileExt} not allowed`);
      return res.status(400).json({
        error: 'File type not allowed',
        allowedExtensions
      });
    }
  }

  // Check file size limit
  if (fileSize > maxFileSize) {
    log.error(`File size ${fileSize} bytes exceeds limit of ${maxFileSize} bytes`);
    return res.status(413).json({
      error: 'File too large',
      limit: maxFileSize,
      limitInMB: maxFileSize / (1024 * 1024)
    });
  }

  const uploadId = crypto.randomBytes(16).toString('hex');
  let filePath = path.join(uploadDir, safeFilename);
  let fileHandle;

  try {
    // Handle file/folder duplication
    const pathParts = safeFilename.split('/');

    if (pathParts.length > 1) {
      // This is a file within a folder
      const originalFolderName = pathParts[0];
      const folderPath = path.join(uploadDir, originalFolderName);

      // Check if we already have a mapping for this folder in this batch
      let newFolderName = folderMappings.get(`${originalFolderName}-${batchId}`);

      if (!newFolderName) {
        try {
          // Try to create the folder atomically first
          await fs.promises.mkdir(folderPath, { recursive: false });
          newFolderName = originalFolderName;
        } catch (err) {
          if (err.code === 'EEXIST') {
            // Folder exists, get a unique name
            const uniqueFolderPath = await getUniqueFolderPath(folderPath);
            newFolderName = path.basename(uniqueFolderPath);
            log.info(`Folder "${originalFolderName}" exists, using "${newFolderName}" instead`);
          } else {
            throw err;
          }
        }

        folderMappings.set(`${originalFolderName}-${batchId}`, newFolderName);
      }

      // Replace the original folder path with the mapped one and keep original file name
      pathParts[0] = newFolderName;
      filePath = path.join(uploadDir, ...pathParts);

      // Ensure parent directories exist
      await fs.promises.mkdir(path.dirname(filePath), { recursive: true });
    }

    // For both single files and files in folders, get a unique path and file handle
    const result = await getUniqueFilePath(filePath);
    filePath = result.path;
    fileHandle = result.handle;

    // Create upload entry (using the file handle we already have)
    uploads.set(uploadId, {
      safeFilename: path.relative(uploadDir, filePath),
      filePath,
      fileSize,
      bytesReceived: 0,
      writeStream: fileHandle.createWriteStream()
    });

    log.info(`Initialized upload for ${path.relative(uploadDir, filePath)} (${fileSize} bytes)`);
    res.json({ uploadId });
  } catch (err) {
    // Clean up file handle if something went wrong
    if (fileHandle) {
      await fileHandle.close().catch(() => {});
      // Try to remove the file if it was created
      fs.unlink(filePath).catch(() => {});
    }
    log.error(`Failed to initialize upload: ${err.message}`);
    res.status(500).json({ error: 'Failed to initialize upload' });
  }
});

app.post('/upload/chunk/:uploadId', express.raw({
  limit: '10mb',
  type: 'application/octet-stream'
}), async (req, res) => {
  const { uploadId } = req.params;
  const upload = uploads.get(uploadId);
  const chunkSize = req.body.length;

  if (!upload) {
    return res.status(404).json({ error: 'Upload not found' });
  }

  try {
    // Get the batch ID from the request headers
    const batchId = req.headers['x-batch-id'];
    if (batchId && isValidBatchId(batchId)) {
      // Update batch activity timestamp
      batchActivity.set(batchId, Date.now());
    }

    upload.writeStream.write(Buffer.from(req.body));
    upload.bytesReceived += chunkSize;

    const progress = Math.round((upload.bytesReceived / upload.fileSize) * 100);
    log.info(`Received chunk for ${upload.safeFilename}: ${progress}%`);

    res.json({
      bytesReceived: upload.bytesReceived,
      progress
    });

    // Check if upload is complete
    if (upload.bytesReceived >= upload.fileSize) {
      upload.writeStream.end();
      uploads.delete(uploadId);
      log.success(`Upload completed: ${upload.safeFilename}`);

      // Update notification call to use safeFilename
      await sendNotification(upload.safeFilename, upload.fileSize);
    }
  } catch (err) {
    log.error(`Chunk upload failed: ${err.message}`);
    res.status(500).json({ error: 'Failed to process chunk' });
  }
});

app.post('/upload/cancel/:uploadId', (req, res) => {
  const { uploadId } = req.params;
  const upload = uploads.get(uploadId);

  if (upload) {
    upload.writeStream.end();
    fs.unlink(upload.filePath, (err) => {
      if (err) log.error(`Failed to delete incomplete upload: ${err.message}`);
    });
    uploads.delete(uploadId);
    log.info(`Upload cancelled: ${upload.safeFilename}`);
  }

  res.json({ message: 'Upload cancelled' });
});

// Error handling middleware
app.use((err, req, res, next) => {
  log.error(`Unhandled error: ${err.message}`);
  res.status(500).json({ message: 'Internal server error', error: err.message });
});

// Start server
app.listen(port, () => {
  log.info(`Server running at http://localhost:${port}`);
  log.info(`Upload directory: ${uploadDir}`);

  // Log custom title if set
  if (process.env.DUMBDROP_TITLE) {
    log.info(`Custom title set to: ${siteTitle}`);
  }

  // Add auto upload status logging
  log.info(`Auto upload is ${AUTO_UPLOAD ? 'enabled' : 'disabled'}`);

  // Add Apprise configuration logging
  if (APPRISE_URL) {
    log.info('Apprise notifications enabled');
  } else {
    log.info('Apprise notifications disabled - no URL configured');
  }

  // List directory contents
  try {
    const files = fs.readdirSync(uploadDir);
    log.info(`Current directory contents (${files.length} files):`);
    files.forEach(file => {
      log.info(`- ${file}`);
    });
  } catch (err) {
    log.error(`Failed to list directory contents: ${err.message}`);
  }
});

// Remove async from formatFileSize function
function formatFileSize(bytes) {
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  let size = bytes;
  let unitIndex = 0;

  // If a specific unit is requested
  if (APPRISE_SIZE_UNIT) {
    const requestedUnit = APPRISE_SIZE_UNIT.toUpperCase();
    const unitIndex = units.indexOf(requestedUnit);
    if (unitIndex !== -1) {
      size = bytes / Math.pow(1024, unitIndex);
      return size.toFixed(2) + requestedUnit;
    }
  }

  // Auto format to nearest unit
  while (size >= 1024 && unitIndex < units.length - 1) {
    size /= 1024;
    unitIndex++;
  }

  // Round to 2 decimal places
  return size.toFixed(2) + units[unitIndex];
}

// Add this helper function
function calculateDirectorySize(directoryPath) {
  let totalSize = 0;
  const files = fs.readdirSync(directoryPath);

  files.forEach(file => {
    const filePath = path.join(directoryPath, file);
    const stats = fs.statSync(filePath);
    if (stats.isFile()) {
      totalSize += stats.size;
    }
  });

  return totalSize;
}

// Modify the sendNotification function to safely escape the message
async function sendNotification(filename, fileSize) {
  if (!APPRISE_URL) return;

  try {
    const formattedSize = formatFileSize(fileSize);
    const totalStorage = formatFileSize(calculateDirectorySize(uploadDir));

    // Sanitize the message components
    const sanitizedFilename = JSON.stringify(filename).slice(1, -1); // Escape special characters
    const message = APPRISE_MESSAGE
      .replace('{filename}', sanitizedFilename)
      .replace('{size}', formattedSize)
      .replace('{storage}', totalStorage);

    // Use a string command instead of an array
    const command = `apprise ${APPRISE_URL} -b "${message}"`;
    await execAsync(command, {
      shell: true
    });

    log.info(`Notification sent for: ${sanitizedFilename} (${formattedSize}, Total storage: ${totalStorage})`);
  } catch (err) {
    log.error(`Failed to send notification: ${err.message}`);
  }
}
src/app.js (new file, 101 lines)
@@ -0,0 +1,101 @@
/**
 * Main application setup and configuration.
 * Initializes Express app, middleware, routes, and static file serving.
 * Handles core application bootstrapping and configuration validation.
 */

const express = require('express');
const cors = require('cors');
const cookieParser = require('cookie-parser');
const path = require('path');
const fs = require('fs');

const { config, validateConfig } = require('./config');
const logger = require('./utils/logger');
const { ensureDirectoryExists } = require('./utils/fileUtils');
const { securityHeaders, requirePin } = require('./middleware/security');
const { initUploadLimiter, pinVerifyLimiter, downloadLimiter } = require('./middleware/rateLimiter');

// Create Express app
const app = express();

// Add this line to trust the first proxy
app.set('trust proxy', 1);

// Middleware setup
app.use(cors());
app.use(cookieParser());
app.use(express.json());
app.use(securityHeaders);

// Import routes
const { router: uploadRouter } = require('./routes/upload');
const fileRoutes = require('./routes/files');
const authRoutes = require('./routes/auth');

// Use routes with appropriate middleware
app.use('/api/auth', pinVerifyLimiter, authRoutes);
app.use('/api/upload', requirePin(config.pin), initUploadLimiter, uploadRouter);
app.use('/api/files', requirePin(config.pin), downloadLimiter, fileRoutes);

// Root route
app.get('/', (req, res) => {
  if (config.pin && !req.cookies.DUMBDROP_PIN) {
    return res.redirect('/login.html');
  }

  let html = fs.readFileSync(path.join(__dirname, '../public', 'index.html'), 'utf8');
  html = html.replace(/{{SITE_TITLE}}/g, config.siteTitle);
  html = html.replace('{{AUTO_UPLOAD}}', config.autoUpload.toString());
  res.send(html);
});

// Login route
app.get('/login.html', (req, res) => {
  let html = fs.readFileSync(path.join(__dirname, '../public', 'login.html'), 'utf8');
  html = html.replace(/{{SITE_TITLE}}/g, config.siteTitle);
  res.send(html);
});

// Serve static files
app.use(express.static('public'));

// Error handling middleware
app.use((err, req, res, next) => { // eslint-disable-line no-unused-vars
  logger.error(`Unhandled error: ${err.message}`);
  res.status(500).json({
    message: 'Internal server error',
    error: process.env.NODE_ENV === 'development' ? err.message : undefined
  });
});

/**
 * Initialize the application
 * Sets up required directories and validates configuration
 */
async function initialize() {
  try {
    // Validate configuration
    validateConfig();

    // Ensure upload directory exists and is writable
    await ensureDirectoryExists(config.uploadDir);

    // Log configuration
    logger.info(`Maximum file size set to: ${config.maxFileSize / (1024 * 1024)}MB`);
    if (config.pin) {
      logger.info('PIN protection enabled');
    }
    logger.info(`Auto upload is ${config.autoUpload ? 'enabled' : 'disabled'}`);
    if (config.appriseUrl) {
      logger.info('Apprise notifications enabled');
    }

    return app;
  } catch (err) {
    logger.error(`Initialization failed: ${err.message}`);
    throw err;
  }
}

module.exports = { app, initialize, config };
96  src/config/index.js  Normal file
@@ -0,0 +1,96 @@
require('dotenv').config();
const { validatePin } = require('../utils/security');
const logger = require('../utils/logger');
const fs = require('fs');

/**
 * Get the host path from Docker mount point
 * @returns {string} Host path or fallback to container path
 */
function getHostPath() {
  try {
    // Read Docker mountinfo to get the host path
    const mountInfo = fs.readFileSync('/proc/self/mountinfo', 'utf8');
    const lines = mountInfo.split('\n');

    // Find the line containing our upload directory
    const uploadMount = lines.find(line => line.includes('/app/uploads'));
    if (uploadMount) {
      // Extract the host path from the mount info
      const parts = uploadMount.split(' ');
      // The host path is typically in the 4th space-separated field
      const hostPath = parts[3];
      return hostPath;
    }
  } catch (err) {
    logger.debug('Could not determine host path from mount info');
  }

  // Fallback to container path if we can't determine host path
  return '/app/uploads';
}

/**
 * Application configuration
 * Loads and validates environment variables
 */
const config = {
  // Server settings
  port: process.env.PORT || 3000,
  nodeEnv: process.env.NODE_ENV || 'development',

  // Upload settings
  uploadDir: '/app/uploads', // Internal Docker path
  uploadDisplayPath: getHostPath(), // Dynamically determined from Docker mount
  maxFileSize: (() => {
    const sizeInMB = parseInt(process.env.MAX_FILE_SIZE || '1024', 10);
    if (isNaN(sizeInMB) || sizeInMB <= 0) {
      throw new Error('MAX_FILE_SIZE must be a positive number');
    }
    return sizeInMB * 1024 * 1024; // Convert MB to bytes
  })(),
  autoUpload: process.env.AUTO_UPLOAD === 'true',

  // Security
  pin: validatePin(process.env.DUMBDROP_PIN),

  // UI settings
  siteTitle: process.env.DUMBDROP_TITLE || 'DumbDrop',

  // Notification settings
  appriseUrl: process.env.APPRISE_URL,
  appriseMessage: process.env.APPRISE_MESSAGE || 'New file uploaded - {filename} ({size}), Storage used {storage}',
  appriseSizeUnit: process.env.APPRISE_SIZE_UNIT,

  // File extensions
  allowedExtensions: process.env.ALLOWED_EXTENSIONS ?
    process.env.ALLOWED_EXTENSIONS.split(',').map(ext => ext.trim().toLowerCase()) :
    null
};

// Validate required settings
function validateConfig() {
  const errors = [];

  if (config.maxFileSize <= 0) {
    errors.push('MAX_FILE_SIZE must be greater than 0');
  }

  if (config.nodeEnv === 'production') {
    if (!config.appriseUrl) {
      logger.info('Notifications disabled - No Configuration');
    }
  }

  if (errors.length > 0) {
    throw new Error('Configuration validation failed:\n' + errors.join('\n'));
  }
}

// Freeze configuration to prevent modifications
Object.freeze(config);

module.exports = {
  config,
  validateConfig
};
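A quick worked example of the MAX_FILE_SIZE conversion above; the value 512 is illustrative, not a project default:

// MAX_FILE_SIZE is read in megabytes and converted to bytes at startup.
const sizeInMB = parseInt('512', 10);       // e.g. MAX_FILE_SIZE=512
console.log(sizeInMB * 1024 * 1024);        // 536870912 (512MB in bytes)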
82  src/middleware/rateLimiter.js  Normal file
@@ -0,0 +1,82 @@
const rateLimit = require('express-rate-limit');
const { registerCleanupTask } = require('../utils/cleanup');

// Create rate limiters
const createLimiter = (options) => {
  const limiter = rateLimit(options);
  // Register cleanup for the rate limiter's store
  if (limiter.store && typeof limiter.store.resetAll === 'function') {
    registerCleanupTask(async () => {
      await limiter.store.resetAll();
    });
  }
  return limiter;
};

/**
 * Rate limiter for upload initialization
 * Limits the number of new upload jobs/batches that can be started
 * Does not limit the number of files within a batch or chunks within a file
 */
const initUploadLimiter = createLimiter({
  windowMs: 60 * 1000, // 1 minute window
  max: 30, // 30 upload jobs per minute
  message: {
    error: 'Too many upload jobs started. Please wait before starting new uploads.'
  },
  standardHeaders: true,
  legacyHeaders: false,
  // Skip rate limiting for chunk uploads within an existing batch
  skip: (req) => {
    return req.headers['x-batch-id'] !== undefined;
  }
});

/**
 * Rate limiter for chunk uploads
 * More permissive to allow large file uploads
 */
const chunkUploadLimiter = createLimiter({
  windowMs: 60 * 1000, // 1 minute window
  max: 300, // 300 chunks per minute (5 per second)
  message: {
    error: 'Upload rate limit exceeded. Please wait before continuing.'
  },
  standardHeaders: true,
  legacyHeaders: false
});

/**
 * Rate limiter for PIN verification attempts
 * Prevents brute force attacks
 */
const pinVerifyLimiter = createLimiter({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 5, // 5 attempts per 15 minutes
  message: {
    error: 'Too many PIN verification attempts. Please try again later.'
  },
  standardHeaders: true,
  legacyHeaders: false
});

/**
 * Rate limiter for file downloads
 * Prevents abuse of the download system
 */
const downloadLimiter = createLimiter({
  windowMs: 60 * 1000, // 1 minute window
  max: 60, // 60 downloads per minute
  message: {
    error: 'Download rate limit exceeded. Please wait before downloading more files.'
  },
  standardHeaders: true,
  legacyHeaders: false
});

module.exports = {
  initUploadLimiter,
  chunkUploadLimiter,
  pinVerifyLimiter,
  downloadLimiter
};
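A minimal client-side sketch of the init limiter's skip rule: a request carrying an X-Batch-Id header is not counted against the 30-jobs-per-minute budget. The batch ID value is illustrative but follows the timestamp-[9 chars] format validated in routes/upload.js; global fetch assumes Node 18+ or a browser, inside an async context.

const batchId = `${Date.now()}-abc123def`; // illustrative, matches /^\d+-[a-z0-9]{9}$/
await fetch('/api/upload/init', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', 'X-Batch-Id': batchId },
  body: JSON.stringify({ filename: 'photo.jpg', fileSize: 1024 })
}); // skipped by initUploadLimiter because X-Batch-Id is present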
81  src/middleware/security.js  Normal file
@@ -0,0 +1,81 @@
/**
 * Security middleware implementations for HTTP-level protection.
 * Sets security headers (CSP, HSTS) and implements PIN-based authentication.
 * Provides Express middleware for securing routes and responses.
 */

const { safeCompare } = require('../utils/security');
const logger = require('../utils/logger');

/**
 * Security headers middleware
 */
function securityHeaders(req, res, next) {
  // Content Security Policy
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; " +
    "style-src 'self' 'unsafe-inline' cdn.jsdelivr.net; " +
    "script-src 'self' 'unsafe-inline' cdn.jsdelivr.net; " +
    "img-src 'self' data: blob:;"
  );

  // X-Content-Type-Options
  res.setHeader('X-Content-Type-Options', 'nosniff');

  // X-Frame-Options
  res.setHeader('X-Frame-Options', 'SAMEORIGIN');

  // X-XSS-Protection
  res.setHeader('X-XSS-Protection', '1; mode=block');

  // Strict Transport Security (when in production)
  if (process.env.NODE_ENV === 'production') {
    res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  }

  next();
}

/**
 * PIN protection middleware
 * @param {string} PIN - Valid PIN for comparison
 */
function requirePin(PIN) {
  return (req, res, next) => {
    // Skip PIN check if no PIN is configured
    if (!PIN) {
      return next();
    }

    // Check cookie first
    const cookiePin = req.cookies?.DUMBDROP_PIN;
    if (cookiePin && safeCompare(cookiePin, PIN)) {
      return next();
    }

    // Check header as fallback
    const headerPin = req.headers['x-pin'];
    if (headerPin && safeCompare(headerPin, PIN)) {
      // Set cookie for subsequent requests with enhanced security
      const cookieOptions = {
        httpOnly: true, // Always enable HttpOnly
        secure: req.secure || req.headers['x-forwarded-proto'] === 'https', // Enable secure flag only if the request is over HTTPS
        sameSite: 'strict',
        path: '/',
        maxAge: 24 * 60 * 60 * 1000 // 24 hour expiry
      };

      res.cookie('DUMBDROP_PIN', headerPin, cookieOptions);
      return next();
    }

    logger.warn(`Unauthorized access attempt from IP: ${req.ip}`);
    res.status(401).json({ error: 'Unauthorized' });
  };
}

module.exports = {
  securityHeaders,
  requirePin
};
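A sketch of the handshake this middleware enables, assuming a cookie-aware client such as a browser: the first request authenticates via the X-PIN header, after which the HttpOnly cookie set by the server carries authentication. The PIN value is illustrative.

// First request: PIN in the X-PIN header; server sets the DUMBDROP_PIN cookie.
await fetch('/api/files', { headers: { 'X-PIN': '1234' } });
// Later requests: the cookie is sent automatically, no header needed.
await fetch('/api/files');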
119  src/routes/auth.js  Normal file
@@ -0,0 +1,119 @@
const express = require('express');
const router = express.Router();
const { config } = require('../config');
const logger = require('../utils/logger');
const {
  validatePin,
  safeCompare,
  isLockedOut,
  recordAttempt,
  resetAttempts,
  MAX_ATTEMPTS,
  LOCKOUT_DURATION
} = require('../utils/security');

/**
 * Verify PIN
 */
router.post('/verify-pin', (req, res) => {
  const { pin } = req.body;
  const ip = req.ip;

  try {
    // If no PIN is set in config, always return success
    if (!config.pin) {
      res.cookie('DUMBDROP_PIN', '', {
        httpOnly: true,
        secure: process.env.NODE_ENV === 'production',
        sameSite: 'strict',
        path: '/'
      });
      return res.json({ success: true });
    }

    // Validate PIN format
    const cleanedPin = validatePin(pin);
    if (!cleanedPin) {
      logger.warn(`Invalid PIN format from IP: ${ip}`);
      return res.status(401).json({
        error: 'Invalid PIN format. PIN must be 4-10 digits.'
      });
    }

    // Check for lockout
    if (isLockedOut(ip)) {
      const attempts = recordAttempt(ip);
      const timeLeft = Math.ceil(
        (LOCKOUT_DURATION - (Date.now() - attempts.lastAttempt)) / 1000 / 60
      );

      logger.warn(`Login attempt from locked out IP: ${ip}`);
      return res.status(429).json({
        error: `Too many attempts. Please try again in ${timeLeft} minutes.`
      });
    }

    // Verify the PIN using constant-time comparison
    if (safeCompare(cleanedPin, config.pin)) {
      // Reset attempts on successful login
      resetAttempts(ip);

      // Set secure cookie with cleaned PIN
      res.cookie('DUMBDROP_PIN', cleanedPin, {
        httpOnly: true,
        secure: process.env.NODE_ENV === 'production',
        sameSite: 'strict',
        path: '/'
      });

      logger.info(`Successful PIN verification from IP: ${ip}`);
      res.json({ success: true });
    } else {
      // Record failed attempt
      const attempts = recordAttempt(ip);
      const attemptsLeft = MAX_ATTEMPTS - attempts.count;

      logger.warn(`Failed PIN verification from IP: ${ip} (${attemptsLeft} attempts remaining)`);
      res.status(401).json({
        success: false,
        error: attemptsLeft > 0 ?
          `Invalid PIN. ${attemptsLeft} attempts remaining.` :
          'Too many attempts. Account locked for 15 minutes.'
      });
    }
  } catch (err) {
    logger.error(`PIN verification error: ${err.message}`);
    res.status(500).json({ error: 'Authentication failed' });
  }
});

/**
 * Check if PIN protection is enabled
 */
router.get('/pin-required', (req, res) => {
  try {
    res.json({
      required: !!config.pin,
      length: config.pin ? config.pin.length : 0
    });
  } catch (err) {
    logger.error(`PIN check error: ${err.message}`);
    res.status(500).json({ error: 'Failed to check PIN status' });
  }
});

/**
 * Logout (clear PIN cookie)
 */
router.post('/logout', (req, res) => {
  try {
    res.clearCookie('DUMBDROP_PIN', { path: '/' });
    logger.info(`Logout successful for IP: ${req.ip}`);
    res.json({ success: true });
  } catch (err) {
    logger.error(`Logout error: ${err.message}`);
    res.status(500).json({ error: 'Logout failed' });
  }
});

module.exports = router;
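For reference, a hedged sketch of the verification flow from the client side (PIN value illustrative, response shapes taken from the handler above):

const res = await fetch('/api/auth/verify-pin', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ pin: '1234' })
});
// 200 { success: true } on a match, 401 with attempts remaining on a miss,
// 429 once the IP is locked out.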
133  src/routes/files.js  Normal file
@@ -0,0 +1,133 @@
/**
 * File management and listing route handlers.
 * Provides endpoints for listing, downloading, and managing uploaded files.
 * Handles file metadata, stats, and directory operations.
 */

const express = require('express');
const router = express.Router();
const path = require('path');
const fs = require('fs').promises;
const { config } = require('../config');
const logger = require('../utils/logger');
const { formatFileSize } = require('../utils/fileUtils');

/**
 * Get file information
 */
router.get('/:filename/info', async (req, res) => {
  const filePath = path.join(config.uploadDir, req.params.filename);

  try {
    const stats = await fs.stat(filePath);
    const fileInfo = {
      filename: req.params.filename,
      size: stats.size,
      formattedSize: formatFileSize(stats.size),
      uploadDate: stats.mtime,
      mimetype: path.extname(req.params.filename).slice(1)
    };

    res.json(fileInfo);
  } catch (err) {
    logger.error(`Failed to get file info: ${err.message}`);
    res.status(404).json({ error: 'File not found' });
  }
});

/**
 * Download file
 */
router.get('/:filename/download', async (req, res) => {
  const filePath = path.join(config.uploadDir, req.params.filename);

  try {
    await fs.access(filePath);

    // Set headers for download
    res.setHeader('Content-Disposition', `attachment; filename="${req.params.filename}"`);
    res.setHeader('Content-Type', 'application/octet-stream');

    // Stream the file
    const fileStream = require('fs').createReadStream(filePath);
    fileStream.pipe(res);

    // Handle errors during streaming
    fileStream.on('error', (err) => {
      logger.error(`File streaming error: ${err.message}`);
      if (!res.headersSent) {
        res.status(500).json({ error: 'Failed to download file' });
      }
    });

    logger.info(`File download started: ${req.params.filename}`);
  } catch (err) {
    logger.error(`File download failed: ${err.message}`);
    res.status(404).json({ error: 'File not found' });
  }
});

/**
 * List all files
 */
router.get('/', async (req, res) => {
  try {
    const files = await fs.readdir(config.uploadDir);

    // Get stats for all files first
    const fileStatsPromises = files.map(async filename => {
      try {
        const stats = await fs.stat(path.join(config.uploadDir, filename));
        return { filename, stats, valid: stats.isFile() };
      } catch (err) {
        logger.error(`Failed to get stats for file ${filename}: ${err.message}`);
        return { filename, valid: false };
      }
    });

    const fileStats = await Promise.all(fileStatsPromises);

    // Filter and map valid files
    const fileList = fileStats
      .filter(file => file.valid)
      .map(({ filename, stats }) => ({
        filename,
        size: stats.size,
        formattedSize: formatFileSize(stats.size),
        uploadDate: stats.mtime
      }));

    // Sort files by upload date (newest first)
    fileList.sort((a, b) => b.uploadDate - a.uploadDate);

    res.json({
      files: fileList,
      totalFiles: fileList.length,
      totalSize: fileList.reduce((acc, file) => acc + file.size, 0)
    });
  } catch (err) {
    logger.error(`Failed to list files: ${err.message}`);
    res.status(500).json({ error: 'Failed to list files' });
  }
});

/**
 * Delete file
 */
router.delete('/:filename', async (req, res) => {
  const filePath = path.join(config.uploadDir, req.params.filename);

  try {
    await fs.access(filePath);
    await fs.unlink(filePath);
    logger.info(`File deleted: ${req.params.filename}`);
    res.json({ message: 'File deleted successfully' });
  } catch (err) {
    logger.error(`File deletion failed: ${err.message}`);
    res.status(err.code === 'ENOENT' ? 404 : 500).json({
      error: err.code === 'ENOENT' ? 'File not found' : 'Failed to delete file'
    });
  }
});

module.exports = router;
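A small consumption sketch for the listing endpoint, with the response shape taken from the handler above:

const { files, totalFiles, totalSize } = await (await fetch('/api/files')).json();
// files: [{ filename, size, formattedSize, uploadDate }], sorted newest first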
410  src/routes/upload.js  Normal file
@@ -0,0 +1,410 @@
/**
 * File upload route handlers and batch upload management.
 * Handles file uploads, chunked transfers, and folder creation.
 * Manages upload sessions, batch timeouts, and cleanup.
 */

const express = require('express');
const router = express.Router();
const crypto = require('crypto');
const path = require('path');
const { config } = require('../config');
const logger = require('../utils/logger');
const { getUniqueFilePath, getUniqueFolderPath } = require('../utils/fileUtils');
const { sendNotification } = require('../services/notifications');
const fs = require('fs');
const { cleanupIncompleteUploads } = require('../utils/cleanup');

// Store ongoing uploads
const uploads = new Map();
// Store folder name mappings for batch uploads with timestamps
const folderMappings = new Map();
// Store batch activity timestamps
const batchActivity = new Map();
// Store upload to batch mappings
const uploadToBatch = new Map();

const BATCH_TIMEOUT = 30 * 60 * 1000; // 30 minutes

let cleanupInterval;

/**
 * Start the cleanup interval for inactive batches
 * @returns {NodeJS.Timeout} The interval handle
 */
function startBatchCleanup() {
  if (cleanupInterval) {
    clearInterval(cleanupInterval);
  }

  cleanupInterval = setInterval(() => {
    const now = Date.now();
    logger.info(`Running batch cleanup, checking ${batchActivity.size} active batches`);

    for (const [batchId, lastActivity] of batchActivity.entries()) {
      if (now - lastActivity >= BATCH_TIMEOUT) {
        logger.info(`Cleaning up inactive batch: ${batchId}`);
        batchActivity.delete(batchId);
      }
    }
  }, 5 * 60 * 1000); // 5 minutes

  return cleanupInterval;
}

/**
 * Stop the batch cleanup interval
 */
function stopBatchCleanup() {
  if (cleanupInterval) {
    clearInterval(cleanupInterval);
    cleanupInterval = null;
  }
}

// Start cleanup interval unless disabled
if (!process.env.DISABLE_BATCH_CLEANUP) {
  startBatchCleanup();
}

// Run cleanup periodically
const CLEANUP_INTERVAL = 5 * 60 * 1000; // 5 minutes
const cleanupTimer = setInterval(() => {
  cleanupIncompleteUploads(uploads, uploadToBatch, batchActivity)
    .catch(err => logger.error(`Cleanup failed: ${err.message}`));
}, CLEANUP_INTERVAL);

// Handle cleanup timer errors
cleanupTimer.unref(); // Don't keep process alive just for cleanup
process.on('SIGTERM', () => {
  clearInterval(cleanupTimer);
  // Final cleanup
  cleanupIncompleteUploads(uploads, uploadToBatch, batchActivity)
    .catch(err => logger.error(`Final cleanup failed: ${err.message}`));
});

/**
 * Log the current state of uploads and mappings
 * @param {string} context - The context where this log is being called from
 */
function logUploadState(context) {
  logger.debug(`Upload State [${context}]:
    Active Uploads: ${uploads.size}
    Active Batches: ${batchActivity.size}
    Folder Mappings: ${folderMappings.size}
    Upload-Batch Mappings: ${uploadToBatch.size}
  `);
}

/**
 * Validate batch ID format
 * @param {string} batchId - Batch ID to validate
 * @returns {boolean} True if valid
 */
function isValidBatchId(batchId) {
  return /^\d+-[a-z0-9]{9}$/.test(batchId);
}

// Initialize upload
router.post('/init', async (req, res) => {
  const { filename, fileSize } = req.body;
  const clientBatchId = req.headers['x-batch-id'];

  try {
    // Log request details for debugging
    if (process.env.DEBUG === 'true' || process.env.NODE_ENV === 'development') {
      logger.info(`Upload init request:
        Filename: ${filename}
        Size: ${fileSize} (${typeof fileSize})
        Batch ID: ${clientBatchId || 'none'}
      `);
    } else {
      logger.info(`Upload init request: ${filename} (${fileSize} bytes)`);
    }

    // Validate required fields with detailed errors
    if (!filename) {
      return res.status(400).json({
        error: 'Missing filename',
        details: 'The filename field is required'
      });
    }

    if (fileSize === undefined || fileSize === null) {
      return res.status(400).json({
        error: 'Missing fileSize',
        details: 'The fileSize field is required'
      });
    }

    // Convert fileSize to number if it's a string
    const size = Number(fileSize);
    if (isNaN(size) || size < 0) { // Changed from size <= 0 to allow zero-byte files
      return res.status(400).json({
        error: 'Invalid file size',
        details: `File size must be a non-negative number, received: ${fileSize} (${typeof fileSize})`
      });
    }

    // Validate file size
    const maxSizeInBytes = config.maxFileSize;
    if (size > maxSizeInBytes) {
      const message = `File size ${size} bytes exceeds limit of ${maxSizeInBytes} bytes`;
      logger.warn(message);
      return res.status(413).json({
        error: 'File too large',
        message,
        limit: maxSizeInBytes,
        limitInMB: Math.floor(maxSizeInBytes / (1024 * 1024))
      });
    }

    // Generate batch ID from header or create new one
    const batchId = req.headers['x-batch-id'] || `${Date.now()}-${crypto.randomBytes(4).toString('hex').substring(0, 9)}`;

    // Validate batch ID if provided in header
    if (req.headers['x-batch-id'] && !isValidBatchId(batchId)) {
      return res.status(400).json({
        error: 'Invalid batch ID format',
        details: `Batch ID must match format: timestamp-[9 alphanumeric chars], received: ${batchId}`
      });
    }

    // Update batch activity
    batchActivity.set(batchId, Date.now());

    // Sanitize filename and convert to forward slashes
    const safeFilename = path.normalize(filename)
      .replace(/^(\.\.(\/|\\|$))+/, '')
      .replace(/\\/g, '/')
      .replace(/^\/+/, ''); // Remove leading slashes

    // Log sanitized filename
    logger.info(`Processing upload: ${safeFilename}`);

    // Validate file extension if configured
    if (config.allowedExtensions) {
      const fileExt = path.extname(safeFilename).toLowerCase();
      if (!config.allowedExtensions.includes(fileExt)) {
        return res.status(400).json({
          error: 'File type not allowed',
          allowedExtensions: config.allowedExtensions,
          receivedExtension: fileExt
        });
      }
    }

    const uploadId = crypto.randomBytes(16).toString('hex');
    let filePath = path.join(config.uploadDir, safeFilename);
    let fileHandle;

    try {
      // Handle file/folder paths
      const pathParts = safeFilename.split('/').filter(Boolean); // Remove empty parts

      if (pathParts.length > 1) {
        // Handle files within folders
        const originalFolderName = pathParts[0];
        const folderPath = path.join(config.uploadDir, originalFolderName);
        let newFolderName = folderMappings.get(`${originalFolderName}-${batchId}`);

        if (!newFolderName) {
          try {
            // First ensure parent directories exist
            await fs.promises.mkdir(path.dirname(folderPath), { recursive: true });
            // Then try to create the target folder
            await fs.promises.mkdir(folderPath, { recursive: false });
            newFolderName = originalFolderName;
          } catch (err) {
            if (err.code === 'EEXIST') {
              const uniqueFolderPath = await getUniqueFolderPath(folderPath);
              newFolderName = path.basename(uniqueFolderPath);
              logger.info(`Folder "${originalFolderName}" exists, using "${newFolderName}"`);
            } else {
              throw err;
            }
          }

          folderMappings.set(`${originalFolderName}-${batchId}`, newFolderName);
        }

        pathParts[0] = newFolderName;
        filePath = path.join(config.uploadDir, ...pathParts);

        // Ensure all parent directories exist
        await fs.promises.mkdir(path.dirname(filePath), { recursive: true });
      }

      // Get unique file path and handle
      const result = await getUniqueFilePath(filePath);
      filePath = result.path;
      fileHandle = result.handle;

      // Create upload entry
      uploads.set(uploadId, {
        safeFilename: path.relative(config.uploadDir, filePath),
        filePath,
        fileSize: size,
        bytesReceived: 0,
        writeStream: fileHandle.createWriteStream()
      });

      // Associate upload with batch
      uploadToBatch.set(uploadId, batchId);

      logger.info(`Initialized upload for ${path.relative(config.uploadDir, filePath)} (${size} bytes)`);

      // Log state after initialization
      logUploadState('After Upload Init');

      // Handle zero-byte files immediately
      if (size === 0) {
        const upload = uploads.get(uploadId);
        upload.writeStream.end();
        uploads.delete(uploadId);
        logger.success(`Completed zero-byte file upload: ${upload.safeFilename}`);
        await sendNotification(upload.safeFilename, 0, config);
      }

      // Send response
      return res.json({ uploadId });

    } catch (err) {
      if (fileHandle) {
        await fileHandle.close().catch(() => {});
        fs.promises.unlink(filePath).catch(() => {});
      }
      throw err;
    }
  } catch (err) {
    logger.error(`Upload initialization failed:
      Error: ${err.message}
      Stack: ${err.stack}
      Filename: ${filename}
      Size: ${fileSize}
      Batch ID: ${clientBatchId || 'none'}
    `);
    return res.status(500).json({
      error: 'Failed to initialize upload',
      details: err.message
    });
  }
});

// Upload chunk
router.post('/chunk/:uploadId', express.raw({
  limit: '10mb',
  type: 'application/octet-stream'
}), async (req, res) => {
  const { uploadId } = req.params;
  const upload = uploads.get(uploadId);
  const chunkSize = req.body.length;
  const batchId = req.headers['x-batch-id'];

  if (!upload) {
    logger.warn(`Upload not found: ${uploadId}, Batch ID: ${batchId || 'none'}`);
    return res.status(404).json({ error: 'Upload not found' });
  }

  try {
    // Update batch activity if batch ID provided
    if (batchId && isValidBatchId(batchId)) {
      batchActivity.set(batchId, Date.now());
    }

    // Write chunk
    await new Promise((resolve, reject) => {
      upload.writeStream.write(Buffer.from(req.body), (err) => {
        if (err) reject(err);
        else resolve();
      });
    });
    upload.bytesReceived += chunkSize;

    // Calculate progress, ensuring it doesn't exceed 100%
    const progress = Math.min(
      Math.round((upload.bytesReceived / upload.fileSize) * 100),
      100
    );

    logger.debug(`Chunk received:
      File: ${upload.safeFilename}
      Progress: ${progress}%
      Bytes Received: ${upload.bytesReceived}/${upload.fileSize}
      Chunk Size: ${chunkSize}
      Upload ID: ${uploadId}
      Batch ID: ${batchId || 'none'}
    `);

    // Check if upload is complete
    if (upload.bytesReceived >= upload.fileSize) {
      await new Promise((resolve, reject) => {
        upload.writeStream.end((err) => {
          if (err) reject(err);
          else resolve();
        });
      });
      uploads.delete(uploadId);

      // Format completion message based on debug mode
      if (process.env.DEBUG === 'true' || process.env.NODE_ENV === 'development') {
        logger.success(`Upload completed:
          File: ${upload.safeFilename}
          Size: ${upload.fileSize}
          Upload ID: ${uploadId}
          Batch ID: ${batchId || 'none'}
        `);
      } else {
        logger.success(`Upload completed: ${upload.safeFilename} (${upload.fileSize} bytes)`);
      }

      // Send notification
      await sendNotification(upload.safeFilename, upload.fileSize, config);
      logUploadState('After Upload Complete');
    }

    res.json({
      bytesReceived: upload.bytesReceived,
      progress
    });
  } catch (err) {
    logger.error(`Chunk upload failed:
      Error: ${err.message}
      Stack: ${err.stack}
      File: ${upload.safeFilename}
      Upload ID: ${uploadId}
      Batch ID: ${batchId || 'none'}
      Bytes Received: ${upload.bytesReceived}/${upload.fileSize}
    `);
    res.status(500).json({ error: 'Failed to process chunk' });
  }
});

// Cancel upload
router.post('/cancel/:uploadId', async (req, res) => {
  const { uploadId } = req.params;
  const upload = uploads.get(uploadId);

  if (upload) {
    upload.writeStream.end();
    try {
      await fs.promises.unlink(upload.filePath);
    } catch (err) {
      logger.error(`Failed to delete incomplete upload: ${err.message}`);
    }
    uploads.delete(uploadId);
    uploadToBatch.delete(uploadId);
    logger.info(`Upload cancelled: ${upload.safeFilename}`);
  }

  res.json({ message: 'Upload cancelled' });
});

module.exports = {
  router,
  startBatchCleanup,
  stopBatchCleanup,
  // Export for testing
  batchActivity,
  BATCH_TIMEOUT
};
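Putting the init and chunk endpoints together, a hedged client sketch of a chunked upload. The 5MB chunk size is an assumption chosen to stay under the route's 10mb raw-body limit, and `file` is assumed to be a browser File/Blob; this is not shipped client code.

async function uploadFile(file) {
  // 1. Initialize and obtain an upload ID.
  const { uploadId } = await (await fetch('/api/upload/init', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filename: file.name, fileSize: file.size })
  })).json();

  // 2. Stream the file in raw chunks; each response reports progress.
  const CHUNK = 5 * 1024 * 1024; // assumption: 5MB, below the 10mb body limit
  for (let offset = 0; offset < file.size; offset += CHUNK) {
    const res = await fetch(`/api/upload/chunk/${uploadId}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/octet-stream' },
      body: file.slice(offset, offset + CHUNK)
    });
    const { progress } = await res.json();
    console.log(`progress: ${progress}%`);
  }
}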
117  src/server.js  Normal file
@@ -0,0 +1,117 @@
/**
 * Server entry point that starts the HTTP server and manages connections.
 * Handles graceful shutdown, connection tracking, and server initialization.
 * Provides development mode directory listing functionality.
 */

const { app, initialize, config } = require('./app');
const logger = require('./utils/logger');
const fs = require('fs');
const { executeCleanup } = require('./utils/cleanup');

// Track open connections
const connections = new Set();

/**
 * Start the server and initialize the application
 * @returns {Promise<http.Server>} The HTTP server instance
 */
async function startServer() {
  try {
    // Initialize the application
    await initialize();

    // Start the server
    const server = app.listen(config.port, () => {
      logger.info(`Server running at http://localhost:${config.port}`);
      logger.info(`Upload directory: ${config.uploadDisplayPath}`);

      // List directory contents in development
      if (config.nodeEnv === 'development') {
        try {
          const files = fs.readdirSync(config.uploadDir);
          logger.info(`Current directory contents (${files.length} files):`);
          files.forEach(file => {
            logger.info(`- ${file}`);
          });
        } catch (err) {
          logger.error(`Failed to list directory contents: ${err.message}`);
        }
      }
    });

    // Track new connections
    server.on('connection', (connection) => {
      connections.add(connection);
      connection.on('close', () => {
        connections.delete(connection);
      });
    });

    // Shutdown handler function
    const shutdownHandler = async (signal) => {
      logger.info(`${signal} received. Shutting down gracefully...`);

      // Start a shorter force shutdown timer
      const forceShutdownTimer = setTimeout(() => {
        logger.error('Force shutdown initiated');
        throw new Error('Force shutdown due to timeout');
      }, 3000); // 3 seconds maximum for total shutdown

      try {
        // 1. Stop accepting new connections immediately
        server.unref();

        // 2. Close all existing connections with a shorter timeout
        const connectionClosePromises = Array.from(connections).map(conn => {
          return new Promise(resolve => {
            conn.end(() => {
              connections.delete(conn);
              resolve();
            });
          });
        });

        // Wait for connections to close with a timeout
        await Promise.race([
          Promise.all(connectionClosePromises),
          new Promise(resolve => setTimeout(resolve, 1000)) // 1 second timeout for connections
        ]);

        // 3. Close the server
        await new Promise((resolve) => server.close(resolve));
        logger.info('Server closed');

        // 4. Run cleanup tasks with a shorter timeout
        await executeCleanup(1000); // 1 second timeout for cleanup

        // Clear the force shutdown timer since we completed gracefully
        clearTimeout(forceShutdownTimer);
        process.exitCode = 0;
      } catch (error) {
        logger.error(`Error during shutdown: ${error.message}`);
        throw error;
      }
    };

    // Handle both SIGTERM and SIGINT
    process.on('SIGTERM', () => shutdownHandler('SIGTERM'));
    process.on('SIGINT', () => shutdownHandler('SIGINT'));

    return server;
  } catch (error) {
    logger.error('Failed to start server:', error);
    throw error;
  }
}

// Only start the server if this file is run directly
if (require.main === module) {
  startServer().catch((error) => {
    logger.error('Server failed to start:', error);
    process.exitCode = 1;
    throw error;
  });
}

module.exports = { app, startServer };
52  src/services/notifications.js  Normal file
@@ -0,0 +1,52 @@
/**
 * Notification service for file upload events.
 * Integrates with Apprise for sending notifications about uploads.
 * Handles message formatting and notification delivery.
 */

const { exec } = require('child_process');
const util = require('util');
const { formatFileSize, calculateDirectorySize } = require('../utils/fileUtils');
const logger = require('../utils/logger');

const execAsync = util.promisify(exec);

/**
 * Send a notification using Apprise
 * @param {string} filename - Name of uploaded file
 * @param {number} fileSize - Size of uploaded file in bytes
 * @param {Object} config - Configuration object
 * @returns {Promise<void>}
 */
async function sendNotification(filename, fileSize, config) {
  const { APPRISE_URL, APPRISE_MESSAGE, APPRISE_SIZE_UNIT, uploadDir } = config;

  if (!APPRISE_URL) {
    return;
  }

  try {
    const formattedSize = formatFileSize(fileSize, APPRISE_SIZE_UNIT);
    const dirSize = await calculateDirectorySize(uploadDir);
    const totalStorage = formatFileSize(dirSize);

    // Sanitize the message components
    const sanitizedFilename = JSON.stringify(filename).slice(1, -1);
    const message = APPRISE_MESSAGE
      .replace('{filename}', sanitizedFilename)
      .replace('{size}', formattedSize)
      .replace('{storage}', totalStorage);

    // Use string command for better escaping
    const command = `apprise ${APPRISE_URL} -b "${message}"`;
    await execAsync(command, { shell: true });

    logger.info(`Notification sent for: ${sanitizedFilename} (${formattedSize}, Total storage: ${totalStorage})`);
  } catch (err) {
    logger.error(`Failed to send notification: ${err.message}`);
  }
}

module.exports = {
  sendNotification
};
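Note that the command string above relies on the double quotes around ${message} holding up. As a non-authoritative alternative sketch (not what this commit ships), execFile with an argument array would hand the message to apprise without any shell interpretation:

const { execFile } = require('child_process');
const execFileAsync = require('util').promisify(execFile);

// Alternative sketch: arguments are passed verbatim, no shell quoting involved.
await execFileAsync('apprise', [APPRISE_URL, '-b', message]);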
175  src/utils/cleanup.js  Normal file
@@ -0,0 +1,175 @@
/**
 * Cleanup utilities for managing application resources.
 * Handles incomplete uploads, empty folders, and shutdown tasks.
 * Provides cleanup task registration and execution system.
 */

const fs = require('fs');
const path = require('path');
const logger = require('./logger');
const { config } = require('../config');

/**
 * Stores cleanup tasks that need to be run during shutdown
 * @type {Set<Function>}
 */
const cleanupTasks = new Set();

/**
 * Register a cleanup task to be executed during shutdown
 * @param {Function} task - Async function to be executed during cleanup
 */
function registerCleanupTask(task) {
  cleanupTasks.add(task);
}

/**
 * Remove a cleanup task
 * @param {Function} task - Task to remove
 */
function removeCleanupTask(task) {
  cleanupTasks.delete(task);
}

/**
 * Execute all registered cleanup tasks
 * @param {number} [timeout=1000] - Maximum time in ms to wait for cleanup
 * @returns {Promise<void>}
 */
async function executeCleanup(timeout = 1000) {
  const taskCount = cleanupTasks.size;
  if (taskCount === 0) {
    logger.info('No cleanup tasks to execute');
    return;
  }

  logger.info(`Executing ${taskCount} cleanup tasks...`);

  try {
    // Run all cleanup tasks in parallel with timeout
    await Promise.race([
      Promise.all(
        Array.from(cleanupTasks).map(async (task) => {
          try {
            await Promise.race([
              task(),
              new Promise((_, reject) =>
                setTimeout(() => reject(new Error('Task timeout')), timeout / 2)
              )
            ]);
          } catch (error) {
            if (error.message === 'Task timeout') {
              logger.warn('Cleanup task timed out');
            } else {
              logger.error(`Cleanup task failed: ${error.message}`);
            }
          }
        })
      ),
      new Promise((_, reject) =>
        setTimeout(() => reject(new Error('Global timeout')), timeout)
      )
    ]);

    logger.info('Cleanup completed successfully');
  } catch (error) {
    if (error.message === 'Global timeout') {
      logger.warn(`Cleanup timed out after ${timeout}ms`);
    } else {
      logger.error(`Cleanup failed: ${error.message}`);
    }
  } finally {
    // Clear all tasks regardless of success/failure
    cleanupTasks.clear();
  }
}

/**
 * Clean up incomplete uploads and temporary files
 * @param {Map} uploads - Map of active uploads
 * @param {Map} uploadToBatch - Map of upload IDs to batch IDs
 * @param {Map} batchActivity - Map of batch IDs to last activity timestamp
 */
async function cleanupIncompleteUploads(uploads, uploadToBatch, batchActivity) {
  try {
    // Get current time
    const now = Date.now();
    const inactivityThreshold = config.uploadTimeout || 30 * 60 * 1000; // 30 minutes default

    // Check each upload
    for (const [uploadId, upload] of uploads.entries()) {
      try {
        const batchId = uploadToBatch.get(uploadId);
        const lastActivity = batchActivity.get(batchId);

        // If upload is inactive for too long
        if (now - lastActivity > inactivityThreshold) {
          // Close write stream
          if (upload.writeStream) {
            await new Promise((resolve) => {
              upload.writeStream.end(() => resolve());
            });
          }

          // Delete incomplete file
          try {
            await fs.promises.unlink(upload.filePath);
            logger.info(`Cleaned up incomplete upload: ${upload.safeFilename}`);
          } catch (err) {
            if (err.code !== 'ENOENT') {
              logger.error(`Failed to delete incomplete upload ${upload.safeFilename}: ${err.message}`);
            }
          }

          // Remove from maps
          uploads.delete(uploadId);
          uploadToBatch.delete(uploadId);
        }
      } catch (err) {
        logger.error(`Error cleaning up upload ${uploadId}: ${err.message}`);
      }
    }

    // Clean up empty folders
    await cleanupEmptyFolders(config.uploadDir);

  } catch (err) {
    logger.error(`Cleanup error: ${err.message}`);
  }
}

/**
 * Recursively remove empty folders
 * @param {string} dir - Directory to clean
 */
async function cleanupEmptyFolders(dir) {
  try {
    const files = await fs.promises.readdir(dir);

    for (const file of files) {
      const fullPath = path.join(dir, file);
      const stats = await fs.promises.stat(fullPath);

      if (stats.isDirectory()) {
        await cleanupEmptyFolders(fullPath);

        // Check if directory is empty after cleaning subdirectories
        const remaining = await fs.promises.readdir(fullPath);
        if (remaining.length === 0) {
          await fs.promises.rmdir(fullPath);
          logger.info(`Removed empty directory: ${fullPath}`);
        }
      }
    }
  } catch (err) {
    logger.error(`Failed to clean empty folders: ${err.message}`);
  }
}

module.exports = {
  registerCleanupTask,
  removeCleanupTask,
  executeCleanup,
  cleanupIncompleteUploads,
  cleanupEmptyFolders
};
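Usage sketch for the task registry above; the `db` handle is hypothetical and stands in for any async resource that should be released during graceful shutdown.

const { registerCleanupTask } = require('./src/utils/cleanup');

registerCleanupTask(async () => {
  await db.close(); // hypothetical resource; any async teardown works here
});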
169  src/utils/fileUtils.js  Normal file
@@ -0,0 +1,169 @@
/**
 * File system utility functions for file operations.
 * Handles file paths, sizes, directory operations, and path mapping.
 * Provides helper functions for file system operations.
 */

const fs = require('fs');
const path = require('path');
const logger = require('./logger');
const { config } = require('../config');

/**
 * Get display path for logs
 * @param {string} internalPath - Internal Docker path
 * @returns {string} Display path for host machine
 */
function getDisplayPath(internalPath) {
  if (!internalPath.startsWith(config.uploadDir)) return internalPath;

  // Replace the container path with the host path
  const relativePath = path.relative(config.uploadDir, internalPath);
  return path.join(config.uploadDisplayPath, relativePath);
}

/**
 * Format file size to human readable format
 * @param {number} bytes - Size in bytes
 * @param {string} [unit] - Force specific unit (B, KB, MB, GB, TB)
 * @returns {string} Formatted size with unit
 */
function formatFileSize(bytes, unit = null) {
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  let size = bytes;
  let unitIndex = 0;

  // If a specific unit is requested
  if (unit) {
    const requestedUnit = unit.toUpperCase();
    const unitIndex = units.indexOf(requestedUnit);
    if (unitIndex !== -1) {
      size = bytes / Math.pow(1024, unitIndex);
      return size.toFixed(2) + requestedUnit;
    }
  }

  // Auto format to nearest unit
  while (size >= 1024 && unitIndex < units.length - 1) {
    size /= 1024;
    unitIndex++;
  }

  return size.toFixed(2) + units[unitIndex];
}

/**
 * Calculate total size of files in a directory recursively
 * @param {string} directoryPath - Path to directory
 * @returns {Promise<number>} Total size in bytes
 */
async function calculateDirectorySize(directoryPath) {
  let totalSize = 0;
  try {
    const files = await fs.promises.readdir(directoryPath);
    const fileSizePromises = files.map(async file => {
      const filePath = path.join(directoryPath, file);
      const stats = await fs.promises.stat(filePath);
      if (stats.isFile()) {
        return stats.size;
      } else if (stats.isDirectory()) {
        // Recursively calculate size for subdirectories
        return await calculateDirectorySize(filePath);
      }
      return 0;
    });

    const sizes = await Promise.all(fileSizePromises);
    totalSize = sizes.reduce((acc, size) => acc + size, 0);
  } catch (err) {
    logger.error(`Failed to calculate directory size: ${err.message}`);
  }
  return totalSize;
}

/**
 * Ensure a directory exists and is writable
 * @param {string} directoryPath - Path to directory
 * @returns {Promise<void>}
 */
async function ensureDirectoryExists(directoryPath) {
  try {
    if (!fs.existsSync(directoryPath)) {
      await fs.promises.mkdir(directoryPath, { recursive: true });
      logger.info(`Created directory: ${getDisplayPath(directoryPath)}`);
    }
    await fs.promises.access(directoryPath, fs.constants.W_OK);
    logger.success(`Directory is writable: ${getDisplayPath(directoryPath)}`);
  } catch (err) {
    logger.error(`Directory error: ${err.message}`);
    throw new Error(`Failed to access or create directory: ${getDisplayPath(directoryPath)}`);
  }
}

/**
 * Get a unique file path by appending numbers if file exists
 * @param {string} filePath - Original file path
 * @returns {Promise<{path: string, handle: FileHandle}>} Unique path and file handle
 */
async function getUniqueFilePath(filePath) {
  const dir = path.dirname(filePath);
  const ext = path.extname(filePath);
  const baseName = path.basename(filePath, ext);
  let counter = 1;
  let finalPath = filePath;
  let fileHandle = null;

  // Try until we find a unique path or hit an error
  let pathFound = false;
  while (!pathFound) {
    try {
      fileHandle = await fs.promises.open(finalPath, 'wx');
      pathFound = true;
    } catch (err) {
      if (err.code === 'EEXIST') {
        finalPath = path.join(dir, `${baseName} (${counter})${ext}`);
        counter++;
      } else {
        throw err;
      }
    }
  }

  // Log using display path
  logger.info(`Using unique path: ${getDisplayPath(finalPath)}`);
  return { path: finalPath, handle: fileHandle };
}

/**
 * Get a unique folder path by appending numbers if folder exists
 * @param {string} folderPath - Original folder path
 * @returns {Promise<string>} Unique folder path
 */
async function getUniqueFolderPath(folderPath) {
  let counter = 1;
  let finalPath = folderPath;
  let pathFound = false;

  while (!pathFound) {
    try {
      await fs.promises.mkdir(finalPath, { recursive: false });
      pathFound = true;
    } catch (err) {
      if (err.code === 'EEXIST') {
        finalPath = `${folderPath} (${counter})`;
        counter++;
      } else {
        throw err;
      }
    }
  }
  return finalPath;
}

module.exports = {
  formatFileSize,
  calculateDirectorySize,
  ensureDirectoryExists,
  getUniqueFilePath,
  getUniqueFolderPath
};
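A few worked examples of formatFileSize, following directly from the implementation above:

formatFileSize(1536);          // "1.50KB"   (auto-scaled to the nearest unit)
formatFileSize(1536, 'b');     // "1536.00B" (forced unit, case-insensitive)
formatFileSize(1073741824);    // "1.00GB"   (1024^3 bytes)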
45  src/utils/logger.js  Normal file
@@ -0,0 +1,45 @@
/**
 * Logger utility for consistent logging across the application
 * Provides standardized timestamp and log level formatting
 */

// Debug mode can be enabled via environment variable
const DEBUG_MODE = process.env.DEBUG === 'true' || process.env.NODE_ENV === 'development';

const logger = {
  /**
   * Log debug message (only in debug mode)
   * @param {string} msg - Message to log
   */
  debug: (msg) => {
    if (DEBUG_MODE) {
      console.log(`[DEBUG] ${new Date().toISOString()} - ${msg}`);
    }
  },

  /**
   * Log warning message
   * @param {string} msg - Message to log
   */
  warn: (msg) => console.warn(`[WARN] ${new Date().toISOString()} - ${msg}`),

  /**
   * Log informational message
   * @param {string} msg - Message to log
   */
  info: (msg) => console.log(`[INFO] ${new Date().toISOString()} - ${msg}`),

  /**
   * Log error message
   * @param {string} msg - Message to log
   */
  error: (msg) => console.error(`[ERROR] ${new Date().toISOString()} - ${msg}`),

  /**
   * Log success message
   * @param {string} msg - Message to log
   */
  success: (msg) => console.log(`[SUCCESS] ${new Date().toISOString()} - ${msg}`)
};

module.exports = logger;
148  src/utils/security.js  Normal file
@@ -0,0 +1,148 @@
/**
 * Core security utilities for authentication and protection.
 * Implements rate limiting, PIN validation, and secure string comparison.
 * Manages login attempts and security-related cleanup tasks.
 */

const crypto = require('crypto');
const logger = require('./logger');

/**
 * Store for login attempts with rate limiting
 * @type {Map<string, {count: number, lastAttempt: number}>}
 */
const loginAttempts = new Map();

// Constants
const MAX_ATTEMPTS = 5;
const LOCKOUT_DURATION = 15 * 60 * 1000; // 15 minutes

let cleanupInterval;

/**
 * Start the cleanup interval for old lockouts
 * @returns {NodeJS.Timeout} The interval handle
 */
function startCleanupInterval() {
  if (cleanupInterval) {
    clearInterval(cleanupInterval);
  }

  cleanupInterval = setInterval(() => {
    const now = Date.now();
    let cleaned = 0;
    for (const [ip, attempts] of loginAttempts.entries()) {
      if (now - attempts.lastAttempt >= LOCKOUT_DURATION) {
        loginAttempts.delete(ip);
        cleaned++;
      }
    }
    if (cleaned > 0) {
      logger.info(`Cleaned up ${cleaned} expired lockouts`);
    }
  }, 60000); // Check every minute

  return cleanupInterval;
}

/**
 * Stop the cleanup interval
 */
function stopCleanupInterval() {
  if (cleanupInterval) {
    clearInterval(cleanupInterval);
    cleanupInterval = null;
  }
}

// Start cleanup interval unless disabled
if (!process.env.DISABLE_SECURITY_CLEANUP) {
  startCleanupInterval();
}

/**
 * Reset login attempts for an IP
 * @param {string} ip - IP address
 */
function resetAttempts(ip) {
  loginAttempts.delete(ip);
  logger.info(`Reset login attempts for IP: ${ip}`);
}

/**
 * Check if an IP is locked out
 * @param {string} ip - IP address
 * @returns {boolean} True if IP is locked out
 */
function isLockedOut(ip) {
  const attempts = loginAttempts.get(ip);
  if (!attempts) return false;

  if (attempts.count >= MAX_ATTEMPTS) {
    const timeElapsed = Date.now() - attempts.lastAttempt;
    if (timeElapsed < LOCKOUT_DURATION) {
      return true;
    }
    resetAttempts(ip);
  }
  return false;
}

/**
 * Record a login attempt for an IP
 * @param {string} ip - IP address
 * @returns {{count: number, lastAttempt: number}} Attempt details
 */
function recordAttempt(ip) {
  const attempts = loginAttempts.get(ip) || { count: 0, lastAttempt: 0 };
  attempts.count += 1;
  attempts.lastAttempt = Date.now();
  loginAttempts.set(ip, attempts);
  logger.warn(`Recorded failed login attempt for IP: ${ip} (attempt ${attempts.count})`);
  return attempts;
}

/**
 * Validate and clean PIN
 * @param {string} pin - PIN to validate
 * @returns {string|null} Cleaned PIN or null if invalid
 */
function validatePin(pin) {
  if (!pin || typeof pin !== 'string') return null;
  const cleanPin = pin.replace(/\D/g, '');
  return cleanPin.length >= 4 && cleanPin.length <= 10 ? cleanPin : null;
}

/**
 * Compare two strings in constant time
 * @param {string} a - First string
 * @param {string} b - Second string
 * @returns {boolean} True if strings match
 */
function safeCompare(a, b) {
  if (typeof a !== 'string' || typeof b !== 'string') {
    return false;
  }

  try {
    return crypto.timingSafeEqual(
      Buffer.from(a.padEnd(32)),
      Buffer.from(b.padEnd(32))
    );
  } catch (err) {
    logger.error(`Safe compare error: ${err.message}`);
    return false;
  }
}

module.exports = {
  MAX_ATTEMPTS,
  LOCKOUT_DURATION,
  resetAttempts,
  isLockedOut,
  recordAttempt,
  validatePin,
  safeCompare,
  startCleanupInterval,
  stopCleanupInterval
};
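Worked examples for the two validation helpers, following directly from the code above:

validatePin('12-34');          // "1234" (non-digits stripped, length ok)
validatePin('12');             // null   (fewer than 4 digits)
safeCompare('1234', '1234');   // true, compared in constant time
safeCompare('1234', '1235');   // false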