10 Commits

Author SHA1 Message Date
Greirson Lee-Thorp
105d2a7412 feat(upload): Implement persistent state via metadata for resumability (#50)
* feat: Enhance chunk upload functionality with configurable retry logic

- Introduced MAX_RETRIES configuration to allow dynamic adjustment of retry attempts for chunk uploads.
- Updated index.html to read MAX_RETRIES from server-side configuration, providing a default value if not set.
- Implemented retry logic in uploadChunkWithRetry method, including exponential backoff and error handling for network issues.
- Added console warnings for invalid or missing MAX_RETRIES values to improve debugging.

This commit improves the robustness of file uploads by allowing configurable retry behavior, enhancing user experience during upload failures.
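A minimal sketch of the retry-with-backoff pattern described above (the helper name and the 30-second cap mirror the client code shown later in this diff; treat it as an illustration, not the exact implementation):

```javascript
// Sketch: retry an async chunk upload with exponential backoff.
async function uploadChunkWithRetry(uploadChunk, chunk, maxRetries = 5, retryDelay = 1000) {
  let lastError = null;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await uploadChunk(chunk); // success: stop retrying
    } catch (error) {
      lastError = error; // network failure or non-OK response
      if (attempt < maxRetries) {
        // Exponential backoff: 1s, 2s, 4s, ... capped at 30s
        const delay = Math.min(retryDelay * Math.pow(2, attempt), 30000);
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError; // all attempts failed
}
```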

* feat: Enhance upload functionality with metadata management and improved error handling

- Introduced persistent metadata management for uploads, allowing resumability and better tracking of upload states.
- Added special handling for 404 responses during chunk uploads, logging warnings and marking uploads as complete if previously finished.
- Implemented metadata directory creation and validation in app.js to ensure proper upload management.
- Updated upload.js to include metadata read/write functions, improving the robustness of the upload process.
- Enhanced cleanup routines to handle stale metadata and incomplete uploads, ensuring a cleaner state.

This commit significantly improves the upload process by adding metadata support, enhancing error handling, and ensuring better resource management during uploads.
2025-05-04 11:33:01 -07:00
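The src/routes/upload.js diff that carries most of this change is truncated at the end of this page, so the following is only a hypothetical sketch of the metadata read/write helpers the message describes; the `.metadata` location matches the `METADATA_DIR` added in src/app.js below, while the state fields are assumptions:

```javascript
const fsPromises = require('fs').promises;
const path = require('path');

// Hypothetical sketch: persist per-upload state so an interrupted
// upload can be resumed after a restart.
function metadataPath(metadataDir, uploadId) {
  return path.join(metadataDir, `${uploadId}.json`);
}

async function writeUploadMetadata(metadataDir, uploadId, state) {
  // state could hold { filePath, bytesReceived, totalSize, completed } (assumed fields)
  await fsPromises.writeFile(metadataPath(metadataDir, uploadId), JSON.stringify(state));
}

async function readUploadMetadata(metadataDir, uploadId) {
  try {
    const raw = await fsPromises.readFile(metadataPath(metadataDir, uploadId), 'utf8');
    return JSON.parse(raw);
  } catch {
    return null; // no metadata on disk: treat as a fresh upload
  }
}
```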
Greirson Lee-Thorp
e963f2bcde feat: Improve dev experience, improve environment variable and folder control, resolves BASE_URL junk (#49)
* feat: Add ALLOWED_IFRAME_ORIGINS configuration and update security headers (#47)

- Introduced ALLOWED_IFRAME_ORIGINS environment variable to specify trusted origins for iframe embedding.
- Updated security headers middleware to conditionally allow specified origins in Content Security Policy.
- Enhanced documentation in README.md to explain the new configuration and its security implications.

Fixes #35

* feat: Update .env.example and .gitignore for improved configuration management

- Enhanced .env.example with detailed comments for environment variables, including upload settings, security options, and notification configurations.
- Updated .gitignore to include additional editor and OS-specific files, ensuring a cleaner repository.
- Modified package.json to add a predev script for Node.js version validation and adjusted the dev script for nodemon.
- Improved server.js shutdown handling to prevent multiple shutdowns and ensure graceful exits.
- Refactored config/index.js to log loaded environment variables and ensure the upload directory exists based on environment settings.
- Cleaned up fileUtils.js by removing unused functions and improving logging for directory creation.

This commit enhances clarity and maintainability of configuration settings and improves application shutdown behavior.

* feat: Update Docker configuration and documentation for upload handling

- Explicitly set the upload directory environment variable in docker-compose.yml to ensure clarity in file storage.
- Simplified the Dockerfile by removing the creation of the local_uploads directory, as it is now managed by the host system.
- Enhanced README.md to reflect changes in upload directory management and provide clearer instructions for users.
- Removed outdated development configuration files to streamline the development setup.

This commit improves the clarity and usability of the Docker setup for file uploads.

* feat: Add Local Development Guide and update README for clarity

- Introduced a comprehensive LOCAL_DEVELOPMENT.md file with setup instructions, testing guidelines, and troubleshooting tips for local development.
- Updated README.md to include a link to the new Local Development Guide and revised sections for clarity regarding upload directory management.
- Enhanced the Quick Start section to direct users to the dedicated local development documentation.

This commit improves the onboarding experience for developers and provides clear instructions for local setup.

* feat: Implement BASE_URL configuration for asset management and API requests

- Added BASE_URL configuration to README.md, emphasizing the need for a trailing slash when deploying under a subpath.
- Updated index.html and login.html to utilize BASE_URL for linking stylesheets, icons, and API requests, ensuring correct asset loading.
- Enhanced app.js to replace placeholders with the actual BASE_URL during HTML rendering.
- Implemented a validation check in config/index.js to ensure BASE_URL is a valid URL and ends with a trailing slash.

This commit improves the flexibility of the application for different deployment scenarios and enhances asset management.

Fixes #34, Fixes #39, Fixes #38

* Update app.js, borked some of the CSS and such

* Resolved BASE_URL breaking the frontend

* fix: Update BASE_URL handling and security headers

- Ensured BASE_URL has a trailing slash in app.js to prevent asset loading issues.
- Refactored index.html and login.html to remove leading slashes from API paths for correct concatenation with BASE_URL.
- Enhanced security headers middleware to include 'connect-src' directive in Content Security Policy.

This commit addresses issues with asset management and improves security configurations.
2025-05-04 10:29:48 -07:00
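The trailing-slash rule this commit converges on boils down to a small client-side pattern (a condensed sketch of what the index.html diff below does inline):

```javascript
// Sketch: BASE_URL-aware fetch helper. Assumes the server injects
// window.BASE_URL with a trailing slash, e.g. 'https://example.com/watchfolder/'.
function apiFetch(apiPath, options) {
  // Strip any leading slash so concatenation cannot double a slash
  // or resolve against the domain root and lose the subpath.
  const relative = apiPath.startsWith('/') ? apiPath.slice(1) : apiPath;
  return fetch(window.BASE_URL + relative, options);
}

// apiFetch('/api/upload/init', { method: 'POST' })
// -> POST https://example.com/watchfolder/api/upload/init
```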
Greirson Lee-Thorp
107684fe6a feat: Add ALLOWED_IFRAME_ORIGINS configuration and update security headers (#47) (#48)
- Introduced ALLOWED_IFRAME_ORIGINS environment variable to specify trusted origins for iframe embedding.
- Updated security headers middleware to conditionally allow specified origins in Content Security Policy.
- Enhanced documentation in README.md to explain the new configuration and its security implications.

Fixes #35
2025-05-02 17:25:27 -07:00
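Mechanically (per the src/middleware/security.js diff later on), the change swaps `X-Frame-Options` for a CSP `frame-ancestors` directive when origins are configured; a condensed sketch:

```javascript
// Sketch: conditional iframe-embedding policy.
function applyFramePolicy(res, allowedIframeOrigins) {
  let csp = "default-src 'self';";
  if (allowedIframeOrigins && allowedIframeOrigins.length > 0) {
    // Allow the listed origins (plus self) to embed the app. X-Frame-Options
    // is deliberately omitted: it cannot express an origin allowlist.
    csp += ` frame-ancestors 'self' ${allowedIframeOrigins.join(' ')};`;
  } else {
    // Default: same-origin embedding only.
    res.setHeader('X-Frame-Options', 'SAMEORIGIN');
  }
  res.setHeader('Content-Security-Policy', csp);
}
```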
V
12ae628bd4 Merge pull request #46 from DumbWareio/greirson/issue45
Tested and working.

tree
├── test
│   ├── dumb.png
│   ├── dumb.txt
│   └── test2
│       ├── dumb.png
│       └── dumb.txt
2025-05-02 15:13:17 -07:00
greirson
ccd06f92bb feat: Enhance folder upload handling and filename sanitization
- Added support for checking webkitRelativePath in folder uploads, alerting users if their browser does not support this feature.
- Introduced sanitizePathPreserveDirs function to sanitize filenames while preserving directory structure.
- Updated upload route to utilize the new sanitization function and ensure consistent folder naming during uploads.

Fixes #45
2025-05-02 14:38:28 -07:00
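The `sanitizePathPreserveDirs` implementation is not included in this diff view, so the following is only a hypothetical sketch of the idea: sanitize each segment of the `webkitRelativePath` individually so the folder structure survives while traversal and unsafe characters do not (the exact character rules here are assumptions):

```javascript
// Hypothetical sketch: sanitize a relative upload path segment by segment.
function sanitizePathPreserveDirs(relativePath) {
  return relativePath
    .split('/')
    .filter((segment) => segment && segment !== '.' && segment !== '..') // block traversal
    .map((segment) => segment.replace(/[<>:"\\|?*\x00-\x1f]/g, '_')) // assumed character rules
    .join('/');
}

// sanitizePathPreserveDirs('test/../dumb.png') -> 'test/dumb.png' ('..' dropped)
// sanitizePathPreserveDirs('test/test2/dumb.txt') -> 'test/test2/dumb.txt' (structure kept)
```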
abite
8f4b2ea873 Merge pull request #41 from gitmotion/fix/apprise-notifications-not-working-and-cve-fix
Fixed notifications config mapping and filename sanitization; use of spawn for CVE/RCE
2025-03-13 17:58:37 -04:00
gitmotion
e11c9261f7 Fixed notifications config mapping and filename sanitization for CVE/RCE
Add SVG favicon to login/index pages

ensure file sanitization before and during notification
2025-03-13 14:24:03 -07:00
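The "use of spawn" part addresses shell injection: passing a user-controlled filename through a shell (e.g. `exec`) lets metacharacters run as code, while `spawn` with an argument array never invokes a shell. A minimal sketch (the exact Apprise invocation here is an assumption):

```javascript
const { spawn } = require('child_process');

// Sketch: send an Apprise notification without a shell. Each argument is
// passed verbatim, so a filename like `x"; rm -rf ~; "` stays data, not code.
function sendAppriseNotification(appriseUrl, message) {
  const proc = spawn('apprise', ['-b', message, appriseUrl]);
  proc.on('error', (err) => console.error('apprise failed:', err.message));
  return proc;
}
```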
abite
81baf87e93 Merge pull request #40 from gitmotion/feature/add-pwa-registration
Add PWA Registration
2025-03-12 16:22:25 -05:00
gitmotion
c4a806604a Add PWA Registration 2025-03-12 14:00:38 -07:00
V
fc83e527b7 feat: Add Demo Mode for Testing and Evaluation (#37)
* demo things.
2025-02-27 11:25:25 -08:00
31 changed files with 1885 additions and 6630 deletions

.env.example

@@ -1,16 +1,68 @@
# Server Configuration
PORT=3000 # The port the server will listen on
BASE_URL=http://localhost:3000 # The base URL for the application
#########################################
# SERVER CONFIGURATION
#########################################
# Upload Settings
MAX_FILE_SIZE=1024 # Maximum file size in MB
AUTO_UPLOAD=false # Enable automatic upload on file selection
# Port for the server (default: 3000)
PORT=3000
# Security
DUMBDROP_PIN= # Optional PIN protection (4-10 digits)
DUMBDROP_TITLE=DumbDrop # Site title displayed in header
# Base URL for the application (default: http://localhost:PORT)
BASE_URL=http://localhost:3000/
# Notifications (Optional)
APPRISE_URL= # Apprise URL for notifications (e.g., tgram://bottoken/ChatID)
APPRISE_MESSAGE=New file uploaded - {filename} ({size}), Storage used {storage}
APPRISE_SIZE_UNIT=auto # Size unit for notifications (auto, B, KB, MB, GB, TB)
# Node environment (default: development)
NODE_ENV=development
#########################################
# FILE UPLOAD SETTINGS
#########################################
# Maximum file size in MB (default: 1024)
MAX_FILE_SIZE=1024
# Directory for uploads (Docker/production; optional)
UPLOAD_DIR=
# Directory for uploads (local dev, fallback: './local_uploads')
LOCAL_UPLOAD_DIR=./local_uploads
# Comma-separated list of allowed file extensions (optional, e.g. .jpg,.png,.pdf)
# ALLOWED_EXTENSIONS=.jpg,.png,.pdf
ALLOWED_EXTENSIONS=
#########################################
# SECURITY
#########################################
# PIN protection (4-10 digits, optional)
# DUMBDROP_PIN=1234
DUMBDROP_PIN=
#########################################
# UI SETTINGS
#########################################
# Site title displayed in header (default: DumbDrop)
DUMBDROP_TITLE=DumbDrop
#########################################
# NOTIFICATION SETTINGS
#########################################
# Apprise URL for notifications (optional)
APPRISE_URL=
# Notification message template (default: New file uploaded {filename} ({size}), Storage used {storage})
APPRISE_MESSAGE=New file uploaded {filename} ({size}), Storage used {storage}
# Size unit for notifications (B, KB, MB, GB, TB, or Auto; default: Auto)
APPRISE_SIZE_UNIT=Auto
#########################################
# ADVANCED
#########################################
# Enable automatic upload on file selection (true/false, default: false)
AUTO_UPLOAD=false
# Comma-separated list of origins allowed to embed the app in an iframe (optional)
# ALLOWED_IFRAME_ORIGINS=https://example.com,https://another.com
ALLOWED_IFRAME_ORIGINS=

.gitignore (vendored, 39 lines changed)

@@ -196,8 +196,45 @@ Thumbs.db
!uploads/.gitkeep
!local_uploads/.gitkeep
# Generated PWA Files
/public/*manifest.json
# Misc
*.log
.env.*
!.env.example
!dev/.env.dev.example
# Added by Claude Task Master
dev-debug.log
# Environment variables
# Editor directories and files
.idea
.vscode
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
# OS specific
# Task files
.windsurfrules
README-task-master.md
.cursor/mcp.json
.cursor/rules/cursor_rules.mdc
.cursor/rules/dev_workflow.mdc
.cursor/rules/self_improve.mdc
.cursor/rules/taskmaster.mdc
scripts/example_prd.txt
scripts/prd.txt
tasks/task_001.txt
tasks/task_002.txt
tasks/task_003.txt
tasks/task_004.txt
tasks/task_005.txt
tasks/task_006.txt
tasks/task_007.txt
tasks/task_008.txt
tasks/task_009.txt
tasks/task_010.txt
tasks/tasks.json

Dockerfile

@@ -32,8 +32,8 @@ ENV NODE_ENV=development
RUN npm install && \
npm cache clean --force
# Create upload directories
RUN mkdir -p uploads local_uploads
# Create upload directory
RUN mkdir -p uploads
# Copy source with specific paths to avoid unnecessary files
COPY src/ ./src/

LOCAL_DEVELOPMENT.md (new file, 122 lines)

@@ -0,0 +1,122 @@
# Local Development (Recommended Quick Start)
## Prerequisites
- **Node.js** >= 20.0.0
_Why?_: The app uses features only available in Node 20+.
- **npm** (comes with Node.js)
- **Python 3** (for notification testing, optional)
- **Apprise** (for notification testing, optional)
## Setup Instructions
1. **Clone the repository**
```bash
git clone https://github.com/yourusername/dumbdrop.git
cd dumbdrop
```
2. **Copy and configure environment variables**
```bash
cp .env.example .env
```
- Open `.env` in your editor and review the variables.
- At minimum, set:
- `PORT=3000`
- `LOCAL_UPLOAD_DIR=./local_uploads`
- `MAX_FILE_SIZE=1024`
- `DUMBDROP_PIN=` (optional, for PIN protection)
- `APPRISE_URL=` (optional, for notifications)
3. **Install dependencies**
```bash
npm install
```
4. **Start the development server**
```bash
npm run dev
```
- You should see output like:
```
DumbDrop server running on http://localhost:3000
```
5. **Open the app**
- Go to [http://localhost:3000](http://localhost:3000) in your browser.
---
## Testing File Uploads
- Drag and drop files onto the web interface.
- Supported file types: _All_, unless restricted by `ALLOWED_EXTENSIONS` in `.env`.
- Maximum file size: as set by `MAX_FILE_SIZE` (default: 1024 MB).
- Uploaded files are stored in the directory specified by `LOCAL_UPLOAD_DIR` (default: `./local_uploads`).
- To verify uploads:
- Check the `local_uploads` folder for your files.
- The UI will show a success message on upload.
---
## Notification Testing (Python/Apprise)
If you want to test notifications (e.g., for new uploads):
1. **Install Python 3**
- [Download Python](https://www.python.org/downloads/) if not already installed.
2. **Install Apprise**
```bash
pip install apprise
```
3. **Configure Apprise in `.env`**
- Set `APPRISE_URL` to your notification service URL (see [Apprise documentation](https://github.com/caronc/apprise)).
- Example for a local test:
```
APPRISE_URL=mailto://your@email.com
```
4. **Trigger a test notification**
- Upload a file via the web UI.
- If configured, you should receive a notification.
---
## Troubleshooting
**Problem:** Port already in use
**Solution:**
- Change the `PORT` in `.env` to a free port.
**Problem:** "Cannot find module 'express'"
**Solution:**
- Run `npm install` to install dependencies.
**Problem:** File uploads not working
**Solution:**
- Ensure `LOCAL_UPLOAD_DIR` exists and is writable.
- Check file size and extension restrictions in `.env`.
**Problem:** Notifications not sent
**Solution:**
- Verify `APPRISE_URL` is set and correct.
- Ensure Apprise is installed and accessible.
**Problem:** Permission denied on uploads
**Solution:**
- Make sure your user has write permissions to `local_uploads`.
**Problem:** Environment variables not loading
**Solution:**
- Double-check that `.env` exists and is formatted correctly.
- Restart the server after making changes.
---
## Additional Notes
- For Docker-based development, see the "Quick Start" and "Docker Compose" sections in the main README.
- For more advanced configuration, review the "Configuration" section in the main README.
- If you encounter issues not listed here, please open an issue on GitHub or check the Discussions tab.

README.md (109 lines changed)

@@ -8,27 +8,25 @@ No auth (unless you want it now!), no storage, no nothing. Just a simple file up
## Table of Contents
- [Quick Start](#quick-start)
- [Production Deployment with Docker](#production-deployment-with-docker)
- [Local Development (Recommended Quick Start)](LOCAL_DEVELOPMENT.md)
- [Features](#features)
- [Configuration](#configuration)
- [Security](#security)
- [Development](#development)
- [Technical Details](#technical-details)
- [Demo Mode](demo.md)
- [Contributing](#contributing)
- [License](#license)
## Quick Start
### Prerequisites
- Docker (recommended)
- Node.js >=20.0.0 (for local development)
### Option 1: Docker (For Dummies)
```bash
# Pull and run with one command
docker run -p 3000:3000 -v ./local_uploads:/app/uploads dumbwareio/dumbdrop:latest
docker run -p 3000:3000 -v ./uploads:/app/uploads dumbwareio/dumbdrop:latest
```
1. Go to http://localhost:3000
2. Upload a File - It'll show up in ./local_uploads
2. Upload a File - It'll show up in ./uploads
3. Celebrate on how dumb easy this was
### Option 2: Docker Compose (For Dummies who like customizing)
@@ -41,8 +39,10 @@ services:
- 3000:3000
volumes:
# Where your uploaded files will land
- ./local_uploads:/app/uploads
- ./uploads:/app/uploads
environment:
# Explicitly set upload directory inside the container
UPLOAD_DIR: /app/uploads
# The title shown in the web interface
DUMBDROP_TITLE: DumbDrop
# Maximum file size in MB
@@ -54,42 +54,21 @@ services:
# The base URL for the application
BASE_URL: http://localhost:3000
```
Then run:
```bash
docker compose up -d
```
1. Go to http://localhost:3000
2. Upload a File - It'll show up in ./local_uploads
2. Upload a File - It'll show up in ./uploads
3. Rejoice in the glory of your dumb uploads
> **Note:** The `UPLOAD_DIR` environment variable is now explicitly set to `/app/uploads` in the container. The Dockerfile only creates the `uploads` directory, not `local_uploads`. The host directory `./uploads` is mounted to `/app/uploads` for persistent storage.
### Option 3: Running Locally (For Developers)
> If you're a developer, check out our [Dev Guide](#development) for the dumb setup.
For local development setup, troubleshooting, and advanced usage, see the dedicated guide:
1. Install dependencies:
```bash
npm install
```
2. Set environment variables in `.env`:
```env
PORT=3000 # Port to run the server on
MAX_FILE_SIZE=1024 # Maximum file size in MB
DUMBDROP_PIN=123456 # Optional PIN protection
```
3. Start the server:
```bash
npm start
```
#### Windows Users
If you're using Windows PowerShell with Docker, use this format for paths:
```bash
docker run -p 3000:3000 -v "${PWD}\local_uploads:/app/uploads" dumbwareio/dumbdrop:latest
```
👉 [Local Development Guide](LOCAL_DEVELOPMENT.md)
## Features
@@ -110,27 +89,55 @@ docker run -p 3000:3000 -v "${PWD}\local_uploads:/app/uploads" dumbwareio/dumbdr
### Environment Variables
| Variable | Description | Default | Required |
|------------------|---------------------------------------|---------|----------|
| PORT | Server port | 3000 | No |
| BASE_URL | Base URL for the application | http://localhost:PORT | No |
| MAX_FILE_SIZE | Maximum file size in MB | 1024 | No |
| DUMBDROP_PIN | PIN protection (4-10 digits) | None | No |
| DUMBDROP_TITLE | Site title displayed in header | DumbDrop| No |
| APPRISE_URL | Apprise URL for notifications | None | No |
| APPRISE_MESSAGE | Notification message template | New file uploaded {filename} ({size}), Storage used {storage} | No |
| APPRISE_SIZE_UNIT| Size unit for notifications | Auto | No |
| AUTO_UPLOAD | Enable automatic upload on file selection | false | No |
| ALLOWED_EXTENSIONS| Comma-separated list of allowed file extensions | None | No |
| Variable | Description | Default | Required |
|------------------------|------------------------------------------------------------------|-----------------------------------------|----------|
| PORT | Server port | 3000 | No |
| BASE_URL | Base URL for the application | http://localhost:PORT | No |
| MAX_FILE_SIZE | Maximum file size in MB | 1024 | No |
| DUMBDROP_PIN | PIN protection (4-10 digits) | None | No |
| DUMBDROP_TITLE | Site title displayed in header | DumbDrop | No |
| APPRISE_URL | Apprise URL for notifications | None | No |
| APPRISE_MESSAGE | Notification message template | New file uploaded {filename} ({size}), Storage used {storage} | No |
| APPRISE_SIZE_UNIT | Size unit for notifications (B, KB, MB, GB, TB, or Auto) | Auto | No |
| AUTO_UPLOAD | Enable automatic upload on file selection | false | No |
| ALLOWED_EXTENSIONS | Comma-separated list of allowed file extensions | None | No |
| ALLOWED_IFRAME_ORIGINS | Comma-separated list of origins allowed to embed the app in an iframe | None | No |
| UPLOAD_DIR | Directory for uploads (Docker/production; should be `/app/uploads` in container) | None (see LOCAL_UPLOAD_DIR fallback) | No |
| LOCAL_UPLOAD_DIR | Directory for uploads (local dev, fallback: './local_uploads') | ./local_uploads | No |
- **UPLOAD_DIR** is used in Docker/production. If not set, LOCAL_UPLOAD_DIR is used for local development. If neither is set, the default is `./local_uploads`.
- **Docker Note:** The Dockerfile now only creates the `uploads` directory inside the container. The host's `./local_uploads` is mounted to `/app/uploads` and should be managed on the host system.
- **BASE_URL**: If you are deploying DumbDrop under a subpath (e.g., `https://example.com/watchfolder/`), you **must** set `BASE_URL` to the full path including the trailing slash (e.g., `https://example.com/watchfolder/`). All API and asset requests will be prefixed with this value. If you deploy at the root, use `https://example.com/`.
- **BASE_URL** must end with a trailing slash. The app will fail to start if this is not the case.
See `.env.example` for a template and more details.
<details>
<summary>ALLOWED_IFRAME_ORIGINS</summary>
To allow this app to be embedded in an iframe on specific origins (such as Organizr), set the `ALLOWED_IFRAME_ORIGINS` environment variable. For example:
```env
ALLOWED_IFRAME_ORIGINS=https://organizr.example.com,https://myportal.com
```
- If not set, the app will only allow itself to be embedded in an iframe on the same origin (default security).
- If set, the app will allow embedding in iframes on the specified origins and itself.
- **Security Note:** Only add trusted origins. Allowing arbitrary origins can expose your app to clickjacking and other attacks.
</details>
<details>
<summary>File Extension Filtering</summary>
### File Extension Filtering
To restrict which file types can be uploaded, set the `ALLOWED_EXTENSIONS` environment variable. For example:
```env
ALLOWED_EXTENSIONS=.jpg,.jpeg,.png,.pdf,.doc,.docx,.txt
```
If not set, all file extensions will be allowed.
</details>
### Notification Setup
<details>
<summary>Notification Setup</summary>
#### Message Templates
The notification message supports the following placeholders:
@@ -154,6 +161,7 @@ Both {size} and {storage} use the same formatting rules based on APPRISE_SIZE_UN
- Support for all Apprise notification services
- Customizable notification messages with filename templating
- Optional - disabled if no APPRISE_URL is set
</details>
## Security
@@ -192,10 +200,7 @@ Both {size} and {storage} use the same formatting rules based on APPRISE_SIZE_UN
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
See [Development Guide](dev/README.md) for local setup and guidelines.
See [Local Development (Recommended Quick Start)](LOCAL_DEVELOPMENT.md) for local setup and guidelines.
---
Made with ❤️ by [DumbWare.io](https://dumbware.io)

demo.md (new file, 28 lines)

@@ -0,0 +1,28 @@
## Demo Mode
### Overview
DumbDrop includes a demo mode that allows testing the application without actually storing files. Perfect for trying out the interface or development testing.
### Enabling Demo Mode
Set in your environment or docker-compose.yml:
```env
DEMO_MODE=true
```
### Demo Features
- 🚫 No actual file storage - files are processed in memory
- 🎯 Full UI experience with upload/download simulation
- 🔄 Maintains all functionality including:
- Drag and drop
- Progress tracking
- Multiple file uploads
- Directory structure
- File listings
- 🚨 Clear visual indicator (red banner) showing demo status
- 🧹 Auto-cleans upload directory on startup
- Files are processed but not written to disk
- Upload progress is simulated
- File metadata stored in memory
- Maintains same API responses as production
- Cleared on server restart
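
How the short-circuiting works is not shown in this diff (only the `demoMiddleware` wiring in src/app.js appears below), so this is a hypothetical sketch of the idea:

```javascript
// Hypothetical sketch: in demo mode, intercept upload requests and respond
// as production would, without touching the disk.
function demoMiddleware(req, res, next) {
  if (process.env.DEMO_MODE !== 'true') return next();
  if (req.method === 'POST' && req.path.startsWith('/api/upload')) {
    // Pretend the data was stored; metadata would live in memory only.
    return res.json({ progress: 100, demo: true });
  }
  next();
}
```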

dev/.dockerignore (deleted)

@@ -1,50 +0,0 @@
# Version control
.git
.gitignore
# Dependencies
node_modules
npm-debug.log
yarn-debug.log
yarn-error.log
# Environment variables
.env
.env.*
!.env.example
# Development
.vscode
.idea
*.swp
*.swo
# Build outputs
dist
build
coverage
# Local uploads (development only)
local_uploads
# Logs
logs
*.log
# System files
.DS_Store
Thumbs.db
# Docker
.docker
docker-compose*.yml
Dockerfile*
# Documentation
README.md
CHANGELOG.md
docs
# Development configurations
.editorconfig
nodemon.json

dev/.env.dev.example (deleted)

@@ -1,22 +0,0 @@
# Development Environment Settings
# Server Configuration
PORT=3000 # Development server port
# Upload Settings
MAX_FILE_SIZE=1024 # Maximum file size in MB for development
AUTO_UPLOAD=false # Disable auto-upload by default in development
UPLOAD_DIR=../local_uploads # Local development upload directory
# Development Specific
DUMBDROP_TITLE=DumbDrop-Dev # Development environment indicator
DUMBDROP_PIN=123456 # Default development PIN (change in production)
# Optional Development Features
NODE_ENV=development # Ensures development mode
DEBUG=dumbdrop:* # Enable debug logging (if implemented)
# Development Notifications (Optional)
APPRISE_URL= # Test notification endpoint
APPRISE_MESSAGE=[DEV] New file uploaded - {filename} ({size}), Storage used {storage}
APPRISE_SIZE_UNIT=auto

dev/Dockerfile.dev (deleted)

@@ -1,46 +0,0 @@
# Base stage for shared configurations
FROM node:20-alpine as base
# Install python and create virtual environment with minimal dependencies
RUN apk add --no-cache python3 py3-pip && \
python3 -m venv /opt/venv && \
rm -rf /var/cache/apk/*
# Activate virtual environment and install apprise
RUN . /opt/venv/bin/activate && \
pip install --no-cache-dir apprise && \
find /opt/venv -type d -name "__pycache__" -exec rm -r {} +
# Add virtual environment to PATH
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /usr/src/app
# Dependencies stage
FROM base as deps
COPY package*.json ./
RUN npm ci --only=production && \
npm cache clean --force
# Development stage
FROM deps as development
ENV NODE_ENV=development
# Install dev dependencies
RUN npm install && \
npm cache clean --force
# Create upload directories
RUN mkdir -p uploads local_uploads
# Copy source with specific paths to avoid unnecessary files
COPY src/ ./src/
COPY public/ ./public/
COPY dev/ ./dev/
COPY .eslintrc.json .eslintignore ./
# Expose port
EXPOSE 3000
CMD ["npm", "run", "dev"]

dev/README.md (deleted)

@@ -1,73 +0,0 @@
# DumbDrop Development Guide
## Quick Start
1. Clone the repository:
```bash
git clone https://github.com/yourusername/DumbDrop.git
cd DumbDrop
```
2. Set up development environment:
```bash
cd dev
cp .env.dev.example .env.dev
```
3. Start development server:
```bash
docker-compose -f docker-compose.dev.yml up
```
The application will be available at http://localhost:3000 with hot-reloading enabled.
## Development Environment Features
- Hot-reloading with nodemon
- Development-specific environment variables
- Local file storage in `../local_uploads`
- Debug logging enabled
- Development-specific notifications
## Project Structure
```
DumbDrop/
├── dev/ # Development configurations
│ ├── docker-compose.dev.yml
│ ├── .env.dev.example
│ └── README.md
├── src/ # Application source code
├── public/ # Static assets
├── local_uploads/ # Development file storage
└── [Production files in root]
```
## Development Workflow
1. Create feature branches from `main`:
```bash
git checkout -b feature/your-feature-name
```
2. Make changes and test locally
3. Commit using conventional commits:
```bash
feat: add new feature
fix: resolve bug
docs: update documentation
```
4. Push and create pull request
## Debugging
- Use `DEBUG=dumbdrop:*` for detailed logs
- Container shell access: `docker-compose -f docker-compose.dev.yml exec app sh`
- Logs: `docker-compose -f docker-compose.dev.yml logs -f app`
## Common Issues
1. Port conflicts: Change port in `.env.dev`
2. File permissions: Ensure proper ownership of `local_uploads`
3. Node modules: Remove and rebuild with `docker-compose -f docker-compose.dev.yml build --no-cache`

dev/dev.sh (deleted)

@@ -1,74 +0,0 @@
#!/bin/bash
# Set script to exit on error
set -e
# Enable Docker BuildKit
export DOCKER_BUILDKIT=1
# Colors for pretty output
GREEN='\033[0;32m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m' # No Color
# Helper function for pretty printing
print_message() {
echo -e "${BLUE}🔧 ${1}${NC}"
}
# Ensure we're in the right directory
cd "$(dirname "$0")"
case "$1" in
"up")
print_message "Starting DumbDrop in development mode..."
if [ ! -f .env.dev ]; then
print_message "No .env.dev found. Creating from example..."
cp .env.dev.example .env.dev
fi
docker compose -f docker-compose.dev.yml up -d --build
print_message "Container logs:"
docker compose -f docker-compose.dev.yml logs
;;
"down")
print_message "Stopping DumbDrop development environment..."
docker compose -f docker-compose.dev.yml down
;;
"logs")
print_message "Showing DumbDrop logs..."
docker compose -f docker-compose.dev.yml logs -f
;;
"rebuild")
print_message "Rebuilding DumbDrop..."
docker compose -f docker-compose.dev.yml build --no-cache
docker compose -f docker-compose.dev.yml up
;;
"clean")
print_message "Cleaning up development environment..."
docker compose -f docker-compose.dev.yml down -v --remove-orphans
rm -f .env.dev
print_message "Cleaned up containers, volumes, and env file"
;;
"shell")
print_message "Opening shell in container..."
docker compose -f docker-compose.dev.yml exec app sh
;;
"lint")
print_message "Running linter..."
docker compose -f docker-compose.dev.yml exec app npm run lint
;;
*)
echo -e "${GREEN}DumbDrop Development Helper${NC}"
echo "Usage: ./dev.sh [command]"
echo ""
echo "Commands:"
echo " up - Start development environment (creates .env.dev if missing)"
echo " down - Stop development environment"
echo " logs - Show container logs"
echo " rebuild - Rebuild container without cache and start"
echo " clean - Clean up everything (containers, volumes, env)"
echo " shell - Open shell in container"
echo " lint - Run linter"
;;
esac

dev/docker-compose.dev.yml (deleted)

@@ -1,30 +0,0 @@
services:
app:
build:
context: ..
dockerfile: dev/Dockerfile.dev
target: development
args:
DOCKER_BUILDKIT: 1
x-bake:
options:
dockerignore: dev/.dockerignore
volumes:
- ..:/usr/src/app
- /usr/src/app/node_modules
ports:
- "3000:3000"
environment:
- NODE_ENV=development
- PORT=3000
- MAX_FILE_SIZE=1024
- AUTO_UPLOAD=false
- DUMBDROP_TITLE=DumbDrop-Dev
command: npm run dev
restart: unless-stopped
# Enable container debugging if needed
# stdin_open: true
# tty: true
# Add development labels
labels:
- "dev.dumbware.environment=development"

docker-compose.yml

@@ -7,6 +7,8 @@ services:
# Replace "./local_uploads" ( before the colon ) with the path where the files land
- ./local_uploads:/app/uploads
environment: # Environment variables for the DumbDrop service
# Explicitly set upload directory inside the container
UPLOAD_DIR: /app/uploads
DUMBDROP_TITLE: DumbDrop # The title shown in the web interface
MAX_FILE_SIZE: 1024 # Maximum file size in MB
DUMBDROP_PIN: 123456 # Optional PIN protection (4-10 digits, leave empty to disable)

nodemon.json (new file, 3 lines)

@@ -0,0 +1,3 @@
{
"ignore": ["asset-manifest.json", "manifest.json"]
}

package-lock.json (generated, 5986 lines changed; diff suppressed because it is too large)

package.json

@@ -4,10 +4,11 @@
"main": "src/server.js",
"scripts": {
"start": "node src/server.js",
"dev": "nodemon --legacy-watch src/server.js",
"dev": "nodemon src/server.js",
"lint": "eslint .",
"lint:fix": "eslint . --fix",
"format": "prettier --write ."
"format": "prettier --write .",
"predev": "node -e \"const v=process.versions.node.split('.');if(v[0]<20) {console.error('Node.js >=20.0.0 required');process.exit(1)}\""
},
"keywords": [],
"author": "",

Two binary image assets changed (image previews omitted; sizes unchanged at 3.6 KiB and 639 B).

public/index.html

@@ -4,9 +4,12 @@
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{SITE_TITLE}} - Simple File Upload</title>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="{{BASE_URL}}styles.css">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/toastify-js/src/toastify.min.css">
<script src="https://cdn.jsdelivr.net/npm/toastify-js"></script>
<link rel="manifest" href="{{BASE_URL}}manifest.json">
<link rel="icon" type="image/svg+xml" href="{{BASE_URL}}assets/icon.svg">
<script>window.BASE_URL = '{{BASE_URL}}';</script>
</head>
<body>
<div class="container">
@@ -50,9 +53,26 @@
<script defer>
const CHUNK_SIZE = 1024 * 1024; // 1MB chunks
const MAX_RETRIES = 3;
const RETRY_DELAY = 1000;
const AUTO_UPLOAD = ['true', '1', 'yes'].includes('{{AUTO_UPLOAD}}'.toLowerCase());
const RETRY_DELAY = 1000; // 1 second delay between retries
// Read MAX_RETRIES from the injected server value, with a fallback
const MAX_RETRIES_STR = '{{MAX_RETRIES}}';
let maxRetries = 5; // Default value
if (MAX_RETRIES_STR && MAX_RETRIES_STR !== '{{MAX_RETRIES}}') {
const parsedRetries = parseInt(MAX_RETRIES_STR, 10);
if (!isNaN(parsedRetries) && parsedRetries >= 0) {
maxRetries = parsedRetries;
} else {
console.warn(`Invalid MAX_RETRIES value "${MAX_RETRIES_STR}" received from server, defaulting to ${maxRetries}.`);
}
} else {
console.warn('MAX_RETRIES not injected by server, defaulting to 5.');
}
window.MAX_RETRIES = maxRetries; // Assign to window for potential global use/debugging
console.log(`Max retries for chunk uploads: ${window.MAX_RETRIES}`);
const AUTO_UPLOAD_STR = '{{AUTO_UPLOAD}}';
const AUTO_UPLOAD = ['true', '1', 'yes'].includes(AUTO_UPLOAD_STR.toLowerCase());
// Utility function to generate a unique batch ID
function generateBatchId() {
@@ -79,12 +99,21 @@
this.lastUploadedBytes = 0;
this.lastUploadTime = null;
this.uploadRate = 0;
this.maxRetries = window.MAX_RETRIES; // Use configured retries
this.retryDelay = RETRY_DELAY; // Use constant
}
async start() {
try {
this.updateProgress(0); // Initial progress update
await this.initUpload();
await this.uploadChunks();
if (this.file.size > 0) { // Only upload chunks if file is not empty
await this.uploadChunks();
} else {
console.log(`Skipping chunk upload for zero-byte file: ${this.file.name}`);
// Server handles zero-byte completion in /init
this.updateProgress(100); // Mark as complete on client too
}
return true;
} catch (error) {
console.error('Upload failed:', error);
@@ -114,7 +143,9 @@
headers['X-Batch-ID'] = this.batchId;
}
const response = await fetch('/api/upload/init', {
// Remove leading slash from API path before concatenating
const apiUrl = '/api/upload/init'.startsWith('/') ? '/api/upload/init'.substring(1) : '/api/upload/init';
const response = await fetch(window.BASE_URL + apiUrl, {
method: 'POST',
headers,
body: JSON.stringify({
@@ -134,10 +165,22 @@
async uploadChunks() {
this.createProgressElement();
let currentChunkStartPosition = this.position; // Track start position for retries
while (this.position < this.file.size) {
const chunk = await this.readChunk();
await this.uploadChunk(chunk);
const chunk = await this.readChunk(); // Reads based on current this.position
try {
// Attempt to upload the chunk with retry logic
// Pass the position *before* reading the chunk, as that's the start of the data being sent
await this.uploadChunkWithRetry(chunk, currentChunkStartPosition);
// If successful, update the start position for the *next* chunk read
// this.position is updated internally by readChunk, so currentChunkStartPosition reflects the next read point
currentChunkStartPosition = this.position;
} catch (error) {
// If uploadChunkWithRetry fails after all retries, propagate the error
console.error(`UploadChunks failed after retries for chunk starting at ${currentChunkStartPosition}. File: ${this.file.webkitRelativePath || this.file.name}`);
throw error; // Propagate up to the start() method's catch block
}
}
}
@@ -149,22 +192,94 @@
return await blob.arrayBuffer();
}
async uploadChunk(chunk) {
const response = await fetch(`/api/upload/chunk/${this.uploadId}`, {
method: 'POST',
headers: {
'Content-Type': 'application/octet-stream',
'X-Batch-ID': this.batchId
},
body: chunk
});
async uploadChunkWithRetry(chunk, chunkStartPosition) {
const chunkApiUrlPath = `/api/upload/chunk/${this.uploadId}`;
const chunkApiUrl = chunkApiUrlPath.startsWith('/') ? chunkApiUrlPath.substring(1) : chunkApiUrlPath;
let lastError = null;
if (!response.ok) {
throw new Error(`Failed to upload chunk: ${response.statusText}`);
for (let attempt = 0; attempt <= this.maxRetries; attempt++) {
try {
if (attempt > 0) {
console.warn(`Retrying chunk (start: ${chunkStartPosition}) upload for ${this.file.webkitRelativePath || this.file.name} (Attempt ${attempt}/${this.maxRetries})...`);
this.updateProgressElementInfo(`Retrying attempt ${attempt}...`, 'var(--warning-color)');
}
// Use AbortController for potential timeout or cancellation during fetch
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 30000); // 30-second timeout per attempt
const response = await fetch(window.BASE_URL + chunkApiUrl, {
method: 'POST',
headers: {
'Content-Type': 'application/octet-stream',
'X-Batch-ID': this.batchId
// Consider adding 'Content-Range': `bytes ${chunkStartPosition}-${chunkStartPosition + chunk.byteLength - 1}/${this.file.size}`
// If the server supports handling potential duplicate chunks via Content-Range
},
body: chunk,
signal: controller.signal // Add abort signal
});
clearTimeout(timeoutId); // Clear timeout if fetch completes
if (response.ok) {
const data = await response.json();
if (attempt > 0) {
console.log(`Chunk upload successful on retry attempt ${attempt} for ${this.file.webkitRelativePath || this.file.name}`);
}
// Update progress based on server response
// this.position is updated by readChunk(), so progress reflects total uploaded
this.updateProgress(data.progress);
// Success! Exit the retry loop.
this.updateProgressElementInfo('uploading...'); // Reset info message
return;
} else {
// Server responded with an error status (4xx, 5xx)
let errorText = 'Unknown server error';
try {
errorText = await response.text();
} catch (textError) { /* ignore if reading text fails */ }
// --- Add Special 404 Handling ---
if (response.status === 404 && attempt > 0) {
console.warn(`Received 404 Not Found on retry attempt ${attempt} for ${this.file.webkitRelativePath || this.file.name}. Assuming upload completed previously.`);
this.updateProgress(100); // Mark as complete
return; // Exit retry loop successfully
}
// --- End Special 404 Handling ---
lastError = new Error(`Failed to upload chunk: ${response.status} ${response.statusText}. Server response: ${errorText}`);
console.error(`Chunk upload attempt ${attempt} failed: ${lastError.message}`);
this.updateProgressElementInfo(`Attempt ${attempt} failed: ${response.statusText}`, 'var(--danger-color)');
}
} catch (error) {
// Network error, fetch failed completely, or timeout
lastError = error;
if (error.name === 'AbortError') {
console.error(`Chunk upload attempt ${attempt} timed out after 30 seconds.`);
this.updateProgressElementInfo(`Attempt ${attempt} timed out`, 'var(--danger-color)');
} else {
console.error(`Chunk upload attempt ${attempt} failed with network error: ${error.message}`);
this.updateProgressElementInfo(`Attempt ${attempt} network error`, 'var(--danger-color)');
}
}
// If not the last attempt, wait before retrying
if (attempt < this.maxRetries) {
// Exponential backoff: 1s, 2s, 4s, ... but capped
const delay = Math.min(this.retryDelay * Math.pow(2, attempt), 30000); // Max 30s delay
await new Promise(resolve => setTimeout(resolve, delay));
}
}
const data = await response.json();
this.updateProgress(data.progress);
// If we exit the loop, all retries have failed.
// Position reset is tricky. If the server *did* receive a chunk but failed to respond OK,
// simply resending might corrupt data unless the server handles it idempotently.
// Failing the whole upload is often safer.
// this.position = chunkStartPosition; // Re-enable if server can handle duplicate chunks safely
console.error(`Chunk upload failed permanently after ${this.maxRetries} retries for ${this.file.webkitRelativePath || this.file.name}, chunk starting at ${chunkStartPosition}.`);
this.updateProgressElementInfo(`Upload failed after ${this.maxRetries} retries`, 'var(--danger-color)');
throw lastError || new Error(`Chunk upload failed after ${this.maxRetries} retries.`);
}
createProgressElement() {
@@ -232,8 +347,12 @@
}
// Update progress info
this.progressElement.infoSpan.textContent = `${rateText} · ${percent < 100 ? 'uploading...' : 'complete'}`;
this.progressElement.detailsSpan.textContent =
const statusText = percent < 100 ? 'uploading...' : 'complete';
// Use the helper for info updates, only update if not showing a retry message
if (!this.progressElement.infoSpan.textContent.startsWith('Retry') && !this.progressElement.infoSpan.textContent.startsWith('Attempt')) {
this.updateProgressElementInfo(`${rateText} · ${statusText}`);
}
this.progressElement.detailsSpan.textContent =
`${formatFileSize(this.position)} of ${formatFileSize(this.file.size)} (${percent.toFixed(1)}%)`;
// Update tracking variables
@@ -247,6 +366,31 @@
}
}
}
// Helper to update the info message and color in the progress element
updateProgressElementInfo(message, color = '') {
if (this.progressElement && this.progressElement.infoSpan) {
this.progressElement.infoSpan.textContent = message;
this.progressElement.infoSpan.style.color = color; // Reset if color is empty string
}
}
// Helper to attempt cancellation on the server
async cancelUploadOnServer() {
if (!this.uploadId) return;
console.log(`Attempting to cancel upload ${this.uploadId} on server due to error.`);
try {
const cancelApiUrlPath = `/api/upload/cancel/${this.uploadId}`;
const cancelApiUrl = cancelApiUrlPath.startsWith('/') ? cancelApiUrlPath.substring(1) : cancelApiUrlPath;
// No need to wait for response here, just fire and forget
fetch(window.BASE_URL + cancelApiUrl, { method: 'POST' }).catch(err => {
console.warn(`Sending cancel request failed for upload ${this.uploadId}:`, err);
});
} catch (cancelError) {
// Catch synchronous errors, though unlikely with fetch
console.warn(`Error initiating cancel request for upload ${this.uploadId}:`, cancelError);
} // Add closing brace for try block
}
}
// UI Event Handlers
@@ -514,6 +658,15 @@
// Reset the input to allow selecting the same folder again
const input = e.target;
files = [...input.files];
// Check for webkitRelativePath support
const missingRelPath = files.some(f => !('webkitRelativePath' in f) || !f.webkitRelativePath);
if (missingRelPath) {
alert('Your browser does not support folder uploads with structure. Please use a modern browser like Chrome or Edge.');
files = [];
updateFileList();
input.value = '';
return;
}
console.log('Folder selection files:', files.map(f => ({
name: f.name,
path: f.webkitRelativePath,

public/login.html

@@ -4,7 +4,8 @@
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{SITE_TITLE}} - Login</title>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="{{BASE_URL}}styles.css">
<link rel="icon" type="image/svg+xml" href="{{BASE_URL}}assets/icon.svg">
<style>
.login-container {
display: flex;
@@ -53,6 +54,7 @@
background-color: var(--textarea-bg);
}
</style>
<script>window.BASE_URL = '{{BASE_URL}}';</script>
</head>
<body>
<div class="login-container">
@@ -124,7 +126,7 @@
// Handle form submission
const verifyPin = async (pin) => {
try {
const response = await fetch('/api/auth/verify-pin', {
const response = await fetch(window.BASE_URL + '/api/auth/verify-pin', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ pin })
@@ -210,7 +212,7 @@
};
// Check PIN length and initialize
fetch('/api/auth/pin-required')
fetch(window.BASE_URL + '/api/auth/pin-required')
.then(response => {
if (response.status === 429) {
throw new Error('Too many attempts. Please wait before trying again.');
@@ -239,6 +241,17 @@
pinContainer.style.pointerEvents = 'none';
}
});
document.addEventListener('DOMContentLoaded', function() {
// Rewrite asset URLs to use BASE_URL as prefix if not absolute
const baseUrl = window.BASE_URL;
document.querySelectorAll('link[rel="stylesheet"], link[rel="icon"]').forEach(link => {
const href = link.getAttribute('href');
if (href && !href.startsWith('http') && !href.startsWith('data:') && !href.startsWith(baseUrl)) {
link.setAttribute('href', baseUrl + href.replace(/^\//, ''));
}
});
});
</script>
</body>
</html>

public/service-worker.js (new file, 32 lines)

@@ -0,0 +1,32 @@
const CACHE_NAME = "DUMBDROP_PWA_CACHE_V1";
const ASSETS_TO_CACHE = [];
const preload = async () => {
console.log("Installing web app");
return await caches.open(CACHE_NAME)
.then(async (cache) => {
console.log("caching index and important routes");
const response = await fetch("/asset-manifest.json");
const assets = await response.json();
ASSETS_TO_CACHE.push(...assets);
console.log("Assets Cached:", ASSETS_TO_CACHE);
return cache.addAll(ASSETS_TO_CACHE);
});
}
// Fetch asset manifest dynamically
globalThis.addEventListener("install", (event) => {
event.waitUntil(preload());
});
globalThis.addEventListener("activate", (event) => {
event.waitUntil(clients.claim());
});
globalThis.addEventListener("fetch", (event) => {
event.respondWith(
caches.match(event.request).then((cachedResponse) => {
return cachedResponse || fetch(event.request);
})
);
});
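
The registration half of the PWA change lives in page scripts rather than in this file; a typical registration sketch (the script path assumes the worker is served from the app root):

```javascript
// Sketch: register the service worker from a page script.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/service-worker.js')
      .then((reg) => console.log('Service worker registered, scope:', reg.scope))
      .catch((err) => console.error('Service worker registration failed:', err));
  });
}
```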

src/app.js

@@ -9,6 +9,7 @@ const cors = require('cors');
const cookieParser = require('cookie-parser');
const path = require('path');
const fs = require('fs');
const fsPromises = require('fs').promises;
const { config, validateConfig } = require('./config');
const logger = require('./utils/logger');
@@ -16,6 +17,7 @@ const { ensureDirectoryExists } = require('./utils/fileUtils');
const { securityHeaders, requirePin } = require('./middleware/security');
const { safeCompare } = require('./utils/security');
const { initUploadLimiter, pinVerifyLimiter, downloadLimiter } = require('./middleware/rateLimiter');
const { injectDemoBanner, demoMiddleware } = require('./utils/demoMode');
// Create Express app
const app = express();
@@ -34,6 +36,9 @@ const { router: uploadRouter } = require('./routes/upload');
const fileRoutes = require('./routes/files');
const authRoutes = require('./routes/auth');
// Add demo middleware before your routes
app.use(demoMiddleware);
// Use routes with appropriate middleware
app.use('/api/auth', pinVerifyLimiter, authRoutes);
app.use('/api/upload', requirePin(config.pin), initUploadLimiter, uploadRouter);
@@ -49,6 +54,11 @@ app.get('/', (req, res) => {
let html = fs.readFileSync(path.join(__dirname, '../public', 'index.html'), 'utf8');
html = html.replace(/{{SITE_TITLE}}/g, config.siteTitle);
html = html.replace('{{AUTO_UPLOAD}}', config.autoUpload.toString());
html = html.replace('{{MAX_RETRIES}}', config.clientMaxRetries.toString());
// Ensure baseUrl has a trailing slash for correct asset linking
const baseUrlWithSlash = config.baseUrl.endsWith('/') ? config.baseUrl : config.baseUrl + '/';
html = html.replace(/{{BASE_URL}}/g, baseUrlWithSlash);
html = injectDemoBanner(html);
res.send(html);
});
@@ -61,6 +71,10 @@ app.get('/login.html', (req, res) => {
let html = fs.readFileSync(path.join(__dirname, '../public', 'login.html'), 'utf8');
html = html.replace(/{{SITE_TITLE}}/g, config.siteTitle);
// Ensure baseUrl has a trailing slash
const baseUrlWithSlash = config.baseUrl.endsWith('/') ? config.baseUrl : config.baseUrl + '/';
html = html.replace(/{{BASE_URL}}/g, baseUrlWithSlash);
html = injectDemoBanner(html);
res.send(html);
});
@@ -74,9 +88,14 @@ app.use((req, res, next) => {
const filePath = path.join(__dirname, '../public', req.path);
let html = fs.readFileSync(filePath, 'utf8');
html = html.replace(/{{SITE_TITLE}}/g, config.siteTitle);
if (req.path === 'index.html') {
if (req.path === '/index.html' || req.path === 'index.html') {
html = html.replace('{{AUTO_UPLOAD}}', config.autoUpload.toString());
html = html.replace('{{MAX_RETRIES}}', config.clientMaxRetries.toString());
}
// Ensure baseUrl has a trailing slash
const baseUrlWithSlash = config.baseUrl.endsWith('/') ? config.baseUrl : config.baseUrl + '/';
html = html.replace(/{{BASE_URL}}/g, baseUrlWithSlash);
html = injectDemoBanner(html);
res.send(html);
} catch (err) {
next();
@@ -95,6 +114,10 @@ app.use((err, req, res, next) => { // eslint-disable-line no-unused-vars
});
});
// --- Add this after config is loaded ---
const METADATA_DIR = path.join(config.uploadDir, '.metadata');
// --- End addition ---
/**
* Initialize the application
* Sets up required directories and validates configuration
@@ -106,6 +129,25 @@ async function initialize() {
// Ensure upload directory exists and is writable
await ensureDirectoryExists(config.uploadDir);
// --- Add this section ---
// Ensure metadata directory exists
try {
if (!fs.existsSync(METADATA_DIR)) {
await fsPromises.mkdir(METADATA_DIR, { recursive: true });
logger.info(`Created metadata directory: ${METADATA_DIR}`);
} else {
logger.info(`Metadata directory exists: ${METADATA_DIR}`);
}
// Check writability (optional but good practice)
await fsPromises.access(METADATA_DIR, fs.constants.W_OK);
logger.success(`Metadata directory is writable: ${METADATA_DIR}`);
} catch (err) {
logger.error(`Metadata directory error (${METADATA_DIR}): ${err.message}`);
// Decide if this is fatal. If resumability is critical, maybe throw.
throw new Error(`Failed to access or create metadata directory: ${METADATA_DIR}`);
}
// --- End added section ---
// Log configuration
logger.info(`Maximum file size set to: ${config.maxFileSize / (1024 * 1024)}MB`);
@@ -117,6 +159,21 @@ async function initialize() {
logger.info('Apprise notifications enabled');
}
// After initializing demo middleware
if (process.env.DEMO_MODE === 'true') {
logger.info('[DEMO] Running in demo mode - uploads will not be saved');
// Clear any existing files in upload directory
try {
const files = fs.readdirSync(config.uploadDir);
for (const file of files) {
fs.unlinkSync(path.join(config.uploadDir, file));
}
logger.info('[DEMO] Cleared upload directory');
} catch (err) {
logger.error(`[DEMO] Failed to clear upload directory: ${err.message}`);
}
}
return app;
} catch (err) {
logger.error(`Initialization failed: ${err.message}`);

src/config/index.js

@@ -1,48 +1,151 @@
require('dotenv').config();
console.log('Loaded ENV:', {
PORT: process.env.PORT,
UPLOAD_DIR: process.env.UPLOAD_DIR,
LOCAL_UPLOAD_DIR: process.env.LOCAL_UPLOAD_DIR,
NODE_ENV: process.env.NODE_ENV
});
const { validatePin } = require('../utils/security');
const logger = require('../utils/logger');
const fs = require('fs');
const path = require('path');
const { version } = require('../../package.json'); // Get version from package.json
/**
* Get the host path from Docker mount point
* @returns {string} Host path or fallback to container path
* Environment Variables Reference
*
* PORT - Port for the server (default: 3000)
* NODE_ENV - Node environment (default: 'development')
* BASE_URL - Base URL for the app (default: http://localhost:${PORT})
* UPLOAD_DIR - Directory for uploads (Docker/production)
* LOCAL_UPLOAD_DIR - Directory for uploads (local dev, fallback: './local_uploads')
* MAX_FILE_SIZE - Max upload size in MB (default: 1024)
* AUTO_UPLOAD - Enable auto-upload (true/false, default: false)
* DUMBDROP_PIN - Security PIN for uploads (required for protected endpoints)
* DUMBDROP_TITLE - Site title (default: 'DumbDrop')
* APPRISE_URL - Apprise notification URL (optional)
* APPRISE_MESSAGE - Notification message template (default provided)
* APPRISE_SIZE_UNIT - Size unit for notifications (optional)
* ALLOWED_EXTENSIONS - Comma-separated list of allowed file extensions (optional)
* ALLOWED_IFRAME_ORIGINS - Comma-separated list of allowed iframe origins (optional)
*/
function getHostPath() {
// Helper for clear configuration logging
const logConfig = (message, level = 'info') => {
const prefix = level === 'warning' ? '⚠️ WARNING:' : 'ℹ️ INFO:';
console.log(`${prefix} CONFIGURATION: ${message}`);
};
// Default configurations
const DEFAULT_PORT = 3000;
const DEFAULT_CHUNK_SIZE = 1024 * 1024 * 100; // 100MB
const DEFAULT_SITE_TITLE = 'DumbDrop';
const DEFAULT_BASE_URL = 'http://localhost:3000';
const DEFAULT_CLIENT_MAX_RETRIES = 5; // Default retry count
const logAndReturn = (key, value, isDefault = false) => {
logConfig(`${key}: ${value}${isDefault ? ' (default)' : ''}`);
return value;
};
/**
* Determine the upload directory based on environment variables.
* Priority:
* 1. UPLOAD_DIR (for Docker/production)
* 2. LOCAL_UPLOAD_DIR (for local development)
* 3. './local_uploads' (default fallback)
* @returns {string} The upload directory path
*/
function determineUploadDirectory() {
let uploadDir;
if (process.env.UPLOAD_DIR) {
uploadDir = process.env.UPLOAD_DIR;
logConfig(`Upload directory set from UPLOAD_DIR: ${uploadDir}`);
} else if (process.env.LOCAL_UPLOAD_DIR) {
uploadDir = process.env.LOCAL_UPLOAD_DIR;
logConfig(`Upload directory using LOCAL_UPLOAD_DIR fallback: ${uploadDir}`, 'warning');
} else {
uploadDir = './local_uploads';
logConfig(`Upload directory using default fallback: ${uploadDir}`, 'warning');
}
logConfig(`Final upload directory path: ${require('path').resolve(uploadDir)}`);
return uploadDir;
}
/**
* Utility to detect if running in local development mode
* Returns true if NODE_ENV is not 'production' and UPLOAD_DIR is not set (i.e., not Docker)
*/
function isLocalDevelopment() {
return process.env.NODE_ENV !== 'production' && !process.env.UPLOAD_DIR;
}
/**
* Ensure the upload directory exists (for local development only)
* Creates the directory if it does not exist
*/
function ensureLocalUploadDirExists(uploadDir) {
if (!isLocalDevelopment()) return;
try {
// Read Docker mountinfo to get the host path
const mountInfo = fs.readFileSync('/proc/self/mountinfo', 'utf8');
const lines = mountInfo.split('\n');
// Find the line containing our upload directory
const uploadMount = lines.find(line => line.includes('/app/uploads'));
if (uploadMount) {
// Extract the host path from the mount info
const parts = uploadMount.split(' ');
// The host path is typically in the 4th space-separated field
const hostPath = parts[3];
return hostPath;
if (!fs.existsSync(uploadDir)) {
fs.mkdirSync(uploadDir, { recursive: true });
logConfig(`Created local upload directory: ${uploadDir}`);
} else {
logConfig(`Local upload directory exists: ${uploadDir}`);
}
} catch (err) {
logger.debug('Could not determine host path from mount info');
logConfig(`Failed to create local upload directory: ${uploadDir}. Error: ${err.message}`, 'warning');
}
// Fallback to container path if we can't determine host path
return '/app/uploads';
}
// Determine and ensure upload directory (for local dev)
const resolvedUploadDir = determineUploadDirectory();
ensureLocalUploadDirExists(resolvedUploadDir);
/**
* Application configuration
* Loads and validates environment variables
*/
const config = {
// =====================
// =====================
// Server settings
port: process.env.PORT || 3000,
// =====================
/**
* Port for the server (default: 3000)
* Set via PORT in .env
*/
port: process.env.PORT || DEFAULT_PORT,
/**
* Node environment (default: 'development')
* Set via NODE_ENV in .env
*/
nodeEnv: process.env.NODE_ENV || 'development',
baseUrl: process.env.BASE_URL || `http://localhost:${process.env.PORT || 3000}`,
/**
* Base URL for the app (default: http://localhost:${PORT})
* Set via BASE_URL in .env
*/
baseUrl: process.env.BASE_URL || DEFAULT_BASE_URL,
// =====================
// =====================
// Upload settings
uploadDir: '/app/uploads', // Internal Docker path
uploadDisplayPath: getHostPath(), // Dynamically determined from Docker mount
// =====================
/**
* Directory for uploads
* Priority: UPLOAD_DIR (Docker/production) > LOCAL_UPLOAD_DIR (local dev) > './local_uploads' (fallback)
*/
uploadDir: resolvedUploadDir,
/**
* Max upload size in bytes (default: 1024MB)
* Set via MAX_FILE_SIZE in .env (in MB)
*/
maxFileSize: (() => {
const sizeInMB = parseInt(process.env.MAX_FILE_SIZE || '1024', 10);
if (isNaN(sizeInMB) || sizeInMB <= 0) {
@@ -50,25 +153,94 @@ const config = {
}
return sizeInMB * 1024 * 1024; // Convert MB to bytes
})(),
/**
* Enable auto-upload (true/false, default: false)
* Set via AUTO_UPLOAD in .env
*/
autoUpload: process.env.AUTO_UPLOAD === 'true',
// =====================
// =====================
// Security
// =====================
/**
* Security PIN for uploads (required for protected endpoints)
* Set via DUMBDROP_PIN in .env
*/
pin: validatePin(process.env.DUMBDROP_PIN),
// =====================
// =====================
// UI settings
siteTitle: process.env.DUMBDROP_TITLE || 'DumbDrop',
// =====================
/**
* Site title (default: 'DumbDrop')
* Set via DUMBDROP_TITLE in .env
*/
siteTitle: process.env.DUMBDROP_TITLE || DEFAULT_SITE_TITLE,
// =====================
// =====================
// Notification settings
// =====================
/**
* Apprise notification URL (optional)
* Set via APPRISE_URL in .env
*/
appriseUrl: process.env.APPRISE_URL,
/**
* Notification message template (default provided)
* Set via APPRISE_MESSAGE in .env
*/
appriseMessage: process.env.APPRISE_MESSAGE || 'New file uploaded - {filename} ({size}), Storage used {storage}',
/**
* Size unit for notifications (optional)
* Set via APPRISE_SIZE_UNIT in .env
*/
appriseSizeUnit: process.env.APPRISE_SIZE_UNIT,
// =====================
// File extensions
// =====================
/**
* Allowed file extensions (comma-separated, optional)
* Set via ALLOWED_EXTENSIONS in .env
*/
allowedExtensions: process.env.ALLOWED_EXTENSIONS ?
process.env.ALLOWED_EXTENSIONS.split(',').map(ext => ext.trim().toLowerCase()) :
null,
/**
 * Trusted origins permitted to embed the app in an iframe (comma-separated, optional)
 * Set via ALLOWED_IFRAME_ORIGINS in .env
 */
allowedIframeOrigins: process.env.ALLOWED_IFRAME_ORIGINS
? process.env.ALLOWED_IFRAME_ORIGINS.split(',').map(origin => origin.trim()).filter(Boolean)
: null,
/**
* Max number of retries for client-side chunk uploads (default: 5)
* Set via CLIENT_MAX_RETRIES in .env
*/
clientMaxRetries: (() => {
const envValue = process.env.CLIENT_MAX_RETRIES;
const defaultValue = DEFAULT_CLIENT_MAX_RETRIES;
if (envValue === undefined) {
return logAndReturn('CLIENT_MAX_RETRIES', defaultValue, true);
}
const retries = parseInt(envValue, 10);
if (isNaN(retries) || retries < 0) {
logConfig(
`Invalid CLIENT_MAX_RETRIES value: "${envValue}". Using default: ${defaultValue}`,
'warning',
);
return logAndReturn('CLIENT_MAX_RETRIES', defaultValue, true);
}
return logAndReturn('CLIENT_MAX_RETRIES', retries);
})(),
uploadPin: logAndReturn('UPLOAD_PIN', process.env.UPLOAD_PIN || null),
};
console.log(`Upload directory configured as: ${config.uploadDir}`);
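For reference, how the CLIENT_MAX_RETRIES IIFE above behaves for a few inputs (assuming DEFAULT_CLIENT_MAX_RETRIES is 5, per the doc comment):

// CLIENT_MAX_RETRIES unset   -> 5 (default, logged via logAndReturn)
// CLIENT_MAX_RETRIES="3"     -> 3
// CLIENT_MAX_RETRIES="abc"   -> warning logged, falls back to 5
// CLIENT_MAX_RETRIES="-1"    -> warning logged, falls back to 5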
// Validate required settings
function validateConfig() {
const errors = [];
@@ -79,7 +251,12 @@ function validateConfig() {
// Validate BASE_URL format
try {
let url = new URL(config.baseUrl); // Throws if BASE_URL is not a valid URL
// Ensure BASE_URL ends with a slash
if (!config.baseUrl.endsWith('/')) {
logger.warn('BASE_URL did not end with a trailing slash. Automatically appending "/".');
config.baseUrl = config.baseUrl + '/';
}
} catch (err) {
errors.push('BASE_URL must be a valid URL');
}
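For example, the normalization above rewrites a slash-less BASE_URL and records invalid values as errors:

// BASE_URL=http://localhost:3000  -> config.baseUrl === 'http://localhost:3000/' (plus a warning)
// BASE_URL=not-a-url              -> errors: ['BASE_URL must be a valid URL']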

View File

@@ -6,34 +6,40 @@
const { safeCompare } = require('../utils/security');
const logger = require('../utils/logger');
const { config } = require('../config');
/**
* Security headers middleware
*/
function securityHeaders(req, res, next) {
// Content Security Policy
let csp =
"default-src 'self'; " +
"connect-src 'self'; " +
"style-src 'self' 'unsafe-inline' cdn.jsdelivr.net; " +
"script-src 'self' 'unsafe-inline' cdn.jsdelivr.net; " +
"img-src 'self' data: blob:;"
);
// X-Content-Type-Options
"img-src 'self' data: blob:;";
// If allowedIframeOrigins is set, allow those origins to embed via iframe
if (config.allowedIframeOrigins && config.allowedIframeOrigins.length > 0) {
// Remove X-Frame-Options header (do not set it)
// Add frame-ancestors directive to CSP
const frameAncestors = ["'self'", ...config.allowedIframeOrigins].join(' ');
csp += ` frame-ancestors ${frameAncestors};`;
} else {
// Default: only allow same origin if not configured
res.setHeader('X-Frame-Options', 'SAMEORIGIN');
}
res.setHeader('Content-Security-Policy', csp);
res.setHeader('X-Content-Type-Options', 'nosniff');
// X-XSS-Protection
res.setHeader('X-XSS-Protection', '1; mode=block');
// Strict Transport Security (when in production)
if (process.env.NODE_ENV === 'production') {
res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
}
next();
}
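For illustration, with ALLOWED_IFRAME_ORIGINS=https://a.example,https://b.example the middleware above omits X-Frame-Options and sends a CSP ending in:

// ... img-src 'self' data: blob:; frame-ancestors 'self' https://a.example https://b.example;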

View File

@@ -1,410 +1,456 @@
/**
* File upload route handlers and batch upload management.
* Handles file uploads, chunked transfers, and folder creation.
* Manages upload sessions using persistent metadata for resumability.
*/
const express = require('express');
const router = express.Router();
const crypto = require('crypto');
const path = require('path');
const fs = require('fs').promises; // Use promise-based fs
const fsSync = require('fs'); // For sync checks like existsSync
const { config } = require('../config');
const logger = require('../utils/logger');
const { getUniqueFilePath, getUniqueFolderPath, sanitizeFilename, sanitizePathPreserveDirs, isValidBatchId } = require('../utils/fileUtils');
const { sendNotification } = require('../services/notifications');
const { isDemoMode } = require('../utils/demoMode');
// --- Persistence Setup ---
const METADATA_DIR = path.join(config.uploadDir, '.metadata');
// --- In-Memory Maps (Still useful for session-level data) ---
// Store folder name mappings for batch uploads (avoids FS lookups during session)
const folderMappings = new Map();
// Store batch activity timestamps (for cleaning up stale batches/folder mappings)
const batchActivity = new Map();
const BATCH_TIMEOUT = 30 * 60 * 1000; // 30 minutes for batch/folderMapping cleanup
// --- Helper Functions for Metadata ---
async function readUploadMetadata(uploadId) {
if (!uploadId || typeof uploadId !== 'string' || uploadId.includes('..')) {
logger.warn(`Attempted to read metadata with invalid uploadId: ${uploadId}`);
return null;
}
const metaFilePath = path.join(METADATA_DIR, `${uploadId}.meta`);
try {
const data = await fs.readFile(metaFilePath, 'utf8');
return JSON.parse(data);
} catch (err) {
if (err.code === 'ENOENT') {
return null; // Metadata file doesn't exist - normal case for new/finished uploads
}
logger.error(`Error reading metadata for ${uploadId}: ${err.message}`);
throw err; // Rethrow other errors
}
}
async function writeUploadMetadata(uploadId, metadata) {
if (!uploadId || typeof uploadId !== 'string' || uploadId.includes('..')) {
logger.error(`Attempted to write metadata with invalid uploadId: ${uploadId}`);
return; // Prevent writing
}
const metaFilePath = path.join(METADATA_DIR, `${uploadId}.meta`);
metadata.lastActivity = Date.now(); // Update timestamp on every write
// Write atomically (write to temp, then rename) for more safety.
// Declared outside the try block so the catch can clean it up.
const tempMetaPath = `${metaFilePath}.${crypto.randomBytes(4).toString('hex')}.tmp`;
try {
await fs.writeFile(tempMetaPath, JSON.stringify(metadata, null, 2));
await fs.rename(tempMetaPath, metaFilePath);
} catch (err) {
logger.error(`Error writing metadata for ${uploadId}: ${err.message}`);
// Attempt to clean up temp file if rename failed
try { await fs.unlink(tempMetaPath); } catch (unlinkErr) {/* ignore */}
throw err;
}
}
async function deleteUploadMetadata(uploadId) {
if (!uploadId || typeof uploadId !== 'string' || uploadId.includes('..')) {
logger.warn(`Attempted to delete metadata with invalid uploadId: ${uploadId}`);
return;
}
const metaFilePath = path.join(METADATA_DIR, `${uploadId}.meta`);
try {
await fs.unlink(metaFilePath);
logger.debug(`Deleted metadata file for upload: ${uploadId}.meta`);
} catch (err) {
if (err.code !== 'ENOENT') { // Ignore if already deleted
logger.error(`Error deleting metadata file ${uploadId}.meta: ${err.message}`);
}
}
}
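For reference, the shape of a .metadata/<uploadId>.meta file, matching the object persisted by the /init route below (all values illustrative):

{
  "uploadId": "a1b2c3d4e5f60718293a4b5c6d7e8f90",
  "originalFilename": "photos/vacation.jpg",
  "filePath": "/app/uploads/photos/vacation.jpg",
  "partialFilePath": "/app/uploads/photos/vacation.jpg.partial",
  "fileSize": 1048576,
  "bytesReceived": 524288,
  "batchId": "1714850000000-abc123def",
  "createdAt": 1714850000000,
  "lastActivity": 1714850042000
}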
// --- Batch Cleanup (Focuses on batchActivity map, not primary upload state) ---
let batchCleanupInterval;
function startBatchCleanup() {
if (batchCleanupInterval) clearInterval(batchCleanupInterval);
batchCleanupInterval = setInterval(() => {
const now = Date.now();
logger.info(`Running batch cleanup, checking ${batchActivity.size} active batch sessions`);
let cleanedCount = 0;
for (const [batchId, lastActivity] of batchActivity.entries()) {
if (now - lastActivity >= BATCH_TIMEOUT) {
logger.info(`Cleaning up inactive batch session: ${batchId}`);
batchActivity.delete(batchId);
// Clean up associated folder mappings for this batch
for (const key of folderMappings.keys()) {
if (key.endsWith(`-${batchId}`)) {
folderMappings.delete(key);
}
}
cleanedCount++;
}
}
if (cleanedCount > 0) logger.info(`Cleaned up ${cleanedCount} inactive batch sessions.`);
}, 5 * 60 * 1000); // Check every 5 minutes
batchCleanupInterval.unref(); // Allow process to exit if this is the only timer
return batchCleanupInterval;
}
/**
* Stop the batch cleanup interval
*/
function stopBatchCleanup() {
if (batchCleanupInterval) {
clearInterval(batchCleanupInterval);
batchCleanupInterval = null;
}
}
// Start cleanup interval unless disabled
if (!process.env.DISABLE_BATCH_CLEANUP) {
startBatchCleanup();
}
// --- Routes ---
// Initialize upload
router.post('/init', async (req, res) => {
// DEMO MODE CHECK - Bypass persistence if in demo mode
if (isDemoMode()) {
const { filename, fileSize } = req.body;
const uploadId = 'demo-' + crypto.randomBytes(16).toString('hex');
logger.info(`[DEMO] Initialized upload for ${filename} (${fileSize} bytes) with ID ${uploadId}`);
// Simulate zero-byte completion for demo
if (Number(fileSize) === 0) {
logger.success(`[DEMO] Completed zero-byte file upload: ${filename}`);
sendNotification(filename, 0, config); // Still send notification if configured
}
return res.json({ uploadId });
}
const { filename, fileSize } = req.body;
const clientBatchId = req.headers['x-batch-id'];
// --- Basic validations ---
if (!filename) return res.status(400).json({ error: 'Missing filename' });
if (fileSize === undefined || fileSize === null) return res.status(400).json({ error: 'Missing fileSize' });
const size = Number(fileSize);
if (isNaN(size) || size < 0) return res.status(400).json({ error: 'Invalid file size' });
const maxSizeInBytes = config.maxFileSize;
if (size > maxSizeInBytes) return res.status(413).json({ error: 'File too large', limit: maxSizeInBytes });
const batchId = clientBatchId || `${Date.now()}-${crypto.randomBytes(5).toString('hex').substring(0, 9)}`; // 9 hex chars, so generated IDs satisfy isValidBatchId
if (clientBatchId && !isValidBatchId(batchId)) return res.status(400).json({ error: 'Invalid batch ID format' });
batchActivity.set(batchId, Date.now()); // Track batch session activity
try {
// --- Path handling and Sanitization ---
const sanitizedFilename = sanitizePathPreserveDirs(filename);
const safeFilename = path.normalize(sanitizedFilename)
.replace(/^(\.\.(\/|\\|$))+/, '')
.replace(/\\/g, '/')
.replace(/^\/+/, '');
logger.info(`Upload init request for: ${safeFilename}`);
// --- Extension Check ---
if (config.allowedExtensions) {
const fileExt = path.extname(safeFilename).toLowerCase();
if (fileExt && !config.allowedExtensions.includes(fileExt)) {
logger.warn(`File type not allowed: ${safeFilename} (Extension: ${fileExt})`);
return res.status(400).json({ error: 'File type not allowed', receivedExtension: fileExt });
}
}
// --- Determine Paths & Handle Folders ---
const uploadId = crypto.randomBytes(16).toString('hex');
let finalFilePath = path.join(config.uploadDir, safeFilename);
const pathParts = safeFilename.split('/').filter(Boolean);
if (pathParts.length > 1) {
const originalFolderName = pathParts[0];
let newFolderName = folderMappings.get(`${originalFolderName}-${batchId}`);
const baseFolderPath = path.join(config.uploadDir, newFolderName || originalFolderName);
if (!newFolderName) {
await fs.mkdir(path.dirname(baseFolderPath), { recursive: true });
try {
await fs.mkdir(baseFolderPath, { recursive: false });
newFolderName = originalFolderName;
} catch (err) {
if (err.code === 'EEXIST') {
const uniqueFolderPath = await getUniqueFolderPath(baseFolderPath);
newFolderName = path.basename(uniqueFolderPath);
logger.info(`Folder "${originalFolderName}" exists or conflict, using unique "${newFolderName}" for batch ${batchId}`);
await fs.mkdir(path.join(config.uploadDir, newFolderName), { recursive: true });
} else {
throw err;
}
}
folderMappings.set(`${originalFolderName}-${batchId}`, newFolderName);
}
pathParts[0] = newFolderName;
finalFilePath = path.join(config.uploadDir, ...pathParts);
await fs.mkdir(path.dirname(finalFilePath), { recursive: true });
} else {
await fs.mkdir(config.uploadDir, { recursive: true }); // Ensure base upload dir exists
}
// --- Check Final Path Collision & Get Unique Name if Needed ---
let checkPath = finalFilePath;
let counter = 1;
while (fsSync.existsSync(checkPath)) {
logger.warn(`Final destination file already exists: ${checkPath}. Generating unique name.`);
const dir = path.dirname(finalFilePath);
const ext = path.extname(finalFilePath);
const baseName = path.basename(finalFilePath, ext);
checkPath = path.join(dir, `${baseName} (${counter})${ext}`);
counter++;
}
if (checkPath !== finalFilePath) {
logger.info(`Using unique final path: ${checkPath}`);
finalFilePath = checkPath;
// If path changed, ensure directory exists (might be needed if baseName contained '/')
await fs.mkdir(path.dirname(finalFilePath), { recursive: true });
}
const partialFilePath = finalFilePath + '.partial';
// --- Create and Persist Metadata ---
const metadata = {
uploadId,
originalFilename: safeFilename, // Store the path as received by client
filePath: finalFilePath, // The final, possibly unique, path
partialFilePath,
fileSize: size,
bytesReceived: 0,
batchId,
createdAt: Date.now(),
lastActivity: Date.now()
};
await writeUploadMetadata(uploadId, metadata);
logger.info(`Initialized persistent upload: ${uploadId} for ${safeFilename} -> ${finalFilePath}`);
// --- Handle Zero-Byte Files --- // (Important: Handle *after* metadata potentially exists)
if (size === 0) {
try {
await fs.writeFile(finalFilePath, ''); // Create the empty file
logger.success(`Completed zero-byte file upload: ${metadata.originalFilename} as ${finalFilePath}`);
await deleteUploadMetadata(uploadId); // Clean up metadata since it's done
sendNotification(metadata.originalFilename, 0, config);
} catch (writeErr) {
logger.error(`Failed to create zero-byte file ${finalFilePath}: ${writeErr.message}`);
await deleteUploadMetadata(uploadId).catch(() => {}); // Attempt cleanup on error
throw writeErr; // Let the main catch block handle it
}
}
res.json({ uploadId });
} catch (err) {
logger.error(`Upload initialization failed: ${err.message} ${err.stack}`);
return res.status(500).json({ error: 'Failed to initialize upload', details: err.message });
}
});
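A minimal client-side sketch of the init + chunk flow (assuming the router is mounted at /api/upload, as the demo routes later in this diff suggest; the chunk size is illustrative):

async function uploadFile(file) {
  // 1. Initialize the upload and obtain an uploadId
  const init = await fetch('/api/upload/init', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filename: file.name, fileSize: file.size })
  }).then((r) => r.json());

  // 2. Send the file in sequential chunks as raw octet streams
  const CHUNK_SIZE = 5 * 1024 * 1024;
  for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
    const res = await fetch(`/api/upload/chunk/${init.uploadId}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/octet-stream' },
      body: file.slice(offset, offset + CHUNK_SIZE)
    }).then((r) => r.json());
    console.log(`progress: ${res.progress}%`);
  }
}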
// Upload chunk
router.post('/chunk/:uploadId', express.raw({
limit: config.maxFileSize + (10 * 1024 * 1024), // Generous limit for raw body
type: 'application/octet-stream'
}), async (req, res) => {
// DEMO MODE CHECK
if (isDemoMode()) {
const { uploadId } = req.params;
logger.debug(`[DEMO] Received chunk for ${uploadId}`);
// Fake progress - requires knowing file size which isn't easily available here in demo
const demoProgress = Math.min(100, Math.random() * 100); // Placeholder
return res.json({ bytesReceived: 0, progress: demoProgress });
}
const { uploadId } = req.params;
let chunk = req.body;
let chunkSize = chunk.length;
const clientBatchId = req.headers['x-batch-id']; // Logged but not used directly here
if (!chunkSize) return res.status(400).json({ error: 'Empty chunk received' });
let metadata;
let fileHandle;
try {
metadata = await readUploadMetadata(uploadId);
if (!metadata) {
logger.warn(`Upload metadata not found for chunk request: ${uploadId}. Client Batch ID: ${clientBatchId || 'none'}. Upload may be complete or cancelled.`);
// Check if the final file exists as a fallback for completed uploads
// This is a bit fragile, but handles cases where metadata was deleted slightly early
try {
// Need to guess the final path - THIS IS NOT ROBUST
// A better approach might be needed if this is common
// For now, just return 404
// await fs.access(potentialFinalPath);
// return res.json({ bytesReceived: fileSizeGuess, progress: 100 });
return res.status(404).json({ error: 'Upload session not found or already completed' });
} catch (finalCheckErr) {
return res.status(404).json({ error: 'Upload session not found or already completed' });
}
}
// Update batch activity using metadata's batchId
if (metadata.batchId && isValidBatchId(metadata.batchId)) {
batchActivity.set(metadata.batchId, Date.now());
}
// --- Sanity Checks & Idempotency ---
if (metadata.bytesReceived >= metadata.fileSize) {
logger.warn(`Received chunk for already completed upload ${uploadId} (${metadata.originalFilename}). Finalizing again if needed.`);
// Ensure finalization if possible, then return success
try {
await fs.access(metadata.filePath); // Check if final file exists
logger.info(`Upload ${uploadId} already finalized at ${metadata.filePath}.`);
} catch (accessErr) {
// Final file doesn't exist, attempt rename
try {
await fs.rename(metadata.partialFilePath, metadata.filePath);
logger.info(`Finalized ${uploadId} on redundant chunk request (renamed ${metadata.partialFilePath} -> ${metadata.filePath}).`);
} catch (renameErr) {
if (renameErr.code === 'ENOENT') {
logger.warn(`Partial file ${metadata.partialFilePath} missing during redundant chunk finalization for ${uploadId}.`);
} else {
logger.error(`Error finalizing ${uploadId} on redundant chunk: ${renameErr.message}`);
}
}
}
// Regardless of rename outcome, delete metadata if it still exists
await deleteUploadMetadata(uploadId);
return res.json({ bytesReceived: metadata.fileSize, progress: 100 });
}
// Prevent writing beyond expected file size (simple protection)
if (metadata.bytesReceived + chunkSize > metadata.fileSize) {
logger.warn(`Chunk for ${uploadId} exceeds expected file size. Received ${metadata.bytesReceived + chunkSize}, expected ${metadata.fileSize}. Truncating chunk.`);
const bytesToWrite = metadata.fileSize - metadata.bytesReceived;
chunk = chunk.slice(0, bytesToWrite);
chunkSize = chunk.length;
if (chunkSize <= 0) { // If we already have exactly the right amount
logger.info(`Upload ${uploadId} already has expected bytes. Skipping write, proceeding to finalize.`);
// Skip write, proceed to finalization check below
metadata.bytesReceived = metadata.fileSize; // Ensure state is correct for finalization
} else {
logger.info(`Truncated chunk for ${uploadId} to ${chunkSize} bytes.`);
}
}
// --- Write Chunk (Append Mode) --- // Only write if chunk has size after potential truncation
if (chunkSize > 0) {
fileHandle = await fs.open(metadata.partialFilePath, 'a');
const writeResult = await fileHandle.write(chunk);
await fileHandle.close(); // Close immediately
if (writeResult.bytesWritten !== chunkSize) {
// This indicates a partial write, which is problematic.
logger.error(`Partial write for chunk ${uploadId}! Expected ${chunkSize}, wrote ${writeResult.bytesWritten}. Disk full?`);
// How to recover? Maybe revert bytesReceived? For now, throw.
throw new Error(`Failed to write full chunk for ${uploadId}`);
}
metadata.bytesReceived += writeResult.bytesWritten;
}
// --- Update State --- (bytesReceived updated above or set if truncated to zero)
const progress = metadata.fileSize === 0 ? 100 :
Math.min( Math.round((metadata.bytesReceived / metadata.fileSize) * 100), 100);
logger.debug(`Chunk written for ${uploadId}: ${metadata.bytesReceived}/${metadata.fileSize} (${progress}%)`);
// --- Persist Updated Metadata (Before potential finalization) ---
await writeUploadMetadata(uploadId, metadata);
// --- Check for Completion --- // Now happens after metadata update
if (metadata.bytesReceived >= metadata.fileSize) {
logger.info(`Upload ${uploadId} (${metadata.originalFilename}) completed ${metadata.bytesReceived} bytes.`);
try {
await fs.rename(metadata.partialFilePath, metadata.filePath);
logger.success(`Upload completed and finalized: ${metadata.originalFilename} as ${metadata.filePath} (${metadata.fileSize} bytes)`);
await deleteUploadMetadata(uploadId); // Clean up metadata file AFTER successful rename
sendNotification(metadata.originalFilename, metadata.fileSize, config);
} catch (renameErr) {
if (renameErr.code === 'ENOENT') {
logger.warn(`Partial file ${metadata.partialFilePath} not found during finalization for ${uploadId}. Assuming already finalized elsewhere.`);
// Attempt to delete metadata anyway if partial is gone
await deleteUploadMetadata(uploadId).catch(() => {});
} else {
logger.error(`CRITICAL: Failed to rename partial file ${metadata.partialFilePath} to ${metadata.filePath}: ${renameErr.message}`);
// Keep metadata and partial file for manual recovery.
// Return success to client as data is likely there, but log server issue.
}
}
}
res.json({ bytesReceived: metadata.bytesReceived, progress });
} catch (err) {
// Ensure file handle is closed on error
if (fileHandle) {
await fileHandle.close().catch(closeErr => logger.error(`Error closing file handle for ${uploadId} after error: ${closeErr.message}`));
}
logger.error(`Chunk upload failed for ${uploadId}: ${err.message} ${err.stack}`);
// Don't delete metadata on generic chunk errors, let client retry or cleanup handle stale files
res.status(500).json({ error: 'Failed to process chunk', details: err.message });
}
});
// Cancel upload
router.post('/cancel/:uploadId', async (req, res) => {
// DEMO MODE CHECK
if (isDemoMode()) {
logger.info(`[DEMO] Upload cancelled: ${req.params.uploadId}`);
return res.json({ message: 'Upload cancelled (Demo)' });
}
const { uploadId } = req.params;
logger.info(`Received cancel request for upload: ${uploadId}`);
try {
const metadata = await readUploadMetadata(uploadId);
if (metadata) {
// Delete partial file first
try {
await fs.unlink(metadata.partialFilePath);
logger.info(`Deleted partial file on cancellation: ${metadata.partialFilePath}`);
} catch (unlinkErr) {
if (unlinkErr.code !== 'ENOENT') { // Ignore if already gone
logger.error(`Failed to delete partial file ${metadata.partialFilePath} on cancel: ${unlinkErr.message}`);
}
}
// Then delete metadata file
await deleteUploadMetadata(uploadId);
logger.info(`Upload cancelled and cleaned up: ${uploadId} (${metadata.originalFilename})`);
} else {
logger.warn(`Cancel request for non-existent or already completed upload: ${uploadId}`);
}
res.json({ message: 'Upload cancelled or already complete' });
} catch (err) {
logger.error(`Error during upload cancellation for ${uploadId}: ${err.message}`);
res.status(500).json({ error: 'Failed to cancel upload' });
}
});
module.exports = {
router,
startBatchCleanup,
stopBatchCleanup,
// Export for testing if required
batchActivity,
BATCH_TIMEOUT,
readUploadMetadata,
writeUploadMetadata,
deleteUploadMetadata
};

View File

@@ -0,0 +1,60 @@
const fs = require("fs");
const path = require("path");
const PUBLIC_DIR = path.join(__dirname, "..", "..", "public");
function getFiles(dir, basePath = "/") {
let fileList = [];
const files = fs.readdirSync(dir);
files.forEach((file) => {
const filePath = path.join(dir, file);
const fileUrl = path.join(basePath, file).replace(/\\/g, "/");
if (fs.statSync(filePath).isDirectory()) {
fileList = fileList.concat(getFiles(filePath, fileUrl));
} else {
fileList.push(fileUrl);
}
});
return fileList;
}
function generateAssetManifest() {
const assets = getFiles(PUBLIC_DIR);
fs.writeFileSync(path.join(PUBLIC_DIR, "asset-manifest.json"), JSON.stringify(assets, null, 2));
console.log("Asset manifest generated!", assets);
}
function generatePWAManifest() {
generateAssetManifest(); // fetched later in service-worker
const siteTitle = process.env.DUMBDROP_TITLE || process.env.SITE_TITLE || "DumbDrop";
const pwaManifest = {
name: siteTitle,
short_name: siteTitle,
description: "A simple file upload application",
start_url: "/",
display: "standalone",
background_color: "#ffffff",
theme_color: "#000000",
icons: [
{
src: "/assets/icon.png",
type: "image/png",
sizes: "192x192"
},
{
src: "/assets/icon.png",
type: "image/png",
sizes: "512x512"
}
],
orientation: "any"
};
fs.writeFileSync(path.join(PUBLIC_DIR, "manifest.json"), JSON.stringify(pwaManifest, null, 2));
console.log("PWA manifest generated!", pwaManifest);
}
module.exports = { generatePWAManifest };
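Illustrative output: for a public/ folder containing index.html and assets/icon.png, asset-manifest.json would contain (actual entries depend on the folder contents):

[
  "/index.html",
  "/assets/icon.png"
]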

View File

@@ -8,6 +8,7 @@ const { app, initialize, config } = require('./app');
const logger = require('./utils/logger');
const fs = require('fs');
const { executeCleanup } = require('./utils/cleanup');
const { generatePWAManifest } = require('./scripts/pwa-manifest-generator');
// Track open connections
const connections = new Set();
@@ -40,6 +41,9 @@ async function startServer() {
}
});
// Dynamically generate PWA manifest into public folder
generatePWAManifest();
// Track new connections
server.on('connection', (connection) => {
connections.add(connection);
@@ -49,13 +53,16 @@ async function startServer() {
});
// Shutdown handler function
let isShuttingDown = false; // Prevent multiple shutdowns
const shutdownHandler = async (signal) => {
if (isShuttingDown) return;
isShuttingDown = true;
logger.info(`${signal} received. Shutting down gracefully...`);
// Start a shorter force shutdown timer
const forceShutdownTimer = setTimeout(() => {
logger.error('Force shutdown initiated');
process.exit(1);
}, 3000); // 3 seconds maximum for total shutdown
try {
@@ -88,9 +95,10 @@ async function startServer() {
// Clear the force shutdown timer since we completed gracefully
clearTimeout(forceShutdownTimer);
process.exit(0); // Ensure immediate exit
} catch (error) {
logger.error(`Error during shutdown: ${error.message}`);
process.exit(1);
}
};
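The signal wiring itself falls outside this hunk; presumably something along the lines of:

// process.on('SIGTERM', () => shutdownHandler('SIGTERM'));
// process.on('SIGINT', () => shutdownHandler('SIGINT'));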

View File

@@ -4,13 +4,10 @@
* Handles message formatting and notification delivery.
*/
const { spawn } = require('child_process');
const { formatFileSize, calculateDirectorySize, sanitizeFilename } = require('../utils/fileUtils');
const logger = require('../utils/logger');
/**
* Send a notification using Apprise
* @param {string} filename - Name of uploaded file
@@ -19,34 +16,56 @@ const execAsync = util.promisify(exec);
* @returns {Promise<void>}
*/
async function sendNotification(filename, fileSize, config) {
const { appriseUrl, appriseMessage, appriseSizeUnit, uploadDir } = config;
console.debug("NOTIFICATIONS CONFIG:", filename, fileSize, config);
if (!appriseUrl) {
return;
}
try {
const formattedSize = formatFileSize(fileSize, appriseSizeUnit);
const dirSize = await calculateDirectorySize(uploadDir);
const totalStorage = formatFileSize(dirSize);
// Sanitize the filename to remove any special characters that could cause issues
const sanitizedFilename = sanitizeFilename(filename); // apply sanitization of filename again (in case)
// Construct the notification message by replacing placeholders
const message = appriseMessage
.replace('{filename}', sanitizedFilename)
.replace('{size}', formattedSize)
.replace('{storage}', totalStorage);
await new Promise((resolve, reject) => {
// spawn with an argument array avoids shell interpolation of the message
const appriseProcess = spawn('apprise', [appriseUrl, '-b', message]);
appriseProcess.stdout.on('data', (data) => {
logger.info(`Apprise Output: ${data.toString().trim()}`);
});
appriseProcess.stderr.on('data', (data) => {
logger.error(`Apprise Error: ${data.toString().trim()}`);
});
appriseProcess.on('close', (code) => {
if (code === 0) {
logger.info(`Notification sent for: ${sanitizedFilename} (${formattedSize}, Total storage: ${totalStorage})`);
resolve();
} else {
reject(new Error(`Apprise process exited with code ${code}`));
}
});
appriseProcess.on('error', (err) => {
reject(new Error(`Apprise process failed to start: ${err.message}`));
});
});
} catch (err) {
logger.error(`Failed to send notification: ${err.message}`);
}
}
module.exports = {
sendNotification,
};
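For example, with the default APPRISE_MESSAGE template, a 2 MB upload named "report.pdf" into a directory holding 150 MB total would produce a body along the lines of (exact formatting depends on formatFileSize):

// "New file uploaded - report.pdf (2.00MB), Storage used 150.00MB"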

View File

@@ -4,23 +4,22 @@
* Provides cleanup task registration and execution system.
*/
const fs = require('fs').promises;
const path = require('path');
const logger = require('./logger');
const { config } = require('../config');
const METADATA_DIR = path.join(config.uploadDir, '.metadata');
const UPLOAD_TIMEOUT = config.uploadTimeout || 30 * 60 * 1000; // Use a config value or default (30 mins)
/**
 * Stores cleanup tasks that need to be run during shutdown
 * @type {Function[]}
 */
let cleanupTasks = [];
/**
* Register a cleanup task to be executed during shutdown
* @param {Function} task - Async function to be executed during cleanup
*/
function registerCleanupTask(task) {
cleanupTasks.push(task);
}
/**
@@ -28,7 +27,7 @@ function registerCleanupTask(task) {
* @param {Function} task - Task to remove
*/
function removeCleanupTask(task) {
cleanupTasks = cleanupTasks.filter((t) => t !== task);
}
/**
@@ -37,7 +36,7 @@ function removeCleanupTask(task) {
* @returns {Promise<void>}
*/
async function executeCleanup(timeout = 1000) {
const taskCount = cleanupTasks.length;
if (taskCount === 0) {
logger.info('No cleanup tasks to execute');
return;
@@ -49,7 +48,7 @@ async function executeCleanup(timeout = 1000) {
// Run all cleanup tasks in parallel with timeout
await Promise.race([
Promise.all(
cleanupTasks.map(async (task) => {
try {
await Promise.race([
task(),
@@ -80,7 +79,7 @@ async function executeCleanup(timeout = 1000) {
}
} finally {
// Clear all tasks regardless of success/failure
cleanupTasks = [];
}
}
@@ -113,7 +112,7 @@ async function cleanupIncompleteUploads(uploads, uploadToBatch, batchActivity) {
// Delete incomplete file
try {
await fs.unlink(upload.filePath);
logger.info(`Cleaned up incomplete upload: ${upload.safeFilename}`);
} catch (err) {
if (err.code !== 'ENOENT') {
@@ -138,31 +137,173 @@ async function cleanupIncompleteUploads(uploads, uploadToBatch, batchActivity) {
}
}
/**
* Clean up stale/incomplete uploads based on metadata files.
*/
async function cleanupIncompleteMetadataUploads() {
logger.info('Running cleanup for stale metadata/partial uploads...');
let cleanedCount = 0;
let checkedCount = 0;
try {
// Ensure metadata directory exists before trying to read it
try {
await fs.access(METADATA_DIR);
} catch (accessErr) {
if (accessErr.code === 'ENOENT') {
logger.info('Metadata directory does not exist, skipping metadata cleanup.');
return;
}
throw accessErr; // Rethrow other access errors
}
const files = await fs.readdir(METADATA_DIR);
const now = Date.now();
for (const file of files) {
if (file.endsWith('.meta')) {
checkedCount++;
const uploadId = file.replace('.meta', '');
const metaFilePath = path.join(METADATA_DIR, file);
let metadata;
try {
const data = await fs.readFile(metaFilePath, 'utf8');
metadata = JSON.parse(data);
// Check inactivity based on lastActivity timestamp in metadata
if (now - (metadata.lastActivity || metadata.createdAt || 0) > UPLOAD_TIMEOUT) {
logger.warn(`Found stale upload metadata: ${file}. Last activity: ${new Date(metadata.lastActivity || metadata.createdAt)}`);
// Attempt to delete partial file
if (metadata.partialFilePath) {
try {
await fs.unlink(metadata.partialFilePath);
logger.info(`Deleted stale partial file: ${metadata.partialFilePath}`);
} catch (unlinkPartialErr) {
if (unlinkPartialErr.code !== 'ENOENT') { // Ignore if already gone
logger.error(`Failed to delete stale partial file ${metadata.partialFilePath}: ${unlinkPartialErr.message}`);
}
}
}
// Attempt to delete metadata file
try {
await fs.unlink(metaFilePath);
logger.info(`Deleted stale metadata file: ${file}`);
cleanedCount++;
} catch (unlinkMetaErr) {
logger.error(`Failed to delete stale metadata file ${metaFilePath}: ${unlinkMetaErr.message}`);
}
}
} catch (readErr) {
logger.error(`Error reading or parsing metadata file ${metaFilePath} during cleanup: ${readErr.message}. Skipping.`);
// Optionally attempt to delete the corrupt meta file?
// await fs.unlink(metaFilePath).catch(()=>{});
}
} else if (file.endsWith('.tmp')) {
// Clean up potential leftover temp metadata files
const tempMetaPath = path.join(METADATA_DIR, file);
try {
const stats = await fs.stat(tempMetaPath);
if (now - stats.mtime.getTime() > UPLOAD_TIMEOUT) { // If temp file is also old
logger.warn(`Deleting stale temporary metadata file: ${file}`);
await fs.unlink(tempMetaPath);
}
} catch (statErr) {
if (statErr.code !== 'ENOENT') { // Ignore if already gone
logger.error(`Error checking temporary metadata file ${tempMetaPath}: ${statErr.message}`);
}
}
}
}
if (checkedCount > 0 || cleanedCount > 0) {
logger.info(`Metadata cleanup finished. Checked: ${checkedCount}, Cleaned stale: ${cleanedCount}.`);
}
} catch (err) {
// Handle errors reading the METADATA_DIR itself
if (err.code === 'ENOENT') {
logger.info('Metadata directory not found during cleanup scan.'); // Should have been created on init
} else {
logger.error(`Error during metadata cleanup scan: ${err.message}`);
}
}
// Also run empty folder cleanup
await cleanupEmptyFolders(config.uploadDir);
}
// Schedule the new cleanup function
const METADATA_CLEANUP_INTERVAL = 15 * 60 * 1000; // e.g., every 15 minutes
let metadataCleanupTimer = setInterval(cleanupIncompleteMetadataUploads, METADATA_CLEANUP_INTERVAL);
metadataCleanupTimer.unref(); // Allow process to exit if this is the only timer
process.on('SIGTERM', () => clearInterval(metadataCleanupTimer));
process.on('SIGINT', () => clearInterval(metadataCleanupTimer));
/**
* Recursively remove empty folders
* @param {string} dir - Directory to clean
*/
async function cleanupEmptyFolders(dir) {
try {
// Avoid trying to clean the special .metadata directory itself
if (path.basename(dir) === '.metadata') {
logger.debug(`Skipping cleanup of metadata directory: ${dir}`);
return;
}
const files = await fs.readdir(dir);
for (const file of files) {
const fullPath = path.join(dir, file);
// Skip the metadata directory during traversal
if (path.basename(fullPath) === '.metadata') {
logger.debug(`Skipping traversal into metadata directory: ${fullPath}`);
continue;
}
let stats;
try {
stats = await fs.stat(fullPath);
} catch (statErr) {
if (statErr.code === 'ENOENT') continue; // File might have been deleted concurrently
throw statErr;
}
if (stats.isDirectory()) {
await cleanupEmptyFolders(fullPath);
// Check if directory is empty after cleaning subdirectories
let remaining = [];
try {
remaining = await fs.readdir(fullPath);
} catch (readErr) {
if (readErr.code === 'ENOENT') continue; // Directory was deleted
throw readErr;
}
if (remaining.length === 0) {
// Make sure we don't delete the main upload dir
if (fullPath !== path.resolve(config.uploadDir)) {
try {
await fs.rmdir(fullPath);
logger.info(`Removed empty directory: ${fullPath}`);
} catch (rmErr) {
if (rmErr.code !== 'ENOENT') { // Ignore if already deleted
logger.error(`Failed to remove supposedly empty directory ${fullPath}: ${rmErr.message}`);
}
}
}
}
}
}
} catch (err) {
if (err.code !== 'ENOENT') { // Ignore if dir was already deleted
logger.error(`Failed to clean empty folders in ${dir}: ${err.message}`);
}
}
}
@@ -171,5 +312,6 @@ module.exports = {
removeCleanupTask,
executeCleanup,
cleanupIncompleteUploads,
cleanupIncompleteMetadataUploads,
cleanupEmptyFolders
};
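Typical usage sketch (assumed wiring, not shown in this diff): register tasks at startup, then run them all at shutdown with an overall budget.

const { registerCleanupTask, executeCleanup } = require('./utils/cleanup');
registerCleanupTask(async () => {
  // e.g. flush logs or close database handles
});
// At shutdown, run every registered task with a 2s overall budget:
process.on('SIGTERM', () => executeCleanup(2000));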

src/utils/demoMode.js Normal file
View File

@@ -0,0 +1,184 @@
/**
* Demo mode utilities
* Provides demo banner and demo-related functionality
* Used to clearly indicate when application is running in demo mode
*/
const multer = require('multer');
const express = require('express');
const router = express.Router();
const logger = require('./logger');
const { config } = require('../config');
const isDemoMode = () => process.env.DEMO_MODE === 'true';
const getDemoBannerHTML = () => `
<div id="demo-banner" style="
background: #ff6b6b;
color: white;
text-align: center;
padding: 10px;
font-weight: bold;
position: fixed;
top: 0;
left: 0;
right: 0;
z-index: 9999;
box-shadow: 0 2px 4px rgba(0,0,0,0.2);
">
🚀 DEMO MODE - This is a demonstration only. Files will not be saved. 🚀
</div>
`;
const injectDemoBanner = (html) => {
if (!isDemoMode()) return html;
return html.replace(
'<body>',
'<body>' + getDemoBannerHTML()
);
};
// Mock storage for demo files and uploads
const demoFiles = new Map();
const demoUploads = new Map();
// Configure demo upload handling
const storage = multer.memoryStorage();
const upload = multer({ storage });
// Create demo routes with exact path matching
const demoRouter = express.Router();
// Mock upload init - match exact path
demoRouter.post('/api/upload/init', (req, res) => {
const { filename, fileSize } = req.body;
const uploadId = 'demo-' + Math.random().toString(36).substr(2, 9);
demoUploads.set(uploadId, {
filename,
fileSize,
bytesReceived: 0
});
logger.info(`[DEMO] Initialized upload for ${filename} (${fileSize} bytes)`);
return res.json({ uploadId });
});
// Mock chunk upload - match exact path and handle large files
demoRouter.post('/api/upload/chunk/:uploadId',
express.raw({
type: 'application/octet-stream',
limit: config.maxFileSize
}),
(req, res) => {
const { uploadId } = req.params;
const upload = demoUploads.get(uploadId);
if (!upload) {
return res.status(404).json({ error: 'Upload not found' });
}
const chunkSize = req.body.length;
upload.bytesReceived += chunkSize;
// Calculate progress
const progress = Math.min(
Math.round((upload.bytesReceived / upload.fileSize) * 100),
100
);
logger.debug(`[DEMO] Chunk received for ${upload.filename}, progress: ${progress}%`);
// If upload is complete
if (upload.bytesReceived >= upload.fileSize) {
const fileId = 'demo-' + Math.random().toString(36).substr(2, 9);
const mockFile = {
id: fileId,
name: upload.filename,
size: upload.fileSize,
url: `/api/files/${fileId}`,
createdAt: new Date().toISOString()
};
demoFiles.set(fileId, mockFile);
demoUploads.delete(uploadId);
logger.success(`[DEMO] Upload completed: ${upload.filename} (${upload.fileSize} bytes)`);
// Return completion response
return res.json({
bytesReceived: upload.bytesReceived,
progress,
complete: true,
file: mockFile
});
}
return res.json({
bytesReceived: upload.bytesReceived,
progress
});
}
);
// Mock upload cancel - match exact path
demoRouter.post('/api/upload/cancel/:uploadId', (req, res) => {
const { uploadId } = req.params;
demoUploads.delete(uploadId);
logger.info(`[DEMO] Upload cancelled: ${uploadId}`);
return res.json({ message: 'Upload cancelled' });
});
// Mock file download - match exact path
demoRouter.get('/api/files/:id', (req, res) => {
const file = demoFiles.get(req.params.id);
if (!file) {
return res.status(404).json({
message: 'Demo Mode: File not found'
});
}
return res.json({
message: 'Demo Mode: This would download the file in production',
file
});
});
// Mock file list - match exact path
demoRouter.get('/api/files', (req, res) => {
return res.json({
files: Array.from(demoFiles.values()),
message: 'Demo Mode: Showing mock file list'
});
});
// Update middleware to handle errors
const demoMiddleware = (req, res, next) => {
if (!isDemoMode()) return next();
logger.debug(`[DEMO] Incoming request: ${req.method} ${req.path}`);
// Handle payload too large errors
demoRouter(req, res, (err) => {
if (err) {
logger.error(`[DEMO] Error handling request: ${err.message}`);
if (err.type === 'entity.too.large') {
return res.status(413).json({
error: 'Payload too large',
message: `File size exceeds limit of ${config.maxFileSize} bytes`
});
}
return res.status(500).json({
error: 'Internal server error',
message: err.message
});
}
next();
});
};
module.exports = {
isDemoMode,
injectDemoBanner,
demoMiddleware
};
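Demo mode is toggled via the DEMO_MODE environment variable (e.g. DEMO_MODE=true node src/server.js); a minimal wiring sketch, assumed rather than shown in this diff:

const express = require('express');
const { demoMiddleware } = require('./utils/demoMode');

const app = express();
app.use(demoMiddleware); // mount before the real /api/upload routes so the mocks win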

View File

@@ -9,19 +9,6 @@ const path = require('path');
const logger = require('./logger');
const { config } = require('../config');
/**
* Format file size to human readable format
* @param {number} bytes - Size in bytes
@@ -90,13 +77,13 @@ async function ensureDirectoryExists(directoryPath) {
try {
if (!fs.existsSync(directoryPath)) {
await fs.promises.mkdir(directoryPath, { recursive: true });
logger.info(`Created directory: ${directoryPath}`);
}
await fs.promises.access(directoryPath, fs.constants.W_OK);
logger.success(`Directory is writable: ${directoryPath}`);
} catch (err) {
logger.error(`Directory error: ${err.message}`);
throw new Error(`Failed to access or create directory: ${directoryPath}`);
}
}
@@ -129,8 +116,8 @@ async function getUniqueFilePath(filePath) {
}
}
// Log using actual path
logger.info(`Using unique path: ${finalPath}`);
return { path: finalPath, handle: fileHandle };
}
@@ -160,10 +147,36 @@ async function getUniqueFolderPath(folderPath) {
return finalPath;
}
/**
 * Strip characters that are invalid in filenames or could be abused in shell commands
 * @param {string} fileName - Name to sanitize
 * @returns {string} Sanitized name
 */
function sanitizeFilename(fileName) {
const sanitized = fileName.replace(/[<>:"/\\|?*]+/g, '').replace(/["`$|;&<>]/g, '');
return sanitized;
}
/**
 * Sanitize each segment of a path while preserving the directory structure
 * @param {string} filePath - Path using forward slashes
 * @returns {string} Path with each segment sanitized
 */
function sanitizePathPreserveDirs(filePath) {
// Split on forward slashes, sanitize each part, and rejoin
return filePath
.split('/')
.map(part => sanitizeFilename(part))
.join('/');
}
/**
* Validate batch ID format
* @param {string} batchId - Batch ID to validate
* @returns {boolean} True if valid (matches timestamp-9_alphanumeric format)
*/
function isValidBatchId(batchId) {
if (!batchId) return false;
return /^\d+-[a-z0-9]{9}$/.test(batchId);
}
module.exports = {
formatFileSize,
calculateDirectorySize,
ensureDirectoryExists,
getUniqueFilePath,
getUniqueFolderPath,
sanitizeFilename,
sanitizePathPreserveDirs,
isValidBatchId
};
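Example behaviour of the helpers above (per the regexes as written):

// sanitizeFilename('va*ca|tion?.jpg')              -> 'vacation.jpg'
// sanitizePathPreserveDirs('photos/My "File".txt') -> 'photos/My File.txt'
// isValidBatchId('1714850000000-abc123def')        -> true
// isValidBatchId('not-a-batch')                    -> false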