2 Commits

Author SHA1 Message Date
Greirson Lee-Thorp
105d2a7412 feat(upload): Implement persistent state via metadata for resumability (#50)
* feat: Enhance chunk upload functionality with configurable retry logic

- Introduced MAX_RETRIES configuration to allow dynamic adjustment of retry attempts for chunk uploads.
- Updated index.html to read MAX_RETRIES from server-side configuration, providing a default value if not set.
- Implemented retry logic in uploadChunkWithRetry method, including exponential backoff and error handling for network issues.
- Added console warnings for invalid or missing MAX_RETRIES values to improve debugging.

This commit improves the robustness of file uploads by allowing configurable retry behavior, enhancing user experience during upload failures.
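The retry behavior described here is a capped exponential backoff loop around each chunk request; the full client-side implementation appears in the index.html diff below. As a standalone sketch of the pattern (the helper names `uploadChunk`, `BASE_DELAY_MS`, and `MAX_DELAY_MS` are illustrative, not the commit's exact code):

```js
// Sketch of capped exponential backoff around a chunk upload attempt.
const BASE_DELAY_MS = 1000;  // first retry waits 1s
const MAX_DELAY_MS = 30000;  // never wait longer than 30s

async function uploadWithRetry(uploadChunk, maxRetries) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await uploadChunk(); // success: stop retrying
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        // 1s, 2s, 4s, ... capped at MAX_DELAY_MS
        const delay = Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError; // all attempts exhausted
}
```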

* feat: Enhance upload functionality with metadata management and improved error handling

- Introduced persistent metadata management for uploads, allowing resumability and better tracking of upload states.
- Added special handling for 404 responses during chunk uploads, logging warnings and marking uploads as complete if previously finished.
- Implemented metadata directory creation and validation in app.js to ensure proper upload management.
- Updated upload.js to include metadata read/write functions, improving the robustness of the upload process.
- Enhanced cleanup routines to handle stale metadata and incomplete uploads, ensuring a cleaner state.

This commit significantly improves the upload process by adding metadata support, enhancing error handling, and ensuring better resource management during uploads.
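A minimal sketch of the sidecar-metadata idea described above: persist each upload's state as a small JSON file so an interrupted upload can resume. The directory name and field names below are assumptions for illustration; the commit's actual layout in upload.js may differ.

```js
const fs = require('fs/promises');
const path = require('path');

const METADATA_DIR = '.metadata'; // hypothetical location for sidecar files

// Persist upload state, e.g. { filePath, totalSize, bytesReceived, completed }
async function writeUploadMetadata(uploadId, state) {
  await fs.mkdir(METADATA_DIR, { recursive: true });
  await fs.writeFile(path.join(METADATA_DIR, `${uploadId}.json`), JSON.stringify(state));
}

// Returns the saved state, or null so callers treat it as a fresh upload
async function readUploadMetadata(uploadId) {
  try {
    const raw = await fs.readFile(path.join(METADATA_DIR, `${uploadId}.json`), 'utf8');
    return JSON.parse(raw);
  } catch {
    return null;
  }
}
```

On resume, the server can compare `bytesReceived` against the incoming chunk offset, and a stale-metadata sweep during cleanup removes files whose uploads never completed.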
2025-05-04 11:33:01 -07:00
Greirson Lee-Thorp
e963f2bcde feat: Improve dev experience, Improve Environmental Variable and Folder Control, resolves BASE_URL junk (#49)
* feat: Add ALLOWED_IFRAME_ORIGINS configuration and update security headers (#47)

- Introduced ALLOWED_IFRAME_ORIGINS environment variable to specify trusted origins for iframe embedding.
- Updated security headers middleware to conditionally allow specified origins in Content Security Policy.
- Enhanced documentation in README.md to explain the new configuration and its security implications.

Fixes #35
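A sketch of the conditional CSP logic this change describes, written as Express-style middleware; the real middleware likely sets more headers, and the shape below is an assumption:

```js
// Allow iframe embedding only from trusted origins via CSP frame-ancestors.
function securityHeaders(allowedIframeOrigins) {
  return (req, res, next) => {
    const ancestors = allowedIframeOrigins.length
      ? `'self' ${allowedIframeOrigins.join(' ')}` // trusted origins + same-origin
      : "'self'";                                  // default: same-origin only
    res.setHeader('Content-Security-Policy', `frame-ancestors ${ancestors}`);
    next();
  };
}

// Parse the env var into a trimmed, non-empty list of origins
const origins = (process.env.ALLOWED_IFRAME_ORIGINS || '')
  .split(',')
  .map((o) => o.trim())
  .filter(Boolean);
// app.use(securityHeaders(origins));
```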

* feat: Update .env.example and .gitignore for improved configuration management

- Enhanced .env.example with detailed comments for environment variables, including upload settings, security options, and notification configurations.
- Updated .gitignore to include additional editor and OS-specific files, ensuring a cleaner repository.
- Modified package.json to add a predev script for Node.js version validation and adjusted the dev script for nodemon.
- Improved server.js shutdown handling to prevent multiple shutdowns and ensure graceful exits.
- Refactored config/index.js to log loaded environment variables and ensure the upload directory exists based on environment settings.
- Cleaned up fileUtils.js by removing unused functions and improving logging for directory creation.

This commit enhances clarity and maintainability of configuration settings and improves application shutdown behavior.
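The "prevent multiple shutdowns" fix usually amounts to an idempotency guard on the signal handlers; a sketch under that assumption (the commit's actual server.js logic may differ):

```js
// Guarded graceful shutdown: repeated SIGINT/SIGTERM signals are ignored.
let shuttingDown = false;

function shutdown(server, signal) {
  if (shuttingDown) return; // already shutting down: ignore repeat signals
  shuttingDown = true;
  console.log(`Received ${signal}, shutting down gracefully...`);
  server.close(() => process.exit(0));              // stop accepting, drain connections
  setTimeout(() => process.exit(1), 10000).unref(); // force-exit if draining stalls
}

// process.on('SIGINT', () => shutdown(server, 'SIGINT'));
// process.on('SIGTERM', () => shutdown(server, 'SIGTERM'));
```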

* feat: Update Docker configuration and documentation for upload handling

- Explicitly set the upload directory environment variable in docker-compose.yml to ensure clarity in file storage.
- Simplified the Dockerfile by removing the creation of the local_uploads directory, as it is now managed by the host system.
- Enhanced README.md to reflect changes in upload directory management and provide clearer instructions for users.
- Removed outdated development configuration files to streamline the development setup.

This commit improves the clarity and usability of the Docker setup for file uploads.

* feat: Add Local Development Guide and update README for clarity

- Introduced a comprehensive LOCAL_DEVELOPMENT.md file with setup instructions, testing guidelines, and troubleshooting tips for local development.
- Updated README.md to include a link to the new Local Development Guide and revised sections for clarity regarding upload directory management.
- Enhanced the Quick Start section to direct users to the dedicated local development documentation.

This commit improves the onboarding experience for developers and provides clear instructions for local setup.

* feat: Implement BASE_URL configuration for asset management and API requests

- Added BASE_URL configuration to README.md, emphasizing the need for a trailing slash when deploying under a subpath.
- Updated index.html and login.html to utilize BASE_URL for linking stylesheets, icons, and API requests, ensuring correct asset loading.
- Enhanced app.js to replace placeholders with the actual BASE_URL during HTML rendering.
- Implemented a validation check in config/index.js to ensure BASE_URL is a valid URL and ends with a trailing slash.

This commit improves the flexibility of the application for different deployment scenarios and enhances asset management.

Fixes #34, Fixes #39, Fixes #38
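The validation check described above reduces to two assertions: the value parses as a URL, and it ends with a slash. A sketch (error messages are illustrative):

```js
// Validate BASE_URL: must be a well-formed URL with a trailing slash.
function validateBaseUrl(baseUrl) {
  try {
    new URL(baseUrl); // throws TypeError on malformed URLs
  } catch {
    throw new Error(`BASE_URL is not a valid URL: ${baseUrl}`);
  }
  if (!baseUrl.endsWith('/')) {
    throw new Error('BASE_URL must end with a trailing slash, e.g. https://example.com/watchfolder/');
  }
  return baseUrl;
}

validateBaseUrl('https://example.com/watchfolder/'); // ok
// validateBaseUrl('https://example.com/watchfolder'); // throws
```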

* Update app.js, borked some of the css n such

* resolved BASE_URL breaking frontend

* fix: Update BASE_URL handling and security headers

- Ensured BASE_URL has a trailing slash in app.js to prevent asset loading issues.
- Refactored index.html and login.html to remove leading slashes from API paths for correct concatenation with BASE_URL.
- Enhanced security headers middleware to include 'connect-src' directive in Content Security Policy.

This commit addresses issues with asset management and improves security configurations.
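The path fix is visible in the index.html diff below; in essence, a leading slash is stripped so the path appends cleanly to a trailing-slash BASE_URL. A one-function sketch:

```js
// Join an API path to a trailing-slash BASE_URL without doubling slashes.
function apiUrl(baseUrl, apiPath) {
  return baseUrl + (apiPath.startsWith('/') ? apiPath.slice(1) : apiPath);
}

apiUrl('https://example.com/watchfolder/', '/api/upload/init');
// -> 'https://example.com/watchfolder/api/upload/init'
```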
2025-05-04 10:29:48 -07:00
23 changed files with 1455 additions and 1029 deletions

.env.example

@@ -1,18 +1,68 @@
-# Server Configuration
-PORT=3000 # The port the server will listen on
-BASE_URL=http://localhost:3000 # The base URL for the application
+#########################################
+# SERVER CONFIGURATION
+#########################################
-# Upload Settings
-MAX_FILE_SIZE=1024 # Maximum file size in MB
-AUTO_UPLOAD=false # Enable automatic upload on file selection
+# Port for the server (default: 3000)
+PORT=3000
-# Security
-DUMBDROP_PIN= # Optional PIN protection (4-10 digits)
-DUMBDROP_TITLE=DumbDrop # Site title displayed in header
+# Base URL for the application (default: http://localhost:PORT)
+BASE_URL=http://localhost:3000/
-# Notifications (Optional)
-APPRISE_URL= # Apprise URL for notifications (e.g., tgram://bottoken/ChatID)
-APPRISE_MESSAGE=New file uploaded - {filename} ({size}), Storage used {storage}
-APPRISE_SIZE_UNIT=auto # Size unit for notifications (auto, B, KB, MB, GB, TB)
+# Node environment (default: development)
+NODE_ENV=development
+DEMO_MODE=false
+#########################################
+# FILE UPLOAD SETTINGS
+#########################################
+# Maximum file size in MB (default: 1024)
+MAX_FILE_SIZE=1024
+# Directory for uploads (Docker/production; optional)
+UPLOAD_DIR=
+# Directory for uploads (local dev, fallback: './local_uploads')
+LOCAL_UPLOAD_DIR=./local_uploads
+# Comma-separated list of allowed file extensions (optional, e.g. .jpg,.png,.pdf)
+# ALLOWED_EXTENSIONS=.jpg,.png,.pdf
+ALLOWED_EXTENSIONS=
+#########################################
+# SECURITY
+#########################################
+# PIN protection (4-10 digits, optional)
+# DUMBDROP_PIN=1234
+DUMBDROP_PIN=
+#########################################
+# UI SETTINGS
+#########################################
+# Site title displayed in header (default: DumbDrop)
+DUMBDROP_TITLE=DumbDrop
+#########################################
+# NOTIFICATION SETTINGS
+#########################################
+# Apprise URL for notifications (optional)
+APPRISE_URL=
+# Notification message template (default: New file uploaded {filename} ({size}), Storage used {storage})
+APPRISE_MESSAGE=New file uploaded {filename} ({size}), Storage used {storage}
+# Size unit for notifications (B, KB, MB, GB, TB, or Auto; default: Auto)
+APPRISE_SIZE_UNIT=Auto
+#########################################
+# ADVANCED
+#########################################
+# Enable automatic upload on file selection (true/false, default: false)
+AUTO_UPLOAD=false
+# Comma-separated list of origins allowed to embed the app in an iframe (optional)
+# ALLOWED_IFRAME_ORIGINS=https://example.com,https://another.com
+ALLOWED_IFRAME_ORIGINS=

.gitignore

@@ -203,4 +203,38 @@ Thumbs.db
 *.log
 .env.*
 !.env.example
-!dev/.env.dev.example
+!dev/.env.dev.example
+# Added by Claude Task Master
+dev-debug.log
+# Environment variables
+# Editor directories and files
+.idea
+.vscode
+*.suo
+*.ntvs*
+*.njsproj
+*.sln
+*.sw?
+# OS specific
+# Task files
+.windsurfrules
+README-task-master.md
+.cursor/mcp.json
+.cursor/rules/cursor_rules.mdc
+.cursor/rules/dev_workflow.mdc
+.cursor/rules/self_improve.mdc
+.cursor/rules/taskmaster.mdc
+scripts/example_prd.txt
+scripts/prd.txt
+tasks/task_001.txt
+tasks/task_002.txt
+tasks/task_003.txt
+tasks/task_004.txt
+tasks/task_005.txt
+tasks/task_006.txt
+tasks/task_007.txt
+tasks/task_008.txt
+tasks/task_009.txt
+tasks/task_010.txt
+tasks/tasks.json

Dockerfile

@@ -32,8 +32,8 @@ ENV NODE_ENV=development
 RUN npm install && \
     npm cache clean --force
-# Create upload directories
-RUN mkdir -p uploads local_uploads
+# Create upload directory
+RUN mkdir -p uploads
 # Copy source with specific paths to avoid unnecessary files
 COPY src/ ./src/

LOCAL_DEVELOPMENT.md (new file)

@@ -0,0 +1,122 @@
# Local Development (Recommended Quick Start)
## Prerequisites
- **Node.js** >= 20.0.0
_Why?_: The app uses features only available in Node 20+.
- **npm** (comes with Node.js)
- **Python 3** (for notification testing, optional)
- **Apprise** (for notification testing, optional)
## Setup Instructions
1. **Clone the repository**
```bash
git clone https://github.com/yourusername/dumbdrop.git
cd dumbdrop
```
2. **Copy and configure environment variables**
```bash
cp .env.example .env
```
- Open `.env` in your editor and review the variables.
- At minimum, set:
- `PORT=3000`
- `LOCAL_UPLOAD_DIR=./local_uploads`
- `MAX_FILE_SIZE=1024`
- `DUMBDROP_PIN=` (optional, for PIN protection)
- `APPRISE_URL=` (optional, for notifications)
3. **Install dependencies**
```bash
npm install
```
4. **Start the development server**
```bash
npm run dev
```
- You should see output like:
```
DumbDrop server running on http://localhost:3000
```
5. **Open the app**
- Go to [http://localhost:3000](http://localhost:3000) in your browser.
---
## Testing File Uploads
- Drag and drop files onto the web interface.
- Supported file types: _All_, unless restricted by `ALLOWED_EXTENSIONS` in `.env`.
- Maximum file size: as set by `MAX_FILE_SIZE` (default: 1024 MB).
- Uploaded files are stored in the directory specified by `LOCAL_UPLOAD_DIR` (default: `./local_uploads`).
- To verify uploads:
- Check the `local_uploads` folder for your files.
- The UI will show a success message on upload.
---
## Notification Testing (Python/Apprise)
If you want to test notifications (e.g., for new uploads):
1. **Install Python 3**
- [Download Python](https://www.python.org/downloads/) if not already installed.
2. **Install Apprise**
```bash
pip install apprise
```
3. **Configure Apprise in `.env`**
- Set `APPRISE_URL` to your notification service URL (see [Apprise documentation](https://github.com/caronc/apprise)).
- Example for a local test:
```
APPRISE_URL=mailto://your@email.com
```
4. **Trigger a test notification**
- Upload a file via the web UI.
- If configured, you should receive a notification.
---
## Troubleshooting
**Problem:** Port already in use
**Solution:**
- Change the `PORT` in `.env` to a free port.
**Problem:** "Cannot find module 'express'"
**Solution:**
- Run `npm install` to install dependencies.
**Problem:** File uploads not working
**Solution:**
- Ensure `LOCAL_UPLOAD_DIR` exists and is writable.
- Check file size and extension restrictions in `.env`.
**Problem:** Notifications not sent
**Solution:**
- Verify `APPRISE_URL` is set and correct.
- Ensure Apprise is installed and accessible.
**Problem:** Permission denied on uploads
**Solution:**
- Make sure your user has write permissions to `local_uploads`.
**Problem:** Environment variables not loading
**Solution:**
- Double-check that `.env` exists and is formatted correctly.
- Restart the server after making changes.
---
## Additional Notes
- For Docker-based development, see the "Quick Start" and "Docker Compose" sections in the main README.
- For more advanced configuration, review the "Configuration" section in the main README.
- If you encounter issues not listed here, please open an issue on GitHub or check the Discussions tab.

README.md

@@ -8,10 +8,11 @@ No auth (unless you want it now!), no storage, no nothing. Just a simple file up
 ## Table of Contents
 - [Quick Start](#quick-start)
 - [Production Deployment with Docker](#production-deployment-with-docker)
+- [Local Development (Recommended Quick Start)](LOCAL_DEVELOPMENT.md)
 - [Features](#features)
 - [Configuration](#configuration)
 - [Security](#security)
 - [Development](#development)
 - [Technical Details](#technical-details)
 - [Demo Mode](demo.md)
 - [Contributing](#contributing)
@@ -19,17 +20,13 @@ No auth (unless you want it now!), no storage, no nothing. Just a simple file up
 ## Quick Start
 ### Prerequisites
 - Docker (recommended)
 - Node.js >=20.0.0 (for local development)
 ### Option 1: Docker (For Dummies)
 ```bash
 # Pull and run with one command
-docker run -p 3000:3000 -v ./local_uploads:/app/uploads dumbwareio/dumbdrop:latest
+docker run -p 3000:3000 -v ./uploads:/app/uploads dumbwareio/dumbdrop:latest
 ```
 1. Go to http://localhost:3000
-2. Upload a File - It'll show up in ./local_uploads
+2. Upload a File - It'll show up in ./uploads
 3. Celebrate on how dumb easy this was
 ### Option 2: Docker Compose (For Dummies who like customizing)
@@ -42,8 +39,10 @@ services:
       - 3000:3000
     volumes:
       # Where your uploaded files will land
-      - ./local_uploads:/app/uploads
+      - ./uploads:/app/uploads
     environment:
+      # Explicitly set upload directory inside the container
+      UPLOAD_DIR: /app/uploads
       # The title shown in the web interface
       DUMBDROP_TITLE: DumbDrop
       # Maximum file size in MB
@@ -55,42 +54,21 @@ services:
       # The base URL for the application
       BASE_URL: http://localhost:3000
 ```
 Then run:
 ```bash
 docker compose up -d
 ```
 1. Go to http://localhost:3000
-2. Upload a File - It'll show up in ./local_uploads
+2. Upload a File - It'll show up in ./uploads
 3. Rejoice in the glory of your dumb uploads
+> **Note:** The `UPLOAD_DIR` environment variable is now explicitly set to `/app/uploads` in the container. The Dockerfile only creates the `uploads` directory, not `local_uploads`. The host directory `./uploads` is mounted to `/app/uploads` for persistent storage.
 ### Option 3: Running Locally (For Developers)
-> If you're a developer, check out our [Dev Guide](#development) for the dumb setup.
+For local development setup, troubleshooting, and advanced usage, see the dedicated guide:
-1. Install dependencies:
-   ```bash
-   npm install
-   ```
-2. Set environment variables in `.env`:
-   ```env
-   PORT=3000           # Port to run the server on
-   MAX_FILE_SIZE=1024  # Maximum file size in MB
-   DUMBDROP_PIN=123456 # Optional PIN protection
-   ```
-3. Start the server:
-   ```bash
-   npm start
-   ```
-#### Windows Users
-If you're using Windows PowerShell with Docker, use this format for paths:
-```bash
-docker run -p 3000:3000 -v "${PWD}\local_uploads:/app/uploads" dumbwareio/dumbdrop:latest
-```
+👉 [Local Development Guide](LOCAL_DEVELOPMENT.md)
 ## Features
@@ -111,23 +89,33 @@ docker run -p 3000:3000 -v "${PWD}\local_uploads:/app/uploads" dumbwareio/dumbdr
 ### Environment Variables
-| Variable | Description | Default | Required |
-|------------------|---------------------------------------|---------|----------|
-| PORT | Server port | 3000 | No |
-| BASE_URL | Base URL for the application | http://localhost:PORT | No |
-| MAX_FILE_SIZE | Maximum file size in MB | 1024 | No |
-| DUMBDROP_PIN | PIN protection (4-10 digits) | None | No |
-| DUMBDROP_TITLE | Site title displayed in header | DumbDrop| No |
-| APPRISE_URL | Apprise URL for notifications | None | No |
-| APPRISE_MESSAGE | Notification message template | New file uploaded {filename} ({size}), Storage used {storage} | No |
-| APPRISE_SIZE_UNIT| Size unit for notifications | Auto | No |
-| AUTO_UPLOAD | Enable automatic upload on file selection | false | No |
-| ALLOWED_EXTENSIONS| Comma-separated list of allowed file extensions | None | No |
-| ALLOWED_IFRAME_ORIGINS | Comma-separated list of origins allowed to embed the app in an iframe (e.g. https://organizr.example.com,https://myportal.com) | None | No |
+| Variable | Description | Default | Required |
+|------------------------|------------------------------------------------------------------|-----------------------------------------|----------|
+| PORT | Server port | 3000 | No |
+| BASE_URL | Base URL for the application | http://localhost:PORT | No |
+| MAX_FILE_SIZE | Maximum file size in MB | 1024 | No |
+| DUMBDROP_PIN | PIN protection (4-10 digits) | None | No |
+| DUMBDROP_TITLE | Site title displayed in header | DumbDrop | No |
+| APPRISE_URL | Apprise URL for notifications | None | No |
+| APPRISE_MESSAGE | Notification message template | New file uploaded {filename} ({size}), Storage used {storage} | No |
+| APPRISE_SIZE_UNIT | Size unit for notifications (B, KB, MB, GB, TB, or Auto) | Auto | No |
+| AUTO_UPLOAD | Enable automatic upload on file selection | false | No |
+| ALLOWED_EXTENSIONS | Comma-separated list of allowed file extensions | None | No |
+| ALLOWED_IFRAME_ORIGINS | Comma-separated list of origins allowed to embed the app in an iframe | None | No |
+| UPLOAD_DIR | Directory for uploads (Docker/production; should be `/app/uploads` in container) | None (see LOCAL_UPLOAD_DIR fallback) | No |
+| LOCAL_UPLOAD_DIR | Directory for uploads (local dev, fallback: './local_uploads') | ./local_uploads | No |
-### ALLOWED_IFRAME_ORIGINS
+- **UPLOAD_DIR** is used in Docker/production. If not set, LOCAL_UPLOAD_DIR is used for local development. If neither is set, the default is `./local_uploads`.
+- **Docker Note:** The Dockerfile now only creates the `uploads` directory inside the container. The host's `./local_uploads` is mounted to `/app/uploads` and should be managed on the host system.
+- **BASE_URL**: If you are deploying DumbDrop under a subpath (e.g., `https://example.com/watchfolder/`), you **must** set `BASE_URL` to the full path including the trailing slash (e.g., `https://example.com/watchfolder/`). All API and asset requests will be prefixed with this value. If you deploy at the root, use `https://example.com/`.
+- **BASE_URL** must end with a trailing slash. The app will fail to start if this is not the case.
-To allow this app to be embedded in an iframe on specific origins (such as Organizr), set the `ALLOWED_IFRAME_ORIGINS` environment variable to a comma-separated list of allowed parent origins. Example:
+See `.env.example` for a template and more details.
+<details>
+<summary>ALLOWED_IFRAME_ORIGINS</summary>
+To allow this app to be embedded in an iframe on specific origins (such as Organizr), set the `ALLOWED_IFRAME_ORIGINS` environment variable. For example:
 ```env
 ALLOWED_IFRAME_ORIGINS=https://organizr.example.com,https://myportal.com
@@ -136,15 +124,20 @@ ALLOWED_IFRAME_ORIGINS=https://organizr.example.com,https://myportal.com
 - If not set, the app will only allow itself to be embedded in an iframe on the same origin (default security).
 - If set, the app will allow embedding in iframes on the specified origins and itself.
 - **Security Note:** Only add trusted origins. Allowing arbitrary origins can expose your app to clickjacking and other attacks.
+</details>
+<details>
+<summary>File Extension Filtering</summary>
-### File Extension Filtering
 To restrict which file types can be uploaded, set the `ALLOWED_EXTENSIONS` environment variable. For example:
 ```env
 ALLOWED_EXTENSIONS=.jpg,.jpeg,.png,.pdf,.doc,.docx,.txt
 ```
 If not set, all file extensions will be allowed.
+</details>
-### Notification Setup
+<details>
+<summary>Notification Setup</summary>
 #### Message Templates
 The notification message supports the following placeholders:
@@ -168,6 +161,7 @@ Both {size} and {storage} use the same formatting rules based on APPRISE_SIZE_UN
 - Support for all Apprise notification services
 - Customizable notification messages with filename templating
 - Optional - disabled if no APPRISE_URL is set
+</details>
 
 ## Security
@@ -206,10 +200,7 @@ Both {size} and {storage} use the same formatting rules based on APPRISE_SIZE_UN
 4. Push to the branch (`git push origin feature/amazing-feature`)
 5. Open a Pull Request
-See [Development Guide](dev/README.md) for local setup and guidelines.
+See [Local Development (Recommended Quick Start)](LOCAL_DEVELOPMENT.md) for local setup and guidelines.
 
 ---
 Made with ❤️ by [DumbWare.io](https://dumbware.io)

dev/.dockerignore (deleted)

@@ -1,50 +0,0 @@
# Version control
.git
.gitignore
# Dependencies
node_modules
npm-debug.log
yarn-debug.log
yarn-error.log
# Environment variables
.env
.env.*
!.env.example
# Development
.vscode
.idea
*.swp
*.swo
# Build outputs
dist
build
coverage
# Local uploads (development only)
local_uploads
# Logs
logs
*.log
# System files
.DS_Store
Thumbs.db
# Docker
.docker
docker-compose*.yml
Dockerfile*
# Documentation
README.md
CHANGELOG.md
docs
# Development configurations
.editorconfig
nodemon.json

dev/.env.dev.example (deleted)

@@ -1,22 +0,0 @@
# Development Environment Settings
# Server Configuration
PORT=3000 # Development server port
# Upload Settings
MAX_FILE_SIZE=1024 # Maximum file size in MB for development
AUTO_UPLOAD=false # Disable auto-upload by default in development
UPLOAD_DIR=../local_uploads # Local development upload directory
# Development Specific
DUMBDROP_TITLE=DumbDrop-Dev # Development environment indicator
DUMBDROP_PIN=123456 # Default development PIN (change in production)
# Optional Development Features
NODE_ENV=development # Ensures development mode
DEBUG=dumbdrop:* # Enable debug logging (if implemented)
# Development Notifications (Optional)
APPRISE_URL= # Test notification endpoint
APPRISE_MESSAGE=[DEV] New file uploaded - {filename} ({size}), Storage used {storage}
APPRISE_SIZE_UNIT=auto

dev/Dockerfile.dev (deleted)

@@ -1,46 +0,0 @@
# Base stage for shared configurations
FROM node:20-alpine as base
# Install python and create virtual environment with minimal dependencies
RUN apk add --no-cache python3 py3-pip && \
python3 -m venv /opt/venv && \
rm -rf /var/cache/apk/*
# Activate virtual environment and install apprise
RUN . /opt/venv/bin/activate && \
pip install --no-cache-dir apprise && \
find /opt/venv -type d -name "__pycache__" -exec rm -r {} +
# Add virtual environment to PATH
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /usr/src/app
# Dependencies stage
FROM base as deps
COPY package*.json ./
RUN npm ci --only=production && \
npm cache clean --force
# Development stage
FROM deps as development
ENV NODE_ENV=development
# Install dev dependencies
RUN npm install && \
npm cache clean --force
# Create upload directories
RUN mkdir -p uploads local_uploads
# Copy source with specific paths to avoid unnecessary files
COPY src/ ./src/
COPY public/ ./public/
COPY dev/ ./dev/
COPY .eslintrc.json .eslintignore ./
# Expose port
EXPOSE 3000
CMD ["npm", "run", "dev"]

dev/README.md (deleted)

@@ -1,73 +0,0 @@
# DumbDrop Development Guide
## Quick Start
1. Clone the repository:
```bash
git clone https://github.com/yourusername/DumbDrop.git
cd DumbDrop
```
2. Set up development environment:
```bash
cd dev
cp .env.dev.example .env.dev
```
3. Start development server:
```bash
docker-compose -f docker-compose.dev.yml up
```
The application will be available at http://localhost:3000 with hot-reloading enabled.
## Development Environment Features
- Hot-reloading with nodemon
- Development-specific environment variables
- Local file storage in `../local_uploads`
- Debug logging enabled
- Development-specific notifications
## Project Structure
```
DumbDrop/
├── dev/ # Development configurations
│ ├── docker-compose.dev.yml
│ ├── .env.dev.example
│ └── README.md
├── src/ # Application source code
├── public/ # Static assets
├── local_uploads/ # Development file storage
└── [Production files in root]
```
## Development Workflow
1. Create feature branches from `main`:
```bash
git checkout -b feature/your-feature-name
```
2. Make changes and test locally
3. Commit using conventional commits:
```bash
feat: add new feature
fix: resolve bug
docs: update documentation
```
4. Push and create pull request
## Debugging
- Use `DEBUG=dumbdrop:*` for detailed logs
- Container shell access: `docker-compose -f docker-compose.dev.yml exec app sh`
- Logs: `docker-compose -f docker-compose.dev.yml logs -f app`
## Common Issues
1. Port conflicts: Change port in `.env.dev`
2. File permissions: Ensure proper ownership of `local_uploads`
3. Node modules: Remove and rebuild with `docker-compose -f docker-compose.dev.yml build --no-cache`

dev/dev.sh (deleted)

@@ -1,74 +0,0 @@
#!/bin/bash
# Set script to exit on error
set -e
# Enable Docker BuildKit
export DOCKER_BUILDKIT=1
# Colors for pretty output
GREEN='\033[0;32m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m' # No Color
# Helper function for pretty printing
print_message() {
echo -e "${BLUE}🔧 ${1}${NC}"
}
# Ensure we're in the right directory
cd "$(dirname "$0")"
case "$1" in
"up")
print_message "Starting DumbDrop in development mode..."
if [ ! -f .env.dev ]; then
print_message "No .env.dev found. Creating from example..."
cp .env.dev.example .env.dev
fi
docker compose -f docker-compose.dev.yml up -d --build
print_message "Container logs:"
docker compose -f docker-compose.dev.yml logs
;;
"down")
print_message "Stopping DumbDrop development environment..."
docker compose -f docker-compose.dev.yml down
;;
"logs")
print_message "Showing DumbDrop logs..."
docker compose -f docker-compose.dev.yml logs -f
;;
"rebuild")
print_message "Rebuilding DumbDrop..."
docker compose -f docker-compose.dev.yml build --no-cache
docker compose -f docker-compose.dev.yml up
;;
"clean")
print_message "Cleaning up development environment..."
docker compose -f docker-compose.dev.yml down -v --remove-orphans
rm -f .env.dev
print_message "Cleaned up containers, volumes, and env file"
;;
"shell")
print_message "Opening shell in container..."
docker compose -f docker-compose.dev.yml exec app sh
;;
"lint")
print_message "Running linter..."
docker compose -f docker-compose.dev.yml exec app npm run lint
;;
*)
echo -e "${GREEN}DumbDrop Development Helper${NC}"
echo "Usage: ./dev.sh [command]"
echo ""
echo "Commands:"
echo " up - Start development environment (creates .env.dev if missing)"
echo " down - Stop development environment"
echo " logs - Show container logs"
echo " rebuild - Rebuild container without cache and start"
echo " clean - Clean up everything (containers, volumes, env)"
echo " shell - Open shell in container"
echo " lint - Run linter"
;;
esac

dev/docker-compose.dev.yml (deleted)

@@ -1,33 +0,0 @@
services:
app:
build:
context: ..
dockerfile: dev/Dockerfile.dev
target: development
args:
DOCKER_BUILDKIT: 1
x-bake:
options:
dockerignore: dev/.dockerignore
volumes:
- ..:/usr/src/app
- /usr/src/app/node_modules
ports:
- "3000:3000"
environment:
- NODE_ENV=development
- PORT=3000
- MAX_FILE_SIZE=1024
- AUTO_UPLOAD=false
- DUMBDROP_TITLE=DumbDrop-Dev
# - APPRISE_URL=ntfy://dumbdrop-test
# - APPRISE_MESSAGE=[DEV] New file uploaded - {filename} ({size}), Storage used {storage}
# - APPRISE_SIZE_UNIT=auto
command: npm run dev
restart: unless-stopped
# Enable container debugging if needed
# stdin_open: true
# tty: true
# Add development labels
labels:
- "dev.dumbware.environment=development"

docker-compose.yml

@@ -7,6 +7,8 @@ services:
       # Replace "./local_uploads" ( before the colon ) with the path where the files land
       - ./local_uploads:/app/uploads
     environment: # Environment variables for the DumbDrop service
+      # Explicitly set upload directory inside the container
+      UPLOAD_DIR: /app/uploads
       DUMBDROP_TITLE: DumbDrop # The title shown in the web interface
       MAX_FILE_SIZE: 1024 # Maximum file size in MB
       DUMBDROP_PIN: 123456 # Optional PIN protection (4-10 digits, leave empty to disable)

package-lock.json (generated)

@@ -29,9 +29,9 @@
}
},
"node_modules/@eslint-community/eslint-utils": {
"version": "4.4.1",
"resolved": "https://registry.npmjs.org/@eslint-community/eslint-utils/-/eslint-utils-4.4.1.tgz",
"integrity": "sha512-s3O3waFUrMV8P/XaF/+ZTp1X9XBZW1a4B97ZnjQF2KYWaFD2A8KyFBsrsfSjEmjn3RGWAIuvlneuZm3CUK3jbA==",
"version": "4.7.0",
"resolved": "https://registry.npmjs.org/@eslint-community/eslint-utils/-/eslint-utils-4.7.0.tgz",
"integrity": "sha512-dyybb3AcajC7uha6CvhdVRJqaKyn7w2YKqKyAN37NKYgZT36w+iRb0Dymmc5qEJ549c/S31cMMSFd75bteCpCw==",
"dev": true,
"license": "MIT",
"dependencies": {
@@ -81,31 +81,6 @@
"url": "https://opencollective.com/eslint"
}
},
"node_modules/@eslint/eslintrc/node_modules/debug": {
"version": "4.4.0",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.4.0.tgz",
"integrity": "sha512-6WTZ/IxCY/T6BALoZHaE4ctp9xm+Z5kY/pzYaCHRFeyVhojxlrm+46y68HA6hr0TcwEssoxNiDEUJQjfPZ/RYA==",
"dev": true,
"license": "MIT",
"dependencies": {
"ms": "^2.1.3"
},
"engines": {
"node": ">=6.0"
},
"peerDependenciesMeta": {
"supports-color": {
"optional": true
}
}
},
"node_modules/@eslint/eslintrc/node_modules/ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
"dev": true,
"license": "MIT"
},
"node_modules/@eslint/js": {
"version": "8.57.1",
"resolved": "https://registry.npmjs.org/@eslint/js/-/js-8.57.1.tgz",
@@ -132,31 +107,6 @@
"node": ">=10.10.0"
}
},
"node_modules/@humanwhocodes/config-array/node_modules/debug": {
"version": "4.4.0",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.4.0.tgz",
"integrity": "sha512-6WTZ/IxCY/T6BALoZHaE4ctp9xm+Z5kY/pzYaCHRFeyVhojxlrm+46y68HA6hr0TcwEssoxNiDEUJQjfPZ/RYA==",
"dev": true,
"license": "MIT",
"dependencies": {
"ms": "^2.1.3"
},
"engines": {
"node": ">=6.0"
},
"peerDependenciesMeta": {
"supports-color": {
"optional": true
}
}
},
"node_modules/@humanwhocodes/config-array/node_modules/ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
"dev": true,
"license": "MIT"
},
"node_modules/@humanwhocodes/module-importer": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/@humanwhocodes/module-importer/-/module-importer-1.0.1.tgz",
@@ -238,9 +188,9 @@
}
},
"node_modules/acorn": {
"version": "8.14.0",
"resolved": "https://registry.npmjs.org/acorn/-/acorn-8.14.0.tgz",
"integrity": "sha512-cl669nCJTZBsL97OF4kUQm5g5hC2uihk0NxY3WENAC0TYdILVkAyHymAntgxGkl7K+t0cXIrH5siy5S4XkFycA==",
"version": "8.14.1",
"resolved": "https://registry.npmjs.org/acorn/-/acorn-8.14.1.tgz",
"integrity": "sha512-OvQ/2pUDKmgfCg++xsTX1wGxfTaszcHVcTctW4UJB4hibJx2HXxxO5UmVgyjMa+ZDsiaf5wWLXYpRWMmBI0QHg==",
"dev": true,
"license": "MIT",
"bin": {
@@ -390,6 +340,21 @@
"npm": "1.2.8000 || >= 1.4.16"
}
},
"node_modules/body-parser/node_modules/debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"license": "MIT",
"dependencies": {
"ms": "2.0.0"
}
},
"node_modules/body-parser/node_modules/ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==",
"license": "MIT"
},
"node_modules/brace-expansion": {
"version": "1.1.11",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.11.tgz",
@@ -441,9 +406,9 @@
}
},
"node_modules/call-bind-apply-helpers": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.1.tgz",
"integrity": "sha512-BhYE+WDaywFg2TBWYNXAE+8B1ATnThNBqXHP5nQu0jWJdVvY2hvkpyB3qOmtmDePiS5/BDQ8wASEWGMWRG148g==",
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz",
"integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0",
@@ -454,13 +419,13 @@
}
},
"node_modules/call-bound": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.3.tgz",
"integrity": "sha512-YTd+6wGlNlPxSuri7Y6X8tY2dmm12UMH66RpKMhiX6rsk5wXXnYgbUcOt8kiS31/AjfoTOvCsE+w8nZQLQnzHA==",
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.4.tgz",
"integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==",
"license": "MIT",
"dependencies": {
"call-bind-apply-helpers": "^1.0.1",
"get-intrinsic": "^1.2.6"
"call-bind-apply-helpers": "^1.0.2",
"get-intrinsic": "^1.3.0"
},
"engines": {
"node": ">= 0.4"
@@ -496,29 +461,6 @@
"url": "https://github.com/chalk/chalk?sponsor=1"
}
},
"node_modules/chalk/node_modules/has-flag": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
"integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=8"
}
},
"node_modules/chalk/node_modules/supports-color": {
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz",
"integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==",
"dev": true,
"license": "MIT",
"dependencies": {
"has-flag": "^4.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/chokidar": {
"version": "3.6.0",
"resolved": "https://registry.npmjs.org/chokidar/-/chokidar-3.6.0.tgz",
@@ -544,6 +486,19 @@
"fsevents": "~2.3.2"
}
},
"node_modules/chokidar/node_modules/glob-parent": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz",
"integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==",
"dev": true,
"license": "ISC",
"dependencies": {
"is-glob": "^4.0.1"
},
"engines": {
"node": ">= 6"
}
},
"node_modules/color-convert": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
@@ -608,9 +563,9 @@
}
},
"node_modules/cookie": {
"version": "0.7.1",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.1.tgz",
"integrity": "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w==",
"version": "0.7.2",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.2.tgz",
"integrity": "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
@@ -629,15 +584,6 @@
"node": ">= 0.8.0"
}
},
"node_modules/cookie-parser/node_modules/cookie": {
"version": "0.7.2",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.2.tgz",
"integrity": "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/cookie-signature": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz",
@@ -679,12 +625,21 @@
}
},
"node_modules/debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"version": "4.4.0",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.4.0.tgz",
"integrity": "sha512-6WTZ/IxCY/T6BALoZHaE4ctp9xm+Z5kY/pzYaCHRFeyVhojxlrm+46y68HA6hr0TcwEssoxNiDEUJQjfPZ/RYA==",
"dev": true,
"license": "MIT",
"dependencies": {
"ms": "2.0.0"
"ms": "^2.1.3"
},
"engines": {
"node": ">=6.0"
},
"peerDependenciesMeta": {
"supports-color": {
"optional": true
}
}
},
"node_modules/deep-is": {
@@ -727,9 +682,9 @@
}
},
"node_modules/dotenv": {
"version": "16.4.7",
"resolved": "https://registry.npmjs.org/dotenv/-/dotenv-16.4.7.tgz",
"integrity": "sha512-47qPchRCykZC03FhkYAhrvwU4xDBFIj1QPqaarj6mdM/hgUzfPHcpkHJOn3mJAufFeeAxAzeGsr5X0M4k6fLZQ==",
"version": "16.5.0",
"resolved": "https://registry.npmjs.org/dotenv/-/dotenv-16.5.0.tgz",
"integrity": "sha512-m/C+AwOAr9/W1UOIZUo232ejMNnJAJtYQjUbHoNTBNTJSvqzzDh7vnrei3o3r3m9blf6ZoDkvcw0VmozNRFJxg==",
"license": "BSD-2-Clause",
"engines": {
"node": ">=12"
@@ -935,16 +890,6 @@
"eslint": ">=5.16.0"
}
},
"node_modules/eslint-plugin-node/node_modules/semver": {
"version": "6.3.1",
"resolved": "https://registry.npmjs.org/semver/-/semver-6.3.1.tgz",
"integrity": "sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA==",
"dev": true,
"license": "ISC",
"bin": {
"semver": "bin/semver.js"
}
},
"node_modules/eslint-scope": {
"version": "7.2.2",
"resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-7.2.2.tgz",
@@ -1001,44 +946,6 @@
"url": "https://opencollective.com/eslint"
}
},
"node_modules/eslint/node_modules/debug": {
"version": "4.4.0",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.4.0.tgz",
"integrity": "sha512-6WTZ/IxCY/T6BALoZHaE4ctp9xm+Z5kY/pzYaCHRFeyVhojxlrm+46y68HA6hr0TcwEssoxNiDEUJQjfPZ/RYA==",
"dev": true,
"license": "MIT",
"dependencies": {
"ms": "^2.1.3"
},
"engines": {
"node": ">=6.0"
},
"peerDependenciesMeta": {
"supports-color": {
"optional": true
}
}
},
"node_modules/eslint/node_modules/glob-parent": {
"version": "6.0.2",
"resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz",
"integrity": "sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A==",
"dev": true,
"license": "ISC",
"dependencies": {
"is-glob": "^4.0.3"
},
"engines": {
"node": ">=10.13.0"
}
},
"node_modules/eslint/node_modules/ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
"dev": true,
"license": "MIT"
},
"node_modules/espree": {
"version": "9.6.1",
"resolved": "https://registry.npmjs.org/espree/-/espree-9.6.1.tgz",
@@ -1173,6 +1080,30 @@
"express": "^4.11 || 5 || ^5.0.0-beta.1"
}
},
"node_modules/express/node_modules/cookie": {
"version": "0.7.1",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.1.tgz",
"integrity": "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/express/node_modules/debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"license": "MIT",
"dependencies": {
"ms": "2.0.0"
}
},
"node_modules/express/node_modules/ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==",
"license": "MIT"
},
"node_modules/fast-deep-equal": {
"version": "3.1.3",
"resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz",
@@ -1195,9 +1126,9 @@
"license": "MIT"
},
"node_modules/fastq": {
"version": "1.19.0",
"resolved": "https://registry.npmjs.org/fastq/-/fastq-1.19.0.tgz",
"integrity": "sha512-7SFSRCNjBQIZH/xZR3iy5iQYR8aGBE0h3VG6/cwlbrpdciNYBMotQav8c1XI3HjHH+NikUpP53nPdlZSdWmFzA==",
"version": "1.19.1",
"resolved": "https://registry.npmjs.org/fastq/-/fastq-1.19.1.tgz",
"integrity": "sha512-GwLTyxkCXjXbxqIhTsMI2Nui8huMPtnxg7krajPJAjnEG/iiOS7i+zCtWGZR9G0NBKbXKh6X9m9UIsYX/N6vvQ==",
"dev": true,
"license": "ISC",
"dependencies": {
@@ -1248,6 +1179,21 @@
"node": ">= 0.8"
}
},
"node_modules/finalhandler/node_modules/debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"license": "MIT",
"dependencies": {
"ms": "2.0.0"
}
},
"node_modules/finalhandler/node_modules/ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==",
"license": "MIT"
},
"node_modules/find-up": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/find-up/-/find-up-5.0.0.tgz",
@@ -1281,9 +1227,9 @@
}
},
"node_modules/flatted": {
"version": "3.3.2",
"resolved": "https://registry.npmjs.org/flatted/-/flatted-3.3.2.tgz",
"integrity": "sha512-AiwGJM8YcNOaobumgtng+6NHuOqC3A7MixFeDafM3X9cIUM+xUXoS5Vfgf+OihAYe20fxqNM9yPBXJzRtZ/4eA==",
"version": "3.3.3",
"resolved": "https://registry.npmjs.org/flatted/-/flatted-3.3.3.tgz",
"integrity": "sha512-GX+ysw4PBCz0PzosHDepZGANEuFCMLrnRTiEy9McGjmkCQYwRq4A/X786G/fjM/+OjsWSU1ZrY5qyARZmO/uwg==",
"dev": true,
"license": "ISC"
},
@@ -1337,17 +1283,17 @@
}
},
"node_modules/get-intrinsic": {
"version": "1.2.7",
"resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.2.7.tgz",
"integrity": "sha512-VW6Pxhsrk0KAOqs3WEd0klDiF/+V7gQOpAvY1jVU/LHmaD/kQO4523aiJuikX/QAKYiW6x8Jh+RJej1almdtCA==",
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz",
"integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==",
"license": "MIT",
"dependencies": {
"call-bind-apply-helpers": "^1.0.1",
"call-bind-apply-helpers": "^1.0.2",
"es-define-property": "^1.0.1",
"es-errors": "^1.3.0",
"es-object-atoms": "^1.0.0",
"es-object-atoms": "^1.1.1",
"function-bind": "^1.1.2",
"get-proto": "^1.0.0",
"get-proto": "^1.0.1",
"gopd": "^1.2.0",
"has-symbols": "^1.1.0",
"hasown": "^2.0.2",
@@ -1396,16 +1342,16 @@
}
},
"node_modules/glob-parent": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz",
"integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==",
"version": "6.0.2",
"resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz",
"integrity": "sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A==",
"dev": true,
"license": "ISC",
"dependencies": {
"is-glob": "^4.0.1"
"is-glob": "^4.0.3"
},
"engines": {
"node": ">= 6"
"node": ">=10.13.0"
}
},
"node_modules/globals": {
@@ -1444,13 +1390,13 @@
"license": "MIT"
},
"node_modules/has-flag": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz",
"integrity": "sha512-sKJf1+ceQBr4SMkvQnBDNDtf4TXpVhVGateu0t918bl30FnbE2m4vNLX+VWe/dpjlb+HugGYzW7uQXH98HPEYw==",
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
"integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=4"
"node": ">=8"
}
},
"node_modules/has-symbols": {
@@ -1846,15 +1792,15 @@
}
},
"node_modules/ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==",
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
"license": "MIT"
},
"node_modules/multer": {
"version": "1.4.5-lts.1",
"resolved": "https://registry.npmjs.org/multer/-/multer-1.4.5-lts.1.tgz",
"integrity": "sha512-ywPWvcDMeH+z9gQq5qYHCCy+ethsk4goepZ45GLD63fOu0YcNecQxi64nDs3qluZB+murG3/D4dJ7+dGctcCQQ==",
"version": "1.4.5-lts.2",
"resolved": "https://registry.npmjs.org/multer/-/multer-1.4.5-lts.2.tgz",
"integrity": "sha512-VzGiVigcG9zUAoCNU+xShztrlr1auZOlurXynNvO9GiWD1/mTBbUljOKY+qMeazBqXgRnjzeEgJI/wyjJUHg9A==",
"license": "MIT",
"dependencies": {
"append-field": "^1.0.0",
@@ -1886,9 +1832,9 @@
}
},
"node_modules/nodemon": {
"version": "3.1.9",
"resolved": "https://registry.npmjs.org/nodemon/-/nodemon-3.1.9.tgz",
"integrity": "sha512-hdr1oIb2p6ZSxu3PB2JWWYS7ZQ0qvaZsc3hK8DR8f02kRzc8rjYmxAIvdz+aYC+8F2IjNaB7HMcSDg8nQpJxyg==",
"version": "3.1.10",
"resolved": "https://registry.npmjs.org/nodemon/-/nodemon-3.1.10.tgz",
"integrity": "sha512-WDjw3pJ0/0jMFmyNDp3gvY2YizjLmmOUQo6DEBY+JgdvW/yQ9mEeSw6H5ythl5Ny2ytb7f9C2nIbjSxMNzbJXw==",
"dev": true,
"license": "MIT",
"dependencies": {
@@ -1914,31 +1860,42 @@
"url": "https://opencollective.com/nodemon"
}
},
"node_modules/nodemon/node_modules/debug": {
"version": "4.4.0",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.4.0.tgz",
"integrity": "sha512-6WTZ/IxCY/T6BALoZHaE4ctp9xm+Z5kY/pzYaCHRFeyVhojxlrm+46y68HA6hr0TcwEssoxNiDEUJQjfPZ/RYA==",
"node_modules/nodemon/node_modules/has-flag": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz",
"integrity": "sha512-sKJf1+ceQBr4SMkvQnBDNDtf4TXpVhVGateu0t918bl30FnbE2m4vNLX+VWe/dpjlb+HugGYzW7uQXH98HPEYw==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=4"
}
},
"node_modules/nodemon/node_modules/semver": {
"version": "7.7.1",
"resolved": "https://registry.npmjs.org/semver/-/semver-7.7.1.tgz",
"integrity": "sha512-hlq8tAfn0m/61p4BVRcPzIGr6LKiMwo4VM6dGi6pt4qcRkmNzTcWq6eCEjEh+qXjkMDvPlOFFSGwQjoEa6gyMA==",
"dev": true,
"license": "ISC",
"bin": {
"semver": "bin/semver.js"
},
"engines": {
"node": ">=10"
}
},
"node_modules/nodemon/node_modules/supports-color": {
"version": "5.5.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz",
"integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==",
"dev": true,
"license": "MIT",
"dependencies": {
"ms": "^2.1.3"
"has-flag": "^3.0.0"
},
"engines": {
"node": ">=6.0"
},
"peerDependenciesMeta": {
"supports-color": {
"optional": true
}
"node": ">=4"
}
},
"node_modules/nodemon/node_modules/ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
"dev": true,
"license": "MIT"
},
"node_modules/normalize-path": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz",
@@ -1959,9 +1916,9 @@
}
},
"node_modules/object-inspect": {
"version": "1.13.3",
"resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.3.tgz",
"integrity": "sha512-kDCGIbxkDSXE3euJZZXzc6to7fCrKHNI/hSRQnRuQ+BWjFNzZwiFF8fj/6o2t2G9/jTj8PSIYTfCLelLZEeRpA==",
"version": "1.13.4",
"resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz",
"integrity": "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
@@ -2130,9 +2087,9 @@
}
},
"node_modules/prettier": {
"version": "3.5.1",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-3.5.1.tgz",
"integrity": "sha512-hPpFQvHwL3Qv5AdRvBFMhnKo4tYxp0ReXiPn2bxkiohEX6mBeBwEpBSQTkD458RaaDKQMYSp4hX4UtfUTA5wDw==",
"version": "3.5.3",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-3.5.3.tgz",
"integrity": "sha512-QQtaxnoDJeAkDvDKWCLiwIXkTgRhwYDEQCghU9Z6q03iyek/rxRh/2lC3HB7P8sWT2xC/y5JDctPLBIGzHKbhw==",
"dev": true,
"license": "MIT",
"bin": {
@@ -2320,9 +2277,9 @@
}
},
"node_modules/reusify": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz",
"integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==",
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/reusify/-/reusify-1.1.0.tgz",
"integrity": "sha512-g6QUff04oZpHs0eG5p83rFLhHeV00ug/Yf9nZM6fLeUrPguBTkTQOdpAWWspMh55TZfVQDPaN3NQJfbVRAxdIw==",
"dev": true,
"license": "MIT",
"engines": {
@@ -2398,16 +2355,13 @@
"license": "MIT"
},
"node_modules/semver": {
"version": "7.6.3",
"resolved": "https://registry.npmjs.org/semver/-/semver-7.6.3.tgz",
"integrity": "sha512-oVekP1cKtI+CTDvHWYFUcMtsK/00wmAEfyqKfNdARm8u1wNVhSgaX7A8d4UuIlUI5e84iEwOhs7ZPYRmzU9U6A==",
"version": "6.3.1",
"resolved": "https://registry.npmjs.org/semver/-/semver-6.3.1.tgz",
"integrity": "sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA==",
"dev": true,
"license": "ISC",
"bin": {
"semver": "bin/semver.js"
},
"engines": {
"node": ">=10"
}
},
"node_modules/send": {
@@ -2434,6 +2388,21 @@
"node": ">= 0.8.0"
}
},
"node_modules/send/node_modules/debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"license": "MIT",
"dependencies": {
"ms": "2.0.0"
}
},
"node_modules/send/node_modules/debug/node_modules/ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==",
"license": "MIT"
},
"node_modules/send/node_modules/encodeurl": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz",
@@ -2443,12 +2412,6 @@
"node": ">= 0.8"
}
},
"node_modules/send/node_modules/ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
"license": "MIT"
},
"node_modules/serve-static": {
"version": "1.16.2",
"resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.16.2.tgz",
@@ -2578,6 +2541,19 @@
"node": ">=10"
}
},
"node_modules/simple-update-notifier/node_modules/semver": {
"version": "7.7.1",
"resolved": "https://registry.npmjs.org/semver/-/semver-7.7.1.tgz",
"integrity": "sha512-hlq8tAfn0m/61p4BVRcPzIGr6LKiMwo4VM6dGi6pt4qcRkmNzTcWq6eCEjEh+qXjkMDvPlOFFSGwQjoEa6gyMA==",
"dev": true,
"license": "ISC",
"bin": {
"semver": "bin/semver.js"
},
"engines": {
"node": ">=10"
}
},
"node_modules/statuses": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz",
@@ -2637,16 +2613,16 @@
}
},
"node_modules/supports-color": {
"version": "5.5.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz",
"integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==",
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz",
"integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==",
"dev": true,
"license": "MIT",
"dependencies": {
"has-flag": "^3.0.0"
"has-flag": "^4.0.0"
},
"engines": {
"node": ">=4"
"node": ">=8"
}
},
"node_modules/supports-preserve-symlinks-flag": {

package.json

@@ -4,10 +4,11 @@
   "main": "src/server.js",
   "scripts": {
     "start": "node src/server.js",
-    "dev": "nodemon --legacy-watch src/server.js",
+    "dev": "nodemon src/server.js",
     "lint": "eslint .",
     "lint:fix": "eslint . --fix",
-    "format": "prettier --write ."
+    "format": "prettier --write .",
+    "predev": "node -e \"const v=process.versions.node.split('.');if(v[0]<20) {console.error('Node.js >=20.0.0 required');process.exit(1)}\""
   },
   "keywords": [],
   "author": "",

public/index.html

@@ -4,11 +4,12 @@
   <meta charset="UTF-8">
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   <title>{{SITE_TITLE}} - Simple File Upload</title>
-  <link rel="stylesheet" href="styles.css">
+  <link rel="stylesheet" href="{{BASE_URL}}styles.css">
   <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/toastify-js/src/toastify.min.css">
   <script src="https://cdn.jsdelivr.net/npm/toastify-js"></script>
-  <link rel="manifest" href="/manifest.json">
-  <link rel="icon" type="image/svg+xml" href="assets/icon.svg">
+  <link rel="manifest" href="{{BASE_URL}}manifest.json">
+  <link rel="icon" type="image/svg+xml" href="{{BASE_URL}}assets/icon.svg">
+  <script>window.BASE_URL = '{{BASE_URL}}';</script>
 </head>
 <body>
   <div class="container">
@@ -52,9 +53,26 @@
   <script defer>
     const CHUNK_SIZE = 1024 * 1024; // 1MB chunks
-    const MAX_RETRIES = 3;
-    const RETRY_DELAY = 1000;
-    const AUTO_UPLOAD = ['true', '1', 'yes'].includes('{{AUTO_UPLOAD}}'.toLowerCase());
+    const RETRY_DELAY = 1000; // 1 second delay between retries
+    // Read MAX_RETRIES from the injected server value, with a fallback
+    const MAX_RETRIES_STR = '{{MAX_RETRIES}}';
+    let maxRetries = 5; // Default value
+    if (MAX_RETRIES_STR && MAX_RETRIES_STR !== '{{MAX_RETRIES}}') {
+      const parsedRetries = parseInt(MAX_RETRIES_STR, 10);
+      if (!isNaN(parsedRetries) && parsedRetries >= 0) {
+        maxRetries = parsedRetries;
+      } else {
+        console.warn(`Invalid MAX_RETRIES value "${MAX_RETRIES_STR}" received from server, defaulting to ${maxRetries}.`);
+      }
+    } else {
+      console.warn('MAX_RETRIES not injected by server, defaulting to 5.');
+    }
+    window.MAX_RETRIES = maxRetries; // Assign to window for potential global use/debugging
+    console.log(`Max retries for chunk uploads: ${window.MAX_RETRIES}`);
+    const AUTO_UPLOAD_STR = '{{AUTO_UPLOAD}}';
+    const AUTO_UPLOAD = ['true', '1', 'yes'].includes(AUTO_UPLOAD_STR.toLowerCase());
 
     // Utility function to generate a unique batch ID
     function generateBatchId() {
@@ -81,12 +99,21 @@
         this.lastUploadedBytes = 0;
         this.lastUploadTime = null;
         this.uploadRate = 0;
+        this.maxRetries = window.MAX_RETRIES; // Use configured retries
+        this.retryDelay = RETRY_DELAY; // Use constant
       }
 
       async start() {
         try {
+          this.updateProgress(0); // Initial progress update
           await this.initUpload();
-          await this.uploadChunks();
+          if (this.file.size > 0) { // Only upload chunks if file is not empty
+            await this.uploadChunks();
+          } else {
+            console.log(`Skipping chunk upload for zero-byte file: ${this.file.name}`);
+            // Server handles zero-byte completion in /init
+            this.updateProgress(100); // Mark as complete on client too
+          }
           return true;
         } catch (error) {
           console.error('Upload failed:', error);
@@ -116,7 +143,9 @@
           headers['X-Batch-ID'] = this.batchId;
         }
 
-        const response = await fetch('/api/upload/init', {
+        // Remove leading slash from API path before concatenating
+        const apiUrl = '/api/upload/init'.startsWith('/') ? '/api/upload/init'.substring(1) : '/api/upload/init';
+        const response = await fetch(window.BASE_URL + apiUrl, {
           method: 'POST',
           headers,
          body: JSON.stringify({
@@ -136,10 +165,22 @@
       async uploadChunks() {
         this.createProgressElement();
+        let currentChunkStartPosition = this.position; // Track start position for retries
 
         while (this.position < this.file.size) {
-          const chunk = await this.readChunk();
-          await this.uploadChunk(chunk);
+          const chunk = await this.readChunk(); // Reads based on current this.position
+          try {
+            // Attempt to upload the chunk with retry logic
+            // Pass the position *before* reading the chunk, as that's the start of the data being sent
+            await this.uploadChunkWithRetry(chunk, currentChunkStartPosition);
+            // If successful, update the start position for the *next* chunk read
+            // this.position is updated internally by readChunk, so currentChunkStartPosition reflects the next read point
+            currentChunkStartPosition = this.position;
+          } catch (error) {
+            // If uploadChunkWithRetry fails after all retries, propagate the error
+            console.error(`UploadChunks failed after retries for chunk starting at ${currentChunkStartPosition}. File: ${this.file.webkitRelativePath || this.file.name}`);
+            throw error; // Propagate up to the start() method's catch block
+          }
         }
       }
@@ -151,22 +192,94 @@
return await blob.arrayBuffer();
}
async uploadChunk(chunk) {
const response = await fetch(`/api/upload/chunk/${this.uploadId}`, {
method: 'POST',
headers: {
'Content-Type': 'application/octet-stream',
'X-Batch-ID': this.batchId
},
body: chunk
});
async uploadChunkWithRetry(chunk, chunkStartPosition) {
const chunkApiUrlPath = `/api/upload/chunk/${this.uploadId}`;
const chunkApiUrl = chunkApiUrlPath.startsWith('/') ? chunkApiUrlPath.substring(1) : chunkApiUrlPath;
let lastError = null;
if (!response.ok) {
throw new Error(`Failed to upload chunk: ${response.statusText}`);
for (let attempt = 0; attempt <= this.maxRetries; attempt++) {
try {
if (attempt > 0) {
console.warn(`Retrying chunk (start: ${chunkStartPosition}) upload for ${this.file.webkitRelativePath || this.file.name} (Attempt ${attempt}/${this.maxRetries})...`);
this.updateProgressElementInfo(`Retrying attempt ${attempt}...`, 'var(--warning-color)');
}
// Use AbortController for potential timeout or cancellation during fetch
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 30000); // 30-second timeout per attempt
const response = await fetch(window.BASE_URL + chunkApiUrl, {
method: 'POST',
headers: {
'Content-Type': 'application/octet-stream',
'X-Batch-ID': this.batchId
// Consider adding 'Content-Range': `bytes ${chunkStartPosition}-${chunkStartPosition + chunk.byteLength - 1}/${this.file.size}`
// If the server supports handling potential duplicate chunks via Content-Range
},
body: chunk,
signal: controller.signal // Add abort signal
});
clearTimeout(timeoutId); // Clear timeout if fetch completes
if (response.ok) {
const data = await response.json();
if (attempt > 0) {
console.log(`Chunk upload successful on retry attempt ${attempt} for ${this.file.webkitRelativePath || this.file.name}`);
}
// Update progress based on server response
// this.position is updated by readChunk(), so progress reflects total uploaded
this.updateProgress(data.progress);
// Success! Exit the retry loop.
this.updateProgressElementInfo('uploading...'); // Reset info message
return;
} else {
// Server responded with an error status (4xx, 5xx)
let errorText = 'Unknown server error';
try {
errorText = await response.text();
} catch (textError) { /* ignore if reading text fails */ }
// --- Add Special 404 Handling ---
if (response.status === 404 && attempt > 0) {
console.warn(`Received 404 Not Found on retry attempt ${attempt} for ${this.file.webkitRelativePath || this.file.name}. Assuming upload completed previously.`);
this.updateProgress(100); // Mark as complete
return; // Exit retry loop successfully
}
// --- End Special 404 Handling ---
lastError = new Error(`Failed to upload chunk: ${response.status} ${response.statusText}. Server response: ${errorText}`);
console.error(`Chunk upload attempt ${attempt} failed: ${lastError.message}`);
this.updateProgressElementInfo(`Attempt ${attempt} failed: ${response.statusText}`, 'var(--danger-color)');
}
} catch (error) {
// Network error, fetch failed completely, or timeout
lastError = error;
if (error.name === 'AbortError') {
console.error(`Chunk upload attempt ${attempt} timed out after 30 seconds.`);
this.updateProgressElementInfo(`Attempt ${attempt} timed out`, 'var(--danger-color)');
} else {
console.error(`Chunk upload attempt ${attempt} failed with network error: ${error.message}`);
this.updateProgressElementInfo(`Attempt ${attempt} network error`, 'var(--danger-color)');
}
}
// If not the last attempt, wait before retrying
if (attempt < this.maxRetries) {
// Exponential backoff: 1s, 2s, 4s, ... but capped
const delay = Math.min(this.retryDelay * Math.pow(2, attempt), 30000); // Max 30s delay
await new Promise(resolve => setTimeout(resolve, delay));
}
}
// If we exit the loop, all retries have failed.
// Position reset is tricky. If the server *did* receive a chunk but failed to respond OK,
// simply resending might corrupt data unless the server handles it idempotently.
// Failing the whole upload is often safer.
// this.position = chunkStartPosition; // Re-enable if server can handle duplicate chunks safely
console.error(`Chunk upload failed permanently after ${this.maxRetries} retries for ${this.file.webkitRelativePath || this.file.name}, chunk starting at ${chunkStartPosition}.`);
this.updateProgressElementInfo(`Upload failed after ${this.maxRetries} retries`, 'var(--danger-color)');
throw lastError || new Error(`Chunk upload failed after ${this.maxRetries} retries.`);
}
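// Illustrative delay schedule for the backoff above, assuming RETRY_DELAY = 1000 ms
// (RETRY_DELAY is a constant defined earlier in this file; its real value may differ):
// wait 1s after attempt 0, then 2s, 4s, 8s, 16s - each delay capped at 30s, so with
// maxRetries = 5 a failing chunk waits about 31s in total across the retry loop.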
createProgressElement() {
@@ -234,8 +347,12 @@
}
// Update progress info
const statusText = percent < 100 ? 'uploading...' : 'complete';
// Use the helper for info updates, only update if not showing a retry message
if (!this.progressElement.infoSpan.textContent.startsWith('Retry') && !this.progressElement.infoSpan.textContent.startsWith('Attempt')) {
this.updateProgressElementInfo(`${rateText} · ${statusText}`);
}
this.progressElement.detailsSpan.textContent =
`${formatFileSize(this.position)} of ${formatFileSize(this.file.size)} (${percent.toFixed(1)}%)`;
// Update tracking variables
@@ -249,6 +366,31 @@
}
}
}
// Helper to update the info message and color in the progress element
updateProgressElementInfo(message, color = '') {
if (this.progressElement && this.progressElement.infoSpan) {
this.progressElement.infoSpan.textContent = message;
this.progressElement.infoSpan.style.color = color; // Reset if color is empty string
}
}
// Helper to attempt cancellation on the server
async cancelUploadOnServer() {
if (!this.uploadId) return;
console.log(`Attempting to cancel upload ${this.uploadId} on server due to error.`);
try {
const cancelApiUrlPath = `/api/upload/cancel/${this.uploadId}`;
const cancelApiUrl = cancelApiUrlPath.startsWith('/') ? cancelApiUrlPath.substring(1) : cancelApiUrlPath;
// No need to wait for response here, just fire and forget
fetch(window.BASE_URL + cancelApiUrl, { method: 'POST' }).catch(err => {
console.warn(`Sending cancel request failed for upload ${this.uploadId}:`, err);
});
} catch (cancelError) {
// Catch synchronous errors, though unlikely with fetch
console.warn(`Error initiating cancel request for upload ${this.uploadId}:`, cancelError);
}
}
}
// UI Event Handlers

View File

@@ -4,8 +4,8 @@
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{SITE_TITLE}} - Login</title>
<link rel="stylesheet" href="styles.css">
<link rel="icon" type="image/svg+xml" href="assets/icon.svg">
<link rel="stylesheet" href="{{BASE_URL}}styles.css">
<link rel="icon" type="image/svg+xml" href="{{BASE_URL}}assets/icon.svg">
<style>
.login-container {
display: flex;
@@ -54,6 +54,7 @@
background-color: var(--textarea-bg);
}
</style>
<script>window.BASE_URL = '{{BASE_URL}}';</script>
</head>
<body>
<div class="login-container">
@@ -125,7 +126,7 @@
// Handle form submission
const verifyPin = async (pin) => {
try {
const response = await fetch(window.BASE_URL + 'api/auth/verify-pin', { // BASE_URL ends with '/'
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ pin })
@@ -211,7 +212,7 @@
};
// Check PIN length and initialize
fetch(window.BASE_URL + 'api/auth/pin-required') // BASE_URL ends with '/'
.then(response => {
if (response.status === 429) {
throw new Error('Too many attempts. Please wait before trying again.');
@@ -240,6 +241,17 @@
pinContainer.style.pointerEvents = 'none';
}
});
document.addEventListener('DOMContentLoaded', function() {
// Rewrite asset URLs to use BASE_URL as prefix if not absolute
const baseUrl = window.BASE_URL;
document.querySelectorAll('link[rel="stylesheet"], link[rel="icon"]').forEach(link => {
const href = link.getAttribute('href');
if (href && !href.startsWith('http') && !href.startsWith('data:') && !href.startsWith(baseUrl)) {
link.setAttribute('href', baseUrl + href.replace(/^\//, ''));
}
});
});
</script>
</body>
</html>
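For illustration, with a hypothetical sub-path deployment the DOMContentLoaded rewrite above resolves asset hrefs like this (the BASE_URL value is made up):

    // Hypothetical: window.BASE_URL = 'https://example.com/drop/'
    const baseUrl = 'https://example.com/drop/';
    console.log(baseUrl + 'styles.css'.replace(/^\//, ''));       // https://example.com/drop/styles.css
    console.log(baseUrl + '/assets/icon.svg'.replace(/^\//, '')); // https://example.com/drop/assets/icon.svg
    // hrefs that start with 'http', 'data:', or baseUrl itself are left untouched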

View File

@@ -9,6 +9,7 @@ const cors = require('cors');
const cookieParser = require('cookie-parser');
const path = require('path');
const fs = require('fs');
const fsPromises = require('fs').promises;
const { config, validateConfig } = require('./config');
const logger = require('./utils/logger');
@@ -53,6 +54,10 @@ app.get('/', (req, res) => {
let html = fs.readFileSync(path.join(__dirname, '../public', 'index.html'), 'utf8');
html = html.replace(/{{SITE_TITLE}}/g, config.siteTitle);
html = html.replace('{{AUTO_UPLOAD}}', config.autoUpload.toString());
html = html.replace('{{MAX_RETRIES}}', config.clientMaxRetries.toString());
// Ensure baseUrl has a trailing slash for correct asset linking
const baseUrlWithSlash = config.baseUrl.endsWith('/') ? config.baseUrl : config.baseUrl + '/';
html = html.replace(/{{BASE_URL}}/g, baseUrlWithSlash);
html = injectDemoBanner(html);
res.send(html);
});
@@ -66,6 +71,9 @@ app.get('/login.html', (req, res) => {
let html = fs.readFileSync(path.join(__dirname, '../public', 'login.html'), 'utf8');
html = html.replace(/{{SITE_TITLE}}/g, config.siteTitle);
// Ensure baseUrl has a trailing slash
const baseUrlWithSlash = config.baseUrl.endsWith('/') ? config.baseUrl : config.baseUrl + '/';
html = html.replace(/{{BASE_URL}}/g, baseUrlWithSlash);
html = injectDemoBanner(html);
res.send(html);
});
@@ -80,9 +88,13 @@ app.use((req, res, next) => {
const filePath = path.join(__dirname, '../public', req.path);
let html = fs.readFileSync(filePath, 'utf8');
html = html.replace(/{{SITE_TITLE}}/g, config.siteTitle);
if (req.path === '/index.html' || req.path === 'index.html') {
html = html.replace('{{AUTO_UPLOAD}}', config.autoUpload.toString());
html = html.replace('{{MAX_RETRIES}}', config.clientMaxRetries.toString());
}
// Ensure baseUrl has a trailing slash
const baseUrlWithSlash = config.baseUrl.endsWith('/') ? config.baseUrl : config.baseUrl + '/';
html = html.replace(/{{BASE_URL}}/g, baseUrlWithSlash);
html = injectDemoBanner(html);
res.send(html);
} catch (err) {
@@ -102,6 +114,10 @@ app.use((err, req, res, next) => { // eslint-disable-line no-unused-vars
});
});
// --- Add this after config is loaded ---
const METADATA_DIR = path.join(config.uploadDir, '.metadata');
// --- End addition ---
/**
* Initialize the application
* Sets up required directories and validates configuration
@@ -113,6 +129,25 @@ async function initialize() {
// Ensure upload directory exists and is writable
await ensureDirectoryExists(config.uploadDir);
// --- Add this section ---
// Ensure metadata directory exists
try {
if (!fs.existsSync(METADATA_DIR)) {
await fsPromises.mkdir(METADATA_DIR, { recursive: true });
logger.info(`Created metadata directory: ${METADATA_DIR}`);
} else {
logger.info(`Metadata directory exists: ${METADATA_DIR}`);
}
// Check writability (optional but good practice)
await fsPromises.access(METADATA_DIR, fs.constants.W_OK);
logger.success(`Metadata directory is writable: ${METADATA_DIR}`);
} catch (err) {
logger.error(`Metadata directory error (${METADATA_DIR}): ${err.message}`);
// Treat this as fatal: resumability depends on the metadata directory.
throw new Error(`Failed to access or create metadata directory: ${METADATA_DIR}`);
}
// --- End added section ---
// Log configuration
logger.info(`Maximum file size set to: ${config.maxFileSize / (1024 * 1024)}MB`);

View File

@@ -1,48 +1,151 @@
require('dotenv').config();
console.log('Loaded ENV:', {
PORT: process.env.PORT,
UPLOAD_DIR: process.env.UPLOAD_DIR,
LOCAL_UPLOAD_DIR: process.env.LOCAL_UPLOAD_DIR,
NODE_ENV: process.env.NODE_ENV
});
const { validatePin } = require('../utils/security');
const logger = require('../utils/logger');
const fs = require('fs');
const path = require('path');
const { version } = require('../../package.json'); // Get version from package.json
/**
* Environment Variables Reference
*
* PORT - Port for the server (default: 3000)
* NODE_ENV - Node environment (default: 'development')
* BASE_URL - Base URL for the app (default: http://localhost:${PORT})
* UPLOAD_DIR - Directory for uploads (Docker/production)
* LOCAL_UPLOAD_DIR - Directory for uploads (local dev, fallback: './local_uploads')
* MAX_FILE_SIZE - Max upload size in MB (default: 1024)
* AUTO_UPLOAD - Enable auto-upload (true/false, default: false)
* DUMBDROP_PIN - Security PIN for uploads (required for protected endpoints)
* DUMBDROP_TITLE - Site title (default: 'DumbDrop')
* APPRISE_URL - Apprise notification URL (optional)
* APPRISE_MESSAGE - Notification message template (default provided)
* APPRISE_SIZE_UNIT - Size unit for notifications (optional)
* ALLOWED_EXTENSIONS - Comma-separated list of allowed file extensions (optional)
* ALLOWED_IFRAME_ORIGINS - Comma-separated list of allowed iframe origins (optional)
*/
// Helper for clear configuration logging
const logConfig = (message, level = 'info') => {
const prefix = level === 'warning' ? '⚠️ WARNING:' : ' INFO:';
console.log(`${prefix} CONFIGURATION: ${message}`);
};
// Default configurations
const DEFAULT_PORT = 3000;
const DEFAULT_CHUNK_SIZE = 1024 * 1024 * 100; // 100MB
const DEFAULT_SITE_TITLE = 'DumbDrop';
const DEFAULT_BASE_URL = 'http://localhost:3000';
const DEFAULT_CLIENT_MAX_RETRIES = 5; // Default retry count
const logAndReturn = (key, value, isDefault = false) => {
logConfig(`${key}: ${value}${isDefault ? ' (default)' : ''}`);
return value;
};
/**
* Determine the upload directory based on environment variables.
* Priority:
* 1. UPLOAD_DIR (for Docker/production)
* 2. LOCAL_UPLOAD_DIR (for local development)
* 3. './local_uploads' (default fallback)
* @returns {string} The upload directory path
*/
function determineUploadDirectory() {
let uploadDir;
if (process.env.UPLOAD_DIR) {
uploadDir = process.env.UPLOAD_DIR;
logConfig(`Upload directory set from UPLOAD_DIR: ${uploadDir}`);
} else if (process.env.LOCAL_UPLOAD_DIR) {
uploadDir = process.env.LOCAL_UPLOAD_DIR;
logConfig(`Upload directory using LOCAL_UPLOAD_DIR fallback: ${uploadDir}`, 'warning');
} else {
uploadDir = './local_uploads';
logConfig(`Upload directory using default fallback: ${uploadDir}`, 'warning');
}
logConfig(`Final upload directory path: ${path.resolve(uploadDir)}`);
return uploadDir;
}
/**
* Utility to detect if running in local development mode
* Returns true if NODE_ENV is not 'production' and UPLOAD_DIR is not set (i.e., not Docker)
*/
function isLocalDevelopment() {
return process.env.NODE_ENV !== 'production' && !process.env.UPLOAD_DIR;
}
/**
* Ensure the upload directory exists (for local development only)
* Creates the directory if it does not exist
*/
function ensureLocalUploadDirExists(uploadDir) {
if (!isLocalDevelopment()) return;
try {
if (!fs.existsSync(uploadDir)) {
fs.mkdirSync(uploadDir, { recursive: true });
logConfig(`Created local upload directory: ${uploadDir}`);
} else {
logConfig(`Local upload directory exists: ${uploadDir}`);
}
} catch (err) {
logConfig(`Failed to create local upload directory: ${uploadDir}. Error: ${err.message}`, 'warning');
}
}
// Determine and ensure upload directory (for local dev)
const resolvedUploadDir = determineUploadDirectory();
ensureLocalUploadDirExists(resolvedUploadDir);
/**
* Application configuration
* Loads and validates environment variables
*/
const config = {
// =====================
// Server settings
// =====================
/**
* Port for the server (default: 3000)
* Set via PORT in .env
*/
port: process.env.PORT || DEFAULT_PORT,
/**
* Node environment (default: 'development')
* Set via NODE_ENV in .env
*/
nodeEnv: process.env.NODE_ENV || 'development',
/**
* Base URL for the app (default: http://localhost:${PORT})
* Set via BASE_URL in .env
*/
baseUrl: process.env.BASE_URL || DEFAULT_BASE_URL,
// =====================
// Upload settings
// =====================
/**
* Directory for uploads
* Priority: UPLOAD_DIR (Docker/production) > LOCAL_UPLOAD_DIR (local dev) > './local_uploads' (fallback)
*/
uploadDir: resolvedUploadDir,
/**
* Max upload size in bytes (default: 1024MB)
* Set via MAX_FILE_SIZE in .env (in MB)
*/
maxFileSize: (() => {
const sizeInMB = parseInt(process.env.MAX_FILE_SIZE || '1024', 10);
if (isNaN(sizeInMB) || sizeInMB <= 0) {
@@ -50,31 +153,94 @@ const config = {
}
return sizeInMB * 1024 * 1024; // Convert MB to bytes
})(),
/**
* Enable auto-upload (true/false, default: false)
* Set via AUTO_UPLOAD in .env
*/
autoUpload: process.env.AUTO_UPLOAD === 'true',
// =====================
// Security
// =====================
/**
* Security PIN for uploads (required for protected endpoints)
* Set via DUMBDROP_PIN in .env
*/
pin: validatePin(process.env.DUMBDROP_PIN),
// =====================
// UI settings
// =====================
/**
* Site title (default: 'DumbDrop')
* Set via DUMBDROP_TITLE in .env
*/
siteTitle: process.env.DUMBDROP_TITLE || DEFAULT_SITE_TITLE,
// =====================
// Notification settings
// =====================
/**
* Apprise notification URL (optional)
* Set via APPRISE_URL in .env
*/
appriseUrl: process.env.APPRISE_URL,
/**
* Notification message template (default provided)
* Set via APPRISE_MESSAGE in .env
*/
appriseMessage: process.env.APPRISE_MESSAGE || 'New file uploaded - {filename} ({size}), Storage used {storage}',
/**
* Size unit for notifications (optional)
* Set via APPRISE_SIZE_UNIT in .env
*/
appriseSizeUnit: process.env.APPRISE_SIZE_UNIT,
// =====================
// File extensions
// =====================
/**
* Allowed file extensions (comma-separated, optional)
* Set via ALLOWED_EXTENSIONS in .env
*/
allowedExtensions: process.env.ALLOWED_EXTENSIONS ?
process.env.ALLOWED_EXTENSIONS.split(',').map(ext => ext.trim().toLowerCase()) :
null,
// Allowed iframe origins (for embedding in iframes)
// Comma-separated list of origins, e.g. "https://organizr.example.com,https://dumb.myportal.com"
allowedIframeOrigins: process.env.ALLOWED_IFRAME_ORIGINS
? process.env.ALLOWED_IFRAME_ORIGINS.split(',').map(origin => origin.trim()).filter(Boolean)
: null,
/**
* Max number of retries for client-side chunk uploads (default: 5)
* Set via CLIENT_MAX_RETRIES in .env
*/
clientMaxRetries: (() => {
const envValue = process.env.CLIENT_MAX_RETRIES;
const defaultValue = DEFAULT_CLIENT_MAX_RETRIES;
if (envValue === undefined) {
return logAndReturn('CLIENT_MAX_RETRIES', defaultValue, true);
}
const retries = parseInt(envValue, 10);
if (isNaN(retries) || retries < 0) {
logConfig(
`Invalid CLIENT_MAX_RETRIES value: "${envValue}". Using default: ${defaultValue}`,
'warning',
);
return logAndReturn('CLIENT_MAX_RETRIES', defaultValue, true);
}
return logAndReturn('CLIENT_MAX_RETRIES', retries);
})(),
uploadPin: logAndReturn('UPLOAD_PIN', process.env.UPLOAD_PIN || null),
};
console.log(`Upload directory configured as: ${config.uploadDir}`);
// Validate required settings
function validateConfig() {
const errors = [];
@@ -85,7 +251,12 @@ function validateConfig() {
// Validate BASE_URL format
try {
new URL(config.baseUrl); // Throws if BASE_URL is not a valid URL
// Ensure BASE_URL ends with a slash
if (!config.baseUrl.endsWith('/')) {
logger.warn('BASE_URL did not end with a trailing slash. Automatically appending "/".');
config.baseUrl = config.baseUrl + '/';
}
} catch (err) {
errors.push('BASE_URL must be a valid URL');
}
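For reference, a minimal illustrative .env assembled from the variables documented and read above (all values are placeholders, not recommendations; CLIENT_MAX_RETRIES and UPLOAD_PIN are read by the config object even though the reference comment omits them):

    # Illustrative .env sketch - placeholder values only
    PORT=3000
    NODE_ENV=development
    BASE_URL=http://localhost:3000/
    LOCAL_UPLOAD_DIR=./local_uploads
    MAX_FILE_SIZE=1024
    AUTO_UPLOAD=false
    DUMBDROP_TITLE=DumbDrop
    DUMBDROP_PIN=123456
    CLIENT_MAX_RETRIES=5
    ALLOWED_EXTENSIONS=.jpg,.png,.pdf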

View File

@@ -15,6 +15,7 @@ function securityHeaders(req, res, next) {
// Content Security Policy
let csp =
"default-src 'self'; " +
"connect-src 'self'; " +
"style-src 'self' 'unsafe-inline' cdn.jsdelivr.net; " +
"script-src 'self' 'unsafe-inline' cdn.jsdelivr.net; " +
"img-src 'self' data: blob:;";

View File

@@ -1,413 +1,456 @@
/**
* File upload route handlers and batch upload management.
* Handles file uploads, chunked transfers, and folder creation.
* Manages upload sessions using persistent metadata for resumability.
*/
const express = require('express');
const router = express.Router();
const crypto = require('crypto');
const path = require('path');
const fs = require('fs').promises; // Use promise-based fs
const fsSync = require('fs'); // For sync checks like existsSync
const { config } = require('../config');
const logger = require('../utils/logger');
const { getUniqueFilePath, getUniqueFolderPath, sanitizeFilename, sanitizePathPreserveDirs, isValidBatchId } = require('../utils/fileUtils');
const { sendNotification } = require('../services/notifications');
const { isDemoMode } = require('../utils/demoMode');
// --- Persistence Setup ---
const METADATA_DIR = path.join(config.uploadDir, '.metadata');
// --- In-Memory Maps (Still useful for session-level data) ---
// Store folder name mappings for batch uploads (avoids FS lookups during session)
const folderMappings = new Map();
// Store batch activity timestamps (for cleaning up stale batches/folder mappings)
const batchActivity = new Map();
const BATCH_TIMEOUT = 30 * 60 * 1000; // 30 minutes for batch/folderMapping cleanup
// --- Helper Functions for Metadata ---
async function readUploadMetadata(uploadId) {
if (!uploadId || typeof uploadId !== 'string' || uploadId.includes('..')) {
logger.warn(`Attempted to read metadata with invalid uploadId: ${uploadId}`);
return null;
}
const metaFilePath = path.join(METADATA_DIR, `${uploadId}.meta`);
try {
const data = await fs.readFile(metaFilePath, 'utf8');
return JSON.parse(data);
} catch (err) {
if (err.code === 'ENOENT') {
return null; // Metadata file doesn't exist - normal case for new/finished uploads
}
logger.error(`Error reading metadata for ${uploadId}: ${err.message}`);
throw err; // Rethrow other errors
}
}
async function writeUploadMetadata(uploadId, metadata) {
if (!uploadId || typeof uploadId !== 'string' || uploadId.includes('..')) {
logger.error(`Attempted to write metadata with invalid uploadId: ${uploadId}`);
return; // Prevent writing
}
const metaFilePath = path.join(METADATA_DIR, `${uploadId}.meta`);
metadata.lastActivity = Date.now(); // Update timestamp on every write
// Write atomically (write to a temp file, then rename) for more safety.
// tempMetaPath is declared outside the try so the catch block can clean it up.
const tempMetaPath = `${metaFilePath}.${crypto.randomBytes(4).toString('hex')}.tmp`;
try {
await fs.writeFile(tempMetaPath, JSON.stringify(metadata, null, 2));
await fs.rename(tempMetaPath, metaFilePath);
} catch (err) {
logger.error(`Error writing metadata for ${uploadId}: ${err.message}`);
// Attempt to clean up the temp file if the write or rename failed
try { await fs.unlink(tempMetaPath); } catch (unlinkErr) { /* ignore */ }
throw err;
}
}
async function deleteUploadMetadata(uploadId) {
if (!uploadId || typeof uploadId !== 'string' || uploadId.includes('..')) {
logger.warn(`Attempted to delete metadata with invalid uploadId: ${uploadId}`);
return;
}
const metaFilePath = path.join(METADATA_DIR, `${uploadId}.meta`);
try {
await fs.unlink(metaFilePath);
logger.debug(`Deleted metadata file for upload: ${uploadId}.meta`);
} catch (err) {
if (err.code !== 'ENOENT') { // Ignore if already deleted
logger.error(`Error deleting metadata file ${uploadId}.meta: ${err.message}`);
}
}
}
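// For clarity, an illustrative .metadata/<uploadId>.meta file as maintained by the
// helpers above. The field set mirrors the metadata object created in /init below;
// every value here is made up:
// {
//   "uploadId": "9f1c2d3e4a5b6c7d8e9f0a1b2c3d4e5f",
//   "originalFilename": "photos/cat.jpg",
//   "filePath": "/app/uploads/photos/cat.jpg",
//   "partialFilePath": "/app/uploads/photos/cat.jpg.partial",
//   "fileSize": 1048576,
//   "bytesReceived": 524288,
//   "batchId": "1714771200000-a1b2c3d4e",
//   "createdAt": 1714771200000,
//   "lastActivity": 1714771201000
// }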
// --- Batch Cleanup (Focuses on batchActivity map, not primary upload state) ---
let batchCleanupInterval;
function startBatchCleanup() {
if (batchCleanupInterval) clearInterval(batchCleanupInterval);
batchCleanupInterval = setInterval(() => {
const now = Date.now();
logger.info(`Running batch cleanup, checking ${batchActivity.size} active batch sessions`);
let cleanedCount = 0;
for (const [batchId, lastActivity] of batchActivity.entries()) {
if (now - lastActivity >= BATCH_TIMEOUT) {
logger.info(`Cleaning up inactive batch session: ${batchId}`);
batchActivity.delete(batchId);
// Clean up associated folder mappings for this batch
for (const key of folderMappings.keys()) {
if (key.endsWith(`-${batchId}`)) {
folderMappings.delete(key);
}
}
cleanedCount++;
}
}
if (cleanedCount > 0) logger.info(`Cleaned up ${cleanedCount} inactive batch sessions.`);
}, 5 * 60 * 1000); // Check every 5 minutes
batchCleanupInterval.unref(); // Allow process to exit if this is the only timer
return batchCleanupInterval;
}
/**
* Stop the batch cleanup interval
*/
function stopBatchCleanup() {
if (batchCleanupInterval) {
clearInterval(batchCleanupInterval);
batchCleanupInterval = null;
}
}
// Start cleanup interval unless disabled
if (!process.env.DISABLE_BATCH_CLEANUP) {
startBatchCleanup();
}
// --- Routes ---
// Initialize upload
router.post('/init', async (req, res) => {
// DEMO MODE CHECK - Bypass persistence if in demo mode
if (isDemoMode()) {
const { filename, fileSize } = req.body;
const uploadId = 'demo-' + crypto.randomBytes(16).toString('hex');
logger.info(`[DEMO] Initialized upload for ${filename} (${fileSize} bytes) with ID ${uploadId}`);
// Simulate zero-byte completion for demo
if (Number(fileSize) === 0) {
logger.success(`[DEMO] Completed zero-byte file upload: ${filename}`);
sendNotification(filename, 0, config); // Still send notification if configured
}
return res.json({ uploadId });
}
const { filename, fileSize } = req.body;
const clientBatchId = req.headers['x-batch-id'];
// --- Basic validations ---
if (!filename) return res.status(400).json({ error: 'Missing filename' });
if (fileSize === undefined || fileSize === null) return res.status(400).json({ error: 'Missing fileSize' });
const size = Number(fileSize);
if (isNaN(size) || size < 0) return res.status(400).json({ error: 'Invalid file size' });
const maxSizeInBytes = config.maxFileSize;
if (size > maxSizeInBytes) return res.status(413).json({ error: 'File too large', limit: maxSizeInBytes });
const batchId = clientBatchId || `${Date.now()}-${crypto.randomBytes(5).toString('hex').substring(0, 9)}`; // 5 random bytes -> 10 hex chars, trimmed to the 9 required by isValidBatchId
if (clientBatchId && !isValidBatchId(batchId)) return res.status(400).json({ error: 'Invalid batch ID format' });
batchActivity.set(batchId, Date.now()); // Track batch session activity
try {
// --- Path handling and Sanitization ---
const sanitizedFilename = sanitizePathPreserveDirs(filename);
const safeFilename = path.normalize(sanitizedFilename)
.replace(/^(\.\.(\/|\\|$))+/, '')
.replace(/\\/g, '/')
.replace(/^\/+/, '');
logger.info(`Upload init request for: ${safeFilename}`);
// --- Extension Check ---
if (config.allowedExtensions) {
const fileExt = path.extname(safeFilename).toLowerCase();
if (fileExt && !config.allowedExtensions.includes(fileExt)) {
logger.warn(`File type not allowed: ${safeFilename} (Extension: ${fileExt})`);
return res.status(400).json({ error: 'File type not allowed', receivedExtension: fileExt });
}
}
// --- Determine Paths & Handle Folders ---
const uploadId = crypto.randomBytes(16).toString('hex');
let finalFilePath = path.join(config.uploadDir, safeFilename);
const pathParts = safeFilename.split('/').filter(Boolean);
if (pathParts.length > 1) {
const originalFolderName = pathParts[0];
let newFolderName = folderMappings.get(`${originalFolderName}-${batchId}`);
const baseFolderPath = path.join(config.uploadDir, newFolderName || originalFolderName);
if (!newFolderName) {
await fs.mkdir(path.dirname(baseFolderPath), { recursive: true });
try {
await fs.mkdir(baseFolderPath, { recursive: false });
newFolderName = originalFolderName;
} catch (err) {
if (err.code === 'EEXIST') {
const uniqueFolderPath = await getUniqueFolderPath(baseFolderPath);
newFolderName = path.basename(uniqueFolderPath);
logger.info(`Folder "${originalFolderName}" exists or conflict, using unique "${newFolderName}" for batch ${batchId}`);
await fs.mkdir(path.join(config.uploadDir, newFolderName), { recursive: true });
} else {
throw err;
}
}
folderMappings.set(`${originalFolderName}-${batchId}`, newFolderName);
}
pathParts[0] = newFolderName;
finalFilePath = path.join(config.uploadDir, ...pathParts);
await fs.mkdir(path.dirname(finalFilePath), { recursive: true });
} else {
await fs.mkdir(config.uploadDir, { recursive: true }); // Ensure base upload dir exists
}
// --- Check Final Path Collision & Get Unique Name if Needed ---
let checkPath = finalFilePath;
let counter = 1;
while (fsSync.existsSync(checkPath)) {
logger.warn(`Final destination file already exists: ${checkPath}. Generating unique name.`);
const dir = path.dirname(finalFilePath);
const ext = path.extname(finalFilePath);
const baseName = path.basename(finalFilePath, ext);
checkPath = path.join(dir, `${baseName} (${counter})${ext}`);
counter++;
}
if (checkPath !== finalFilePath) {
logger.info(`Using unique final path: ${checkPath}`);
finalFilePath = checkPath;
// If path changed, ensure directory exists (might be needed if baseName contained '/')
await fs.mkdir(path.dirname(finalFilePath), { recursive: true });
}
const partialFilePath = finalFilePath + '.partial';
// --- Create and Persist Metadata ---
const metadata = {
uploadId,
originalFilename: safeFilename, // Store the path as received by client
filePath: finalFilePath, // The final, possibly unique, path
partialFilePath,
fileSize: size,
bytesReceived: 0,
batchId,
createdAt: Date.now(),
lastActivity: Date.now()
};
await writeUploadMetadata(uploadId, metadata);
logger.info(`Initialized persistent upload: ${uploadId} for ${safeFilename} -> ${finalFilePath}`);
// --- Handle Zero-Byte Files --- // (Important: Handle *after* metadata potentially exists)
if (size === 0) {
try {
await fs.writeFile(finalFilePath, ''); // Create the empty file
logger.success(`Completed zero-byte file upload: ${metadata.originalFilename} as ${finalFilePath}`);
await deleteUploadMetadata(uploadId); // Clean up metadata since it's done
sendNotification(metadata.originalFilename, 0, config);
} catch (writeErr) {
logger.error(`Failed to create zero-byte file ${finalFilePath}: ${writeErr.message}`);
await deleteUploadMetadata(uploadId).catch(() => {}); // Attempt cleanup on error
throw writeErr; // Let the main catch block handle it
}
}
res.json({ uploadId });
} catch (err) {
logger.error(`Upload initialization failed: ${err.message} ${err.stack}`);
return res.status(500).json({ error: 'Failed to initialize upload', details: err.message });
}
});
// Upload chunk
router.post('/chunk/:uploadId', express.raw({
limit: config.maxFileSize + (10 * 1024 * 1024), // Generous limit for raw body
type: 'application/octet-stream'
}), async (req, res) => {
// DEMO MODE CHECK
if (isDemoMode()) {
const { uploadId } = req.params;
logger.debug(`[DEMO] Received chunk for ${uploadId}`);
// Fake progress - requires knowing file size which isn't easily available here in demo
const demoProgress = Math.min(100, Math.random() * 100); // Placeholder
return res.json({ bytesReceived: 0, progress: demoProgress });
}
const { uploadId } = req.params;
let chunk = req.body;
let chunkSize = chunk.length;
const clientBatchId = req.headers['x-batch-id']; // Logged but not used directly here
if (!chunkSize) return res.status(400).json({ error: 'Empty chunk received' });
let metadata;
let fileHandle;
try {
metadata = await readUploadMetadata(uploadId);
if (!metadata) {
logger.warn(`Upload metadata not found for chunk request: ${uploadId}. Client Batch ID: ${clientBatchId || 'none'}. Upload may be complete or cancelled.`);
// A fallback could check whether the final file already exists, but the final
// path cannot be reliably reconstructed from the uploadId alone, so report the
// session as gone and let the client's 404 handling (see index.html) decide.
return res.status(404).json({ error: 'Upload session not found or already completed' });
}
// Update batch activity using metadata's batchId
if (metadata.batchId && isValidBatchId(metadata.batchId)) {
batchActivity.set(metadata.batchId, Date.now());
}
// --- Sanity Checks & Idempotency ---
if (metadata.bytesReceived >= metadata.fileSize) {
logger.warn(`Received chunk for already completed upload ${uploadId} (${metadata.originalFilename}). Finalizing again if needed.`);
// Ensure finalization if possible, then return success
try {
await fs.access(metadata.filePath); // Check if final file exists
logger.info(`Upload ${uploadId} already finalized at ${metadata.filePath}.`);
} catch (accessErr) {
// Final file doesn't exist, attempt rename
try {
await fs.rename(metadata.partialFilePath, metadata.filePath);
logger.info(`Finalized ${uploadId} on redundant chunk request (renamed ${metadata.partialFilePath} -> ${metadata.filePath}).`);
} catch (renameErr) {
if (renameErr.code === 'ENOENT') {
logger.warn(`Partial file ${metadata.partialFilePath} missing during redundant chunk finalization for ${uploadId}.`);
} else {
logger.error(`Error finalizing ${uploadId} on redundant chunk: ${renameErr.message}`);
}
}
}
// Regardless of rename outcome, delete metadata if it still exists
await deleteUploadMetadata(uploadId);
return res.json({ bytesReceived: metadata.fileSize, progress: 100 });
}
// Prevent writing beyond expected file size (simple protection)
if (metadata.bytesReceived + chunkSize > metadata.fileSize) {
logger.warn(`Chunk for ${uploadId} exceeds expected file size. Received ${metadata.bytesReceived + chunkSize}, expected ${metadata.fileSize}. Truncating chunk.`);
const bytesToWrite = metadata.fileSize - metadata.bytesReceived;
chunk = chunk.slice(0, bytesToWrite);
chunkSize = chunk.length;
if (chunkSize <= 0) { // If we already have exactly the right amount
logger.info(`Upload ${uploadId} already has expected bytes. Skipping write, proceeding to finalize.`);
// Skip write, proceed to finalization check below
metadata.bytesReceived = metadata.fileSize; // Ensure state is correct for finalization
} else {
logger.info(`Truncated chunk for ${uploadId} to ${chunkSize} bytes.`);
}
}
// --- Write Chunk (Append Mode) --- // Only write if chunk has size after potential truncation
if (chunkSize > 0) {
fileHandle = await fs.open(metadata.partialFilePath, 'a');
const writeResult = await fileHandle.write(chunk);
await fileHandle.close(); // Close immediately
if (writeResult.bytesWritten !== chunkSize) {
// This indicates a partial write, which is problematic.
logger.error(`Partial write for chunk ${uploadId}! Expected ${chunkSize}, wrote ${writeResult.bytesWritten}. Disk full?`);
// How to recover? Maybe revert bytesReceived? For now, throw.
throw new Error(`Failed to write full chunk for ${uploadId}`);
}
metadata.bytesReceived += writeResult.bytesWritten;
}
// --- Update State --- (bytesReceived updated above or set if truncated to zero)
const progress = metadata.fileSize === 0 ? 100 :
Math.min( Math.round((metadata.bytesReceived / metadata.fileSize) * 100), 100);
logger.debug(`Chunk written for ${uploadId}: ${metadata.bytesReceived}/${metadata.fileSize} (${progress}%)`);
// --- Persist Updated Metadata (Before potential finalization) ---
await writeUploadMetadata(uploadId, metadata);
// --- Check for Completion --- // Now happens after metadata update
if (metadata.bytesReceived >= metadata.fileSize) {
logger.info(`Upload ${uploadId} (${metadata.originalFilename}) completed ${metadata.bytesReceived} bytes.`);
try {
await fs.rename(metadata.partialFilePath, metadata.filePath);
logger.success(`Upload completed and finalized: ${metadata.originalFilename} as ${metadata.filePath} (${metadata.fileSize} bytes)`);
await deleteUploadMetadata(uploadId); // Clean up metadata file AFTER successful rename
sendNotification(metadata.originalFilename, metadata.fileSize, config);
} catch (renameErr) {
if (renameErr.code === 'ENOENT') {
logger.warn(`Partial file ${metadata.partialFilePath} not found during finalization for ${uploadId}. Assuming already finalized elsewhere.`);
// Attempt to delete metadata anyway if partial is gone
await deleteUploadMetadata(uploadId).catch(() => {});
} else {
logger.error(`CRITICAL: Failed to rename partial file ${metadata.partialFilePath} to ${metadata.filePath}: ${renameErr.message}`);
// Keep metadata and partial file for manual recovery.
// Return success to client as data is likely there, but log server issue.
}
}
}
res.json({ bytesReceived: metadata.bytesReceived, progress });
} catch (err) {
// Ensure file handle is closed on error
if (fileHandle) {
await fileHandle.close().catch(closeErr => logger.error(`Error closing file handle for ${uploadId} after error: ${closeErr.message}`));
}
logger.error(`Chunk upload failed for ${uploadId}: ${err.message} ${err.stack}`);
// Don't delete metadata on generic chunk errors, let client retry or cleanup handle stale files
res.status(500).json({ error: 'Failed to process chunk', details: err.message });
}
});
// Cancel upload
router.post('/cancel/:uploadId', async (req, res) => {
// DEMO MODE CHECK
if (isDemoMode()) {
logger.info(`[DEMO] Upload cancelled: ${req.params.uploadId}`);
return res.json({ message: 'Upload cancelled (Demo)' });
}
const { uploadId } = req.params;
logger.info(`Received cancel request for upload: ${uploadId}`);
try {
const metadata = await readUploadMetadata(uploadId);
if (metadata) {
// Delete partial file first
try {
await fs.unlink(metadata.partialFilePath);
logger.info(`Deleted partial file on cancellation: ${metadata.partialFilePath}`);
} catch (unlinkErr) {
if (unlinkErr.code !== 'ENOENT') { // Ignore if already gone
logger.error(`Failed to delete partial file ${metadata.partialFilePath} on cancel: ${unlinkErr.message}`);
}
}
// Then delete metadata file
await deleteUploadMetadata(uploadId);
logger.info(`Upload cancelled and cleaned up: ${uploadId} (${metadata.originalFilename})`);
} else {
logger.warn(`Cancel request for non-existent or already completed upload: ${uploadId}`);
}
res.json({ message: 'Upload cancelled or already complete' });
} catch (err) {
logger.error(`Error during upload cancellation for ${uploadId}: ${err.message}`);
res.status(500).json({ error: 'Failed to cancel upload' });
}
});
module.exports = {
router,
startBatchCleanup,
stopBatchCleanup,
batchActivity,
BATCH_TIMEOUT,
// Export for testing if required
readUploadMetadata,
writeUploadMetadata,
deleteUploadMetadata
};
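Taken together, the routes above implement a simple resumable protocol: /init writes a .meta record and returns an uploadId, each /chunk append advances bytesReceived in that record, and the last chunk triggers the .partial-to-final rename. A minimal client sketch of that flow (illustrative only; it omits the PIN auth, BASE_URL prefixing, batch headers, and retry logic shown earlier):

    // Illustrative client for the /init -> /chunk flow; 1 MB chunks are an arbitrary choice
    async function uploadFile(file) {
      const initRes = await fetch('/api/upload/init', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ filename: file.name, fileSize: file.size })
      });
      const { uploadId } = await initRes.json();
      const CHUNK = 1024 * 1024;
      for (let offset = 0; offset < file.size; offset += CHUNK) {
        const body = await file.slice(offset, offset + CHUNK).arrayBuffer();
        const res = await fetch(`/api/upload/chunk/${uploadId}`, {
          method: 'POST',
          headers: { 'Content-Type': 'application/octet-stream' },
          body
        });
        const { progress } = await res.json();
        console.log(`${file.name}: ${progress}%`); // reaches 100 once the server renames the .partial
      }
      return uploadId;
    }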

View File

@@ -53,13 +53,16 @@ async function startServer() {
});
// Shutdown handler function
let isShuttingDown = false; // Prevent multiple shutdowns
const shutdownHandler = async (signal) => {
if (isShuttingDown) return;
isShuttingDown = true;
logger.info(`${signal} received. Shutting down gracefully...`);
// Start a shorter force shutdown timer
const forceShutdownTimer = setTimeout(() => {
logger.error('Force shutdown initiated');
process.exit(1);
}, 3000); // 3 seconds maximum for total shutdown
try {
@@ -92,9 +95,10 @@ async function startServer() {
// Clear the force shutdown timer since we completed gracefully
clearTimeout(forceShutdownTimer);
process.exit(0); // Ensure immediate exit
} catch (error) {
logger.error(`Error during shutdown: ${error.message}`);
process.exit(1);
}
};
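Presumably the handler is registered for the usual termination signals later in startServer(); a sketch of that wiring (assumed, not shown in this hunk):

    // Assumed registration of the shutdownHandler defined above
    process.on('SIGTERM', () => shutdownHandler('SIGTERM'));
    process.on('SIGINT', () => shutdownHandler('SIGINT'));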

View File

@@ -4,23 +4,22 @@
* Provides cleanup task registration and execution system.
*/
const fs = require('fs').promises;
const path = require('path');
const logger = require('./logger');
const { config } = require('../config');
const METADATA_DIR = path.join(config.uploadDir, '.metadata');
const UPLOAD_TIMEOUT = config.uploadTimeout || 30 * 60 * 1000; // Use a config or default (e.g., 30 mins)
let cleanupTasks = [];
/**
* Register a cleanup task to be executed during shutdown
* @param {Function} task - Async function to be executed during cleanup
*/
function registerCleanupTask(task) {
cleanupTasks.push(task);
}
/**
@@ -28,7 +27,7 @@ function registerCleanupTask(task) {
* @param {Function} task - Task to remove
*/
function removeCleanupTask(task) {
cleanupTasks = cleanupTasks.filter((t) => t !== task);
}
/**
@@ -37,7 +36,7 @@ function removeCleanupTask(task) {
* @returns {Promise<void>}
*/
async function executeCleanup(timeout = 1000) {
const taskCount = cleanupTasks.length;
if (taskCount === 0) {
logger.info('No cleanup tasks to execute');
return;
@@ -49,7 +48,7 @@ async function executeCleanup(timeout = 1000) {
// Run all cleanup tasks in parallel with timeout
await Promise.race([
Promise.all(
cleanupTasks.map(async (task) => {
try {
await Promise.race([
task(),
@@ -80,7 +79,7 @@ async function executeCleanup(timeout = 1000) {
}
} finally {
// Clear all tasks regardless of success/failure
cleanupTasks = [];
}
}
@@ -113,7 +112,7 @@ async function cleanupIncompleteUploads(uploads, uploadToBatch, batchActivity) {
// Delete incomplete file
try {
await fs.unlink(upload.filePath);
logger.info(`Cleaned up incomplete upload: ${upload.safeFilename}`);
} catch (err) {
if (err.code !== 'ENOENT') {
@@ -138,31 +137,173 @@ async function cleanupIncompleteUploads(uploads, uploadToBatch, batchActivity) {
}
}
/**
* Clean up stale/incomplete uploads based on metadata files.
*/
async function cleanupIncompleteMetadataUploads() {
logger.info('Running cleanup for stale metadata/partial uploads...');
let cleanedCount = 0;
let checkedCount = 0;
try {
// Ensure metadata directory exists before trying to read it
try {
await fs.access(METADATA_DIR);
} catch (accessErr) {
if (accessErr.code === 'ENOENT') {
logger.info('Metadata directory does not exist, skipping metadata cleanup.');
return;
}
throw accessErr; // Rethrow other access errors
}
const files = await fs.readdir(METADATA_DIR);
const now = Date.now();
for (const file of files) {
if (file.endsWith('.meta')) {
checkedCount++;
const uploadId = file.replace('.meta', '');
const metaFilePath = path.join(METADATA_DIR, file);
let metadata;
try {
const data = await fs.readFile(metaFilePath, 'utf8');
metadata = JSON.parse(data);
// Check inactivity based on lastActivity timestamp in metadata
if (now - (metadata.lastActivity || metadata.createdAt || 0) > UPLOAD_TIMEOUT) {
logger.warn(`Found stale upload metadata: ${file}. Last activity: ${new Date(metadata.lastActivity || metadata.createdAt)}`);
// Attempt to delete partial file
if (metadata.partialFilePath) {
try {
await fs.unlink(metadata.partialFilePath);
logger.info(`Deleted stale partial file: ${metadata.partialFilePath}`);
} catch (unlinkPartialErr) {
if (unlinkPartialErr.code !== 'ENOENT') { // Ignore if already gone
logger.error(`Failed to delete stale partial file ${metadata.partialFilePath}: ${unlinkPartialErr.message}`);
}
}
}
// Attempt to delete metadata file
try {
await fs.unlink(metaFilePath);
logger.info(`Deleted stale metadata file: ${file}`);
cleanedCount++;
} catch (unlinkMetaErr) {
logger.error(`Failed to delete stale metadata file ${metaFilePath}: ${unlinkMetaErr.message}`);
}
}
} catch (readErr) {
logger.error(`Error reading or parsing metadata file ${metaFilePath} during cleanup: ${readErr.message}. Skipping.`);
// Optionally attempt to delete the corrupt meta file?
// await fs.unlink(metaFilePath).catch(()=>{});
}
} else if (file.endsWith('.tmp')) {
// Clean up potential leftover temp metadata files
const tempMetaPath = path.join(METADATA_DIR, file);
try {
const stats = await fs.stat(tempMetaPath);
if (now - stats.mtime.getTime() > UPLOAD_TIMEOUT) { // If temp file is also old
logger.warn(`Deleting stale temporary metadata file: ${file}`);
await fs.unlink(tempMetaPath);
}
} catch (statErr) {
if (statErr.code !== 'ENOENT') { // Ignore if already gone
logger.error(`Error checking temporary metadata file ${tempMetaPath}: ${statErr.message}`);
}
}
}
}
if (checkedCount > 0 || cleanedCount > 0) {
logger.info(`Metadata cleanup finished. Checked: ${checkedCount}, Cleaned stale: ${cleanedCount}.`);
}
} catch (err) {
// Handle errors reading the METADATA_DIR itself
if (err.code === 'ENOENT') {
logger.info('Metadata directory not found during cleanup scan.'); // Should have been created on init
} else {
logger.error(`Error during metadata cleanup scan: ${err.message}`);
}
}
// Also run empty folder cleanup
await cleanupEmptyFolders(config.uploadDir);
}
// Schedule the new cleanup function
const METADATA_CLEANUP_INTERVAL = 15 * 60 * 1000; // e.g., every 15 minutes
let metadataCleanupTimer = setInterval(cleanupIncompleteMetadataUploads, METADATA_CLEANUP_INTERVAL);
metadataCleanupTimer.unref(); // Allow process to exit if this is the only timer
process.on('SIGTERM', () => clearInterval(metadataCleanupTimer));
process.on('SIGINT', () => clearInterval(metadataCleanupTimer));
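// Net effect of the scheduling above: every 15 minutes, any .meta file whose
// lastActivity (or createdAt) is older than UPLOAD_TIMEOUT - config.uploadTimeout
// if set, otherwise 30 minutes - is deleted together with its .partial file.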
/**
* Recursively remove empty folders
* @param {string} dir - Directory to clean
*/
async function cleanupEmptyFolders(dir) {
try {
// Avoid trying to clean the special .metadata directory itself
if (path.basename(dir) === '.metadata') {
logger.debug(`Skipping cleanup of metadata directory: ${dir}`);
return;
}
const files = await fs.readdir(dir);
for (const file of files) {
const fullPath = path.join(dir, file);
// Skip the metadata directory during traversal
if (path.basename(fullPath) === '.metadata') {
logger.debug(`Skipping traversal into metadata directory: ${fullPath}`);
continue;
}
let stats;
try {
stats = await fs.stat(fullPath);
} catch (statErr) {
if (statErr.code === 'ENOENT') continue; // File might have been deleted concurrently
throw statErr;
}
if (stats.isDirectory()) {
await cleanupEmptyFolders(fullPath);
// Check if directory is empty after cleaning subdirectories
let remaining = [];
try {
remaining = await fs.readdir(fullPath);
} catch (readErr) {
if (readErr.code === 'ENOENT') continue; // Directory was deleted
throw readErr;
}
if (remaining.length === 0) {
// Make sure we don't delete the main upload dir
if (fullPath !== path.resolve(config.uploadDir)) {
try {
await fs.rmdir(fullPath);
logger.info(`Removed empty directory: ${fullPath}`);
} catch (rmErr) {
if (rmErr.code !== 'ENOENT') { // Ignore if already deleted
logger.error(`Failed to remove supposedly empty directory ${fullPath}: ${rmErr.message}`);
}
}
}
}
}
}
} catch (err) {
if (err.code !== 'ENOENT') { // Ignore if dir was already deleted
logger.error(`Failed to clean empty folders in ${dir}: ${err.message}`);
}
}
}
@@ -171,5 +312,6 @@ module.exports = {
removeCleanupTask,
executeCleanup,
cleanupIncompleteUploads,
cleanupIncompleteMetadataUploads,
cleanupEmptyFolders
};

View File

@@ -9,19 +9,6 @@ const path = require('path');
const logger = require('./logger');
const { config } = require('../config');
/**
* Format file size to human readable format
* @param {number} bytes - Size in bytes
@@ -90,13 +77,13 @@ async function ensureDirectoryExists(directoryPath) {
try {
if (!fs.existsSync(directoryPath)) {
await fs.promises.mkdir(directoryPath, { recursive: true });
logger.info(`Created directory: ${directoryPath}`);
}
await fs.promises.access(directoryPath, fs.constants.W_OK);
logger.success(`Directory is writable: ${directoryPath}`);
} catch (err) {
logger.error(`Directory error: ${err.message}`);
throw new Error(`Failed to access or create directory: ${directoryPath}`);
}
}
@@ -129,8 +116,8 @@ async function getUniqueFilePath(filePath) {
}
}
// Log using actual path
logger.info(`Using unique path: ${finalPath}`);
return { path: finalPath, handle: fileHandle };
}
@@ -173,6 +160,16 @@ function sanitizePathPreserveDirs(filePath) {
.join('/');
}
/**
* Validate batch ID format
* @param {string} batchId - Batch ID to validate
* @returns {boolean} True if valid (matches timestamp-9_alphanumeric format)
*/
function isValidBatchId(batchId) {
if (!batchId) return false;
return /^\d+-[a-z0-9]{9}$/.test(batchId);
}
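// Illustrative checks against the regex above (IDs are made up):
// isValidBatchId('1714771200000-a1b2c3d4e') === true  (epoch ms + '-' + 9 chars of [a-z0-9])
// isValidBatchId('1714771200000-ABC123DEF') === false (uppercase rejected)
// isValidBatchId('not-a-batch-id')          === false (must start with digits)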
module.exports = {
formatFileSize,
calculateDirectorySize,
@@ -180,5 +177,6 @@ module.exports = {
getUniqueFilePath,
getUniqueFolderPath,
sanitizeFilename,
sanitizePathPreserveDirs,
isValidBatchId
};