Mirror of https://github.com/kyantech/Palmr.git (synced 2025-11-02 04:53:26 +00:00)

Compare commits: v3.1.3-bet...feat--chan (112 commits)
Commits in this comparison:

- 3dbd5b81ae
- 1806fbee39
- 5d8c80812b
- 7118d87e47
- 6742ca314e
- c0a7970330
- d0d5d012f0
- 965ef244f3
- 18700d7e72
- 25b1a62d5f
- 7617a14f1b
- cb4ed3f581
- 148676513d
- 42a5b7a796
- 59fccd9a93
- 91a5a24c8b
- ff83364870
- df31b325f6
- cce9847242
- 39dc94b7f8
- ab5ea156a3
- 4ff1eb28d9
- 17080e4465
- c798c1bb1d
- 0d7f9ca2b3
- f78ecab2ed
- fcc877738f
- 92722692f9
- 95ac0f195b
- d6c9b0d7d2
- 59f9e19ffb
- 6086d2a0ac
- 6b979a22fb
- e4bae380c9
- 3117904009
- b078e94189
- bd4212b44c
- b699bffb5b
- a755c5324f
- 9072e7e866
- d3d1057ba8
- 24eda85fdc
- d49d15ac9b
- e7b2062764
- f21f972825
- 4f4e4a079e
- 6fbb9aa9da
- 0e610d002c
- abd8366e94
- 5d8c243125
- d23af700da
- 494161eb47
- 6a9728be4b
- 5e889956c7
- 5afc6ea271
- cc368377c2
- 51764be7d4
- fe598b4a30
- 80286e57d9
- 9f36a48d15
- 94286e8452
- 0ce2d6a998
- 9e15fd7d2e
- 736348ebe8
- ddb981cba2
- 724452fb40
- a2ac6a6268
- aecda25b25
- 0f22b0bb23
- edf6d70d69
- a2ecd2e221
- 2f022cae5d
- bb3669f5b2
- 87fd8caf2c
- e8087a7c01
- 4075a7df29
- c081b6f764
- ecaa6d0321
- e7ae7833ad
- 22f34f6f81
- 29efe0a10e
- 965c64b468
- ce57cda672
- a59857079e
- 9ae2a0c628
- f2c514cd82
- 6755230c53
- f2a0e60f20
- 6cb21e95c4
- 868add68a5
- 307148d951
- 9cb4235550
- 6014b3e961
- 32f0a891ba
- 124ac46eeb
- d3e76c19bf
- dd1ce189ae
- 82e43b06c6
- aab4e6d9df
- 1f097678ce
- 96cb4a04ec
- b7c4b37e89
- 952cf27ecb
- 765810e4e5
- 36d09a7679
- c6d6648942
- 54ca7580b0
- 4e53d239bb
- 6491894f0e
- 93e05dd913
- 2efe69e50b
- 761865a6a3
259 .github/copilot-instructions.md vendored Normal file

@@ -0,0 +1,259 @@
# GitHub Copilot Instructions for Palmr

This file contains instructions for GitHub Copilot to help contributors work effectively with the Palmr codebase.

## Project Overview

Palmr is a flexible and open-source alternative to file transfer services like WeTransfer and SendGB. It's built with:

- **Backend**: Fastify (Node.js) with TypeScript, SQLite database, and filesystem/S3 storage
- **Frontend**: Next.js 15 + React + TypeScript + Shadcn/ui
- **Documentation**: Next.js + Fumadocs + MDX
- **Package Manager**: pnpm (v10.6.0)
- **Monorepo Structure**: Three main apps (web, server, docs) in the `apps/` directory

## Architecture and Structure

### Monorepo Layout

```
apps/
├── docs/   # Documentation site (Next.js + Fumadocs)
├── server/ # Backend API (Fastify + TypeScript)
└── web/    # Frontend application (Next.js 15)
```

### Key Technologies

- **TypeScript**: Primary language for all applications
- **Database**: Prisma ORM with SQLite (optional S3-compatible storage)
- **Authentication**: Multiple OAuth providers (Google, GitHub, Discord, etc.)
- **Internationalization**: Multi-language support with translation scripts
- **Validation**: Husky pre-push hooks for linting and type checking

## Development Workflow

### Base Branch

Always create new branches from and submit PRs to the `next` branch, not `main`.

### Commit Convention

Use Conventional Commits format for all commits:

```
<type>(<scope>): <description>

Types:
- feat: New feature
- fix: Bug fix
- docs: Documentation changes
- test: Adding or updating tests
- refactor: Code refactoring
- style: Code formatting
- chore: Maintenance tasks
```

Examples:

- `feat(web): add user authentication system`
- `fix(api): resolve null pointer exception in user service`
- `docs: update installation instructions in README`
- `test(server): add unit tests for user validation`

### Code Quality Standards

1. **Linting**: All apps use ESLint. Run `pnpm lint` before committing
2. **Formatting**: Use Prettier for code formatting. Run `pnpm format`
3. **Type Checking**: Run `pnpm type-check` to validate TypeScript
4. **Validation**: Run `pnpm validate` to run both linting and type checking
5. **Pre-push Hook**: Automatically validates all apps before pushing

### Testing Changes

- Test incrementally during development
- Run validation locally before pushing: `pnpm validate` in each app directory
- Keep changes focused on a single issue or feature
- Review your work before committing

## Application-Specific Guidelines

### Web App (`apps/web/`)

- Framework: Next.js 15 with App Router
- Port: 3000 (development)
- Scripts:
  - `pnpm dev`: Start development server
  - `pnpm build`: Build for production
  - `pnpm validate`: Run linting and type checking
- Translations: Use Python scripts in `scripts/` directory
  - `pnpm translations:check`: Check translation status
  - `pnpm translations:sync`: Synchronize translations

### Server App (`apps/server/`)

- Framework: Fastify with TypeScript
- Port: 3333 (default)
- Scripts:
  - `pnpm dev`: Start development server with watch mode
  - `pnpm build`: Build TypeScript to JavaScript
  - `pnpm validate`: Run linting and type checking
  - `pnpm db:seed`: Seed database
- Database: Prisma ORM with SQLite

### Docs App (`apps/docs/`)

- Framework: Next.js with Fumadocs
- Port: 3001 (development)
- Content: MDX files in `content/docs/` (see the frontmatter sketch below)
- Scripts:
  - `pnpm dev`: Start development server
  - `pnpm build`: Build documentation site
  - `pnpm validate`: Run linting and type checking
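
As a quick illustration, a new docs page starts with Fumadocs-style frontmatter; the title below is a placeholder, and the icon value mirrors one used elsewhere in the Palmr docs:

```mdx
---
title: My New Guide
icon: "Rocket"
---

Write the page body in MDX below the frontmatter.
```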

## Code Style and Best Practices

### General Guidelines

1. **Follow Style Guidelines**: Ensure code adheres to ESLint and Prettier configurations
2. **TypeScript First**: Always use TypeScript, avoid `any` types when possible
3. **Component Organization**: Keep components focused and single-purpose
4. **Error Handling**: Implement proper error handling and logging
5. **Comments**: Add comments only when necessary to explain complex logic
6. **Imports**: Use absolute imports where configured, keep imports organized

### API Development (Server)

- Use Fastify's schema validation for all routes (see the sketch after this list)
- Follow REST principles for endpoint design
- Implement proper authentication and authorization
- Handle errors gracefully with appropriate status codes
- Document API endpoints in the docs app
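
As a rough sketch of the first point, a schema-validated Fastify route could look like the following; the route path, fields, and handler are illustrative and not taken from Palmr's actual API:

```typescript
import { randomUUID } from "node:crypto";
import Fastify from "fastify";

const app = Fastify();

app.post(
  "/files", // hypothetical endpoint for illustration
  {
    schema: {
      // Fastify validates the body against this JSON Schema before the
      // handler runs; invalid requests get a 400 response automatically.
      body: {
        type: "object",
        required: ["name"],
        properties: {
          name: { type: "string", minLength: 1 },
          description: { type: "string" },
        },
      },
      response: {
        201: {
          type: "object",
          properties: { id: { type: "string" }, name: { type: "string" } },
        },
      },
    },
  },
  async (request, reply) => {
    const { name } = request.body as { name: string };
    return reply.code(201).send({ id: randomUUID(), name });
  }
);
```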

### Frontend Development (Web)

- Use React Server Components where appropriate
- Implement proper loading and error states (see the sketch after this list)
- Follow accessibility best practices (WCAG guidelines)
- Optimize performance (lazy loading, code splitting)
- Use Shadcn/ui components for consistent UI
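
For the loading/error point, a minimal client component might look like this sketch; the component name and the `/api/files` endpoint are invented for illustration:

```tsx
"use client";

import { useEffect, useState } from "react";

// Hypothetical component; "/api/files" is a placeholder endpoint.
export function FileList() {
  const [files, setFiles] = useState<string[] | null>(null);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    fetch("/api/files")
      .then((res) => (res.ok ? res.json() : Promise.reject(new Error(res.statusText))))
      .then(setFiles)
      .catch((err: Error) => setError(err.message));
  }, []);

  if (error) return <p role="alert">Failed to load files: {error}</p>; // error state
  if (!files) return <p>Loading files…</p>; // loading state

  return (
    <ul>
      {files.map((name) => (
        <li key={name}>{name}</li>
      ))}
    </ul>
  );
}
```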

### Documentation

- Write clear, concise documentation
- Include code examples where helpful
- Update documentation when changing functionality
- Use MDX features for interactive documentation
- Follow the existing documentation structure

## Translation and Internationalization

- All user-facing strings should be translatable
- Use the Next.js internationalization system
- Translation files are in `apps/web/messages/`
- Reference file: `en-US.json`
- Run `pnpm translations:check` to verify translations
- Mark untranslated strings with `[TO_TRANSLATE]` prefix (example below)
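
For example, a partially translated locale file might contain entries like these; the key names are invented for illustration, and only the `[TO_TRANSLATE]` convention comes from the guidelines above:

```json
{
  "uploadButton": "Enviar arquivo",
  "uploadSuccess": "[TO_TRANSLATE] File uploaded successfully"
}
```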

## Common Patterns

### Authentication Providers

- Provider configurations in `apps/server/src/modules/auth-providers/providers.config.ts` (see the sketch after this list)
- Support for OAuth2 and OIDC protocols
- Field mappings for user data normalization
- Special handling for providers like GitHub that require additional API calls
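
To make the shape of these points concrete, a provider entry with field mappings might look roughly like the sketch below; the interface and values are assumptions for illustration, not the actual contents of `providers.config.ts`:

```typescript
// Illustrative only; the real providers.config.ts may use different names and fields.
interface ProviderConfig {
  issuerUrl?: string; // OIDC issuer (endpoints resolved via discovery)
  authorizationEndpoint?: string; // explicit OAuth2 endpoints when there is no discovery
  tokenEndpoint?: string;
  userInfoEndpoint?: string;
  scopes: string[];
  // Maps provider-specific user fields onto a normalized internal user shape.
  fieldMappings: { id: string; email: string; name: string };
}

const providers: Record<string, ProviderConfig> = {
  google: {
    issuerUrl: "https://accounts.google.com",
    scopes: ["openid", "email", "profile"],
    fieldMappings: { id: "sub", email: "email", name: "name" },
  },
  github: {
    authorizationEndpoint: "https://github.com/login/oauth/authorize",
    tokenEndpoint: "https://github.com/login/oauth/access_token",
    userInfoEndpoint: "https://api.github.com/user",
    scopes: ["read:user", "user:email"],
    // GitHub may not return an email here, hence the extra API call mentioned above.
    fieldMappings: { id: "id", email: "email", name: "name" },
  },
};
```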

### File Storage

- Default: Filesystem storage
- Optional: S3-compatible object storage
- File metadata stored in SQLite database

### Environment Variables

- Configure via `.env` files (not committed to repository)
- Required variables documented in README or docs
- Use environment-specific configurations (example below)
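
A minimal `.env` sketch, using variable names that appear in Palmr's Docker documentation (all values are placeholders):

```bash
# .env - placeholders only; never commit real secrets
ENABLE_S3=false
ENCRYPTION_KEY=change-this-key-in-production-min-32-chars
SECURE_SITE=false
DEFAULT_LANGUAGE=en-US
DATABASE_URL="file:/app/server/prisma/palmr.db"
```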

## Contributing Guidelines

### Pull Request Process

1. Fork the repository
2. Create a branch from `next`: `git checkout -b feature/your-feature upstream/next`
3. Make focused changes addressing a single issue/feature
4. Write or update tests as needed
5. Update documentation to reflect changes
6. Ensure all validations pass: `pnpm validate` in each app
7. Commit using Conventional Commits
8. Push to your fork
9. Create Pull Request targeting the `next` branch

### Code Review

- Be responsive to feedback
- Keep discussions constructive and professional
- Make requested changes promptly
- Ask questions if requirements are unclear

### What to Avoid

- Don't mix unrelated changes in a single PR
- Don't skip linting or type checking
- Don't commit directly to `main` or `next` branches
- Don't add unnecessary dependencies
- Don't ignore existing code style and patterns
- Don't remove or modify tests without good reason

## Helpful Commands

### Root Level

```bash
pnpm install                      # Install all dependencies
git config core.hooksPath .husky  # Configure Git hooks
```

### Per App (web/server/docs)

```bash
pnpm dev          # Start development server
pnpm build        # Build for production
pnpm lint         # Run ESLint
pnpm lint:fix     # Fix ESLint issues automatically
pnpm format       # Format code with Prettier
pnpm format:check # Check code formatting
pnpm type-check   # Run TypeScript type checking
pnpm validate     # Run lint + type-check
```

### Docker

```bash
docker-compose up   # Start all services
docker-compose down # Stop all services
```

## Resources

- **Documentation**: [https://palmr.kyantech.com.br](https://palmr.kyantech.com.br)
- **Contributing Guide**: [CONTRIBUTING.md](../CONTRIBUTING.md)
- **Issue Tracker**: GitHub Issues
- **License**: Apache-2.0

## Getting Help

- Review existing documentation in `apps/docs/content/docs/`
- Check contribution guide in `CONTRIBUTING.md`
- Review existing code for patterns and examples
- Ask questions in PR discussions or issues
- Read error messages and logs carefully

## Important Notes

- **Beta Status**: This project is in beta; expect changes and improvements
- **Focus on Quality**: Prioritize code quality and maintainability over speed
- **Test Locally**: Always test your changes locally before submitting
- **Documentation Matters**: Keep documentation synchronized with code
- **Community First**: Be respectful, patient, and constructive with all contributors

4 .gitignore vendored

@@ -30,6 +30,8 @@ apps/server/dist/*
 #DEFAULT
 .env
+.steering
+data/

 node_modules/
 screenshots/

67 Dockerfile

@@ -1,15 +1,25 @@
-FROM node:20-alpine AS base
+FROM node:24-alpine AS base

 # Install system dependencies
 RUN apk add --no-cache \
     gcompat \
     supervisor \
     curl \
     wget \
     openssl \
     su-exec

 # Enable pnpm
 RUN corepack enable pnpm

+# Install storage system for S3-compatible storage
+COPY infra/install-minio.sh /tmp/install-minio.sh
+RUN chmod +x /tmp/install-minio.sh && /tmp/install-minio.sh
+
+# Install storage client (mc)
+RUN wget https://dl.min.io/client/mc/release/linux-amd64/mc -O /usr/local/bin/mc && \
+    chmod +x /usr/local/bin/mc

 # Set working directory
 WORKDIR /app

@@ -82,7 +92,7 @@ RUN addgroup --system --gid ${PALMR_GID} nodejs
 RUN adduser --system --uid ${PALMR_UID} --ingroup nodejs palmr

 # Create application directories
-RUN mkdir -p /app/palmr-app /app/web /home/palmr/.npm /home/palmr/.cache
+RUN mkdir -p /app/palmr-app /app/web /app/infra /home/palmr/.npm /home/palmr/.cache
 RUN chown -R palmr:nodejs /app /home/palmr

 # === Copy Server Files to /app/palmr-app (separate from /app/server for bind mounts) ===

@@ -117,10 +127,16 @@ WORKDIR /app
 # Create supervisor configuration
 RUN mkdir -p /etc/supervisor/conf.d

-# Copy server start script
+# Copy server start script and configuration files
 COPY infra/server-start.sh /app/server-start.sh
-RUN chmod +x /app/server-start.sh
-RUN chown palmr:nodejs /app/server-start.sh
+COPY infra/start-minio.sh /app/start-minio.sh
+COPY infra/minio-setup.sh /app/minio-setup.sh
+COPY infra/load-minio-credentials.sh /app/load-minio-credentials.sh
+COPY infra/configs.json /app/infra/configs.json
+COPY infra/providers.json /app/infra/providers.json
+COPY infra/check-missing.js /app/infra/check-missing.js
+RUN chmod +x /app/server-start.sh /app/start-minio.sh /app/minio-setup.sh /app/load-minio-credentials.sh
+RUN chown -R palmr:nodejs /app/server-start.sh /app/start-minio.sh /app/minio-setup.sh /app/load-minio-credentials.sh /app/infra

 # Copy supervisor configuration
 COPY infra/supervisord.conf /etc/supervisor/conf.d/supervisord.conf

@@ -133,7 +149,7 @@ set -e
 echo "Starting Palmr Application..."
 echo "Storage Mode: \${ENABLE_S3:-false}"
 echo "Secure Site: \${SECURE_SITE:-false}"
-echo "Encryption: \${DISABLE_FILESYSTEM_ENCRYPTION:-false}"
+echo "Encryption: \${DISABLE_FILESYSTEM_ENCRYPTION:-true}"
 echo "Database: SQLite"

 # Set global environment variables

@@ -141,9 +157,42 @@ export DATABASE_URL="file:/app/server/prisma/palmr.db"
 export NEXT_PUBLIC_DEFAULT_LANGUAGE=\${DEFAULT_LANGUAGE:-en-US}

 # Ensure /app/server directory exists for bind mounts
-mkdir -p /app/server/uploads /app/server/temp-uploads /app/server/prisma
+mkdir -p /app/server/uploads /app/server/temp-uploads /app/server/prisma /app/server/minio-data

-echo "Data directories ready for first run..."
+# CRITICAL: Fix permissions BEFORE starting any services
+# This runs on EVERY startup to handle updates and corrupted metadata
+echo "🔐 Fixing permissions for internal storage..."
+
+# DYNAMIC: Detect palmr user's actual UID and GID
+# Works with any Docker --user configuration
+PALMR_UID=\$(id -u palmr 2>/dev/null || echo "1001")
+PALMR_GID=\$(id -g palmr 2>/dev/null || echo "1001")
+echo "  Target user: palmr (UID:\$PALMR_UID, GID:\$PALMR_GID)"
+
+# ALWAYS remove storage system metadata to prevent corruption issues
+# This is safe - storage system recreates it automatically
+# User data (files) are NOT in .minio.sys, they're safe
+if [ -d "/app/server/minio-data/.minio.sys" ]; then
+  echo "  🧹 Cleaning storage system metadata (safe, auto-regenerated)..."
+  rm -rf /app/server/minio-data/.minio.sys 2>/dev/null || true
+fi
+
+# Fix ownership and permissions (safe for updates)
+echo "  🔧 Setting ownership and permissions..."
+chown -R \$PALMR_UID:\$PALMR_GID /app/server 2>/dev/null || echo "  ⚠️ chown skipped"
+chmod -R 755 /app/server 2>/dev/null || echo "  ⚠️ chmod skipped"
+
+# Verify critical directories are writable
+if touch /app/server/.test-write 2>/dev/null; then
+  rm -f /app/server/.test-write
+  echo "  ✅ Storage directory is writable"
+else
+  echo "  ❌ FATAL: /app/server is NOT writable!"
+  echo "  Check Docker volume permissions"
+  ls -la /app/server 2>/dev/null || true
+fi
+
+echo "✅ Storage ready, starting services..."

 # Start supervisor
 exec /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf

@@ -155,7 +204,7 @@ RUN chmod +x /app/start.sh
 VOLUME ["/app/server"]

 # Expose ports
-EXPOSE 3333 5487
+EXPOSE 3333 5487 9379 9378

 # Health check
 HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \

212 LICENSE

@@ -1,40 +1,190 @@
-Kyantech-Permissive License (Based on BSD 2-Clause)
-
-Copyright (c) 2025, Daniel Luiz Alves (danielalves96) - Kyantech Solutions
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted for any purpose — private, commercial,
-educational, governmental — **fully free and unrestricted**, provided
-that the following conditions are met:
-
-1. Redistributions of source code must retain the above copyright
-   notice, this list of conditions, and the following disclaimer.
-
-2. Redistributions in binary form must reproduce the above copyright
-   notice, this list of conditions, and the following disclaimer in the
-   documentation and/or other materials provided with the distribution.
-
-3. **If this software (or derivative works) is used in any public-facing
-   interface** — such as websites, apps, dashboards, admin panels, or
-   similar — a **simple credit** must appear in the footer or similar
-   location. The credit text should read:
-
-   > "Powered by Kyantech Solutions · https://kyantech.com.br"
-
-This credit must be reasonably visible but **must not interfere** with
-your UI, branding, or user experience. You may style it to match your
-own design and choose its size, placement, or color.
-
----
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
-THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
-FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+Apache License
+Version 2.0, January 2004
+http://www.apache.org/licenses/
+
+TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+1. Definitions.
+
+"License" shall mean the terms and conditions for use, reproduction,
+and distribution as defined by Sections 1 through 9 of this document.
+
+"Licensor" shall mean the copyright owner or entity authorized by
+the copyright owner that is granting the License.
+
+"Legal Entity" shall mean the union of the acting entity and all
+other entities that control, are controlled by, or are under common
+control with that entity. For the purposes of this definition,
+"control" means (i) the power, direct or indirect, to cause the
+direction or management of such entity, whether by contract or
+otherwise, or (ii) ownership of fifty percent (50%) or more of the
+outstanding shares, or (iii) beneficial ownership of such entity.
+
+"You" (or "Your") shall mean an individual or Legal Entity
+exercising permissions granted by this License.
+
+"Source" form shall mean the preferred form for making modifications,
+including but not limited to software source code, documentation
+source, and configuration files.
+
+"Object" form shall mean any form resulting from mechanical
+transformation or translation of a Source form, including but
+not limited to compiled object code, generated documentation,
+and conversions to other media types.
+
+"Work" shall mean the work of authorship, whether in Source or
+Object form, made available under the License, as indicated by a
+copyright notice that is included in or attached to the work
+(an example is provided in the Appendix below).
+
+"Derivative Works" shall mean any work, whether in Source or Object
+form, that is based on (or derived from) the Work and for which the
+editorial revisions, annotations, elaborations, or other modifications
+represent, as a whole, an original work of authorship. For the purposes
+of this License, Derivative Works shall not include works that remain
+separable from, or merely link (or bind by name) to the interfaces of,
+the Work and Derivative Works thereof.
+
+"Contribution" shall mean any work of authorship, including
+the original version of the Work and any modifications or additions
+to that Work or Derivative Works thereof, that is intentionally
+submitted to Licensor for inclusion in the Work by the copyright owner
+or by an individual or Legal Entity authorized to submit on behalf of
+the copyright owner. For the purposes of this definition, "submitted"
+means any form of electronic, verbal, or written communication sent
+to the Licensor or its representatives, including but not limited to
+communication on electronic mailing lists, source code control systems,
+and issue tracking systems that are managed by, or on behalf of, the
+Licensor for the purpose of discussing and improving the Work, but
+excluding communication that is conspicuously marked or otherwise
+designated in writing by the copyright owner as "Not a Contribution."
+
+"Contributor" shall mean Licensor and any individual or Legal Entity
+on behalf of whom a Contribution has been received by Licensor and
+subsequently incorporated within the Work.
+
+2. Grant of Copyright License. Subject to the terms and conditions of
+this License, each Contributor hereby grants to You a perpetual,
+worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+copyright license to reproduce, prepare Derivative Works of,
+publicly display, publicly perform, sublicense, and distribute the
+Work and such Derivative Works in Source or Object form.
+
+3. Grant of Patent License. Subject to the terms and conditions of
+this License, each Contributor hereby grants to You a perpetual,
+worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+(except as stated in this section) patent license to make, have made,
+use, offer to sell, sell, import, and otherwise transfer the Work,
+where such license applies only to those patent claims licensable
+by such Contributor that are necessarily infringed by their
+Contribution(s) alone or by combination of their Contribution(s)
+with the Work to which such Contribution(s) was submitted. If You
+institute patent litigation against any entity (including a
+cross-claim or counterclaim in a lawsuit) alleging that the Work
+or a Contribution incorporated within the Work constitutes direct
+or contributory patent infringement, then any patent licenses
+granted to You under this License for that Work shall terminate
+as of the date such litigation is filed.
+
+4. Redistribution. You may reproduce and distribute copies of the
+Work or Derivative Works thereof in any medium, with or without
+modifications, and in Source or Object form, provided that You
+meet the following conditions:
+
+(a) You must give any other recipients of the Work or
+Derivative Works a copy of this License; and
+
+(b) You must cause any modified files to carry prominent notices
+stating that You changed the files; and
+
+(c) You must retain, in the Source form of any Derivative Works
+that You distribute, all copyright, patent, trademark, and
+attribution notices from the Source form of the Work,
+excluding those notices that do not pertain to any part of
+the Derivative Works; and
+
+(d) If the Work includes a "NOTICE" text file as part of its
+distribution, then any Derivative Works that You distribute must
+include a readable copy of the attribution notices contained
+within such NOTICE file, excluding those notices that do not
+pertain to any part of the Derivative Works, in at least one
+of the following places: within a NOTICE text file distributed
+as part of the Derivative Works; within the Source form or
+documentation, if provided along with the Derivative Works; or,
+within a display generated by the Derivative Works, if and
+wherever such third-party notices normally appear. The contents
+of the NOTICE file are for informational purposes only and
+do not modify the License. You may add Your own attribution
+notices within Derivative Works that You distribute, alongside
+or as an addendum to the NOTICE text from the Work, provided
+that such additional attribution notices cannot be construed
+as modifying the License.
+
+You may add Your own copyright statement to Your modifications and
+may provide additional or different license terms and conditions
+for use, reproduction, or distribution of Your modifications, or
+for any such Derivative Works as a whole, provided Your use,
+reproduction, and distribution of the Work otherwise complies with
+the conditions stated in this License.
+
+5. Submission of Contributions. Unless You explicitly state otherwise,
+any Contribution intentionally submitted for inclusion in the Work
+by You to the Licensor shall be under the terms and conditions of
+this License, without any additional terms or conditions.
+Notwithstanding the above, nothing herein shall supersede or modify
+the terms of any separate license agreement you may have executed
+with Licensor regarding such Contributions.
+
+6. Trademarks. This License does not grant permission to use the trade
+names, trademarks, service marks, or product names of the Licensor,
+except as required for reasonable and customary use in describing the
+origin of the Work and reproducing the content of the NOTICE file.
+
+7. Disclaimer of Warranty. Unless required by applicable law or
+agreed to in writing, Licensor provides the Work (and each
+Contributor provides its Contributions) on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+implied, including, without limitation, any warranties or conditions
+of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+PARTICULAR PURPOSE. You are solely responsible for determining the
+appropriateness of using or redistributing the Work and assume any
+risks associated with Your exercise of permissions under this License.
+
+8. Limitation of Liability. In no event and under no legal theory,
+whether in tort (including negligence), contract, or otherwise,
+unless required by applicable law (such as deliberate and grossly
+negligent acts) or agreed to in writing, shall any Contributor be
+liable to You for damages, including any direct, indirect, special,
+incidental, or consequential damages of any character arising as a
+result of this License or out of the use or inability to use the
+Work (including but not limited to damages for loss of goodwill,
+work stoppage, computer failure or malfunction, or any and all
+other commercial damages or losses), even if such Contributor
+has been advised of the possibility of such damages.
+
+9. Accepting Warranty or Additional Liability. While redistributing
+the Work or Derivative Works thereof, You may choose to offer,
+and charge a fee for, acceptance of support, warranty, indemnity,
+or other liability obligations and/or rights consistent with this
+License. However, in accepting such obligations, You may act only
+on Your own behalf and on Your sole responsibility, not on behalf
+of any other Contributor, and only if You agree to indemnify,
+defend, and hold each Contributor harmless for any liability
+incurred by, or claims asserted against, such Contributor by reason
+of your accepting any such warranty or additional liability.
+
+END OF TERMS AND CONDITIONS
+
+Copyright 2025 Daniel Luiz Alves (danielalves96) - Kyantech Solutions, Inc.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.

296 README.md

@@ -1,142 +1,154 @@
 # 🌴 Palmr. - Open-Source File Transfer

 <p align="center">
   <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749825361/Group_47_1_bcx8gw.png" alt="Palmr Banner" style="width: 100%;"/>
 </p>

 **Palmr.** is a **flexible** and **open-source** alternative to file transfer services like **WeTransfer**, **SendGB**, **Send Anywhere**, and **Files.fm**.

+<div align="center">
+  <div style="background: linear-gradient(135deg, #ff4757, #ff3838); padding: 20px; border-radius: 12px; margin: 20px 0; box-shadow: 0 4px 15px rgba(255, 71, 87, 0.3); border: 2px solid #ff3838;">
+    <h3 style="color: white; margin: 0 0 10px 0; font-size: 18px; font-weight: bold;">
+      ⚠️ BETA VERSION
+    </h3>
+    <p style="color: white; margin: 0; font-size: 14px; opacity: 0.95;">
+      <strong>This project is currently in beta phase.</strong><br>
+      Not recommended for production environments.
+    </p>
+  </div>
+</div>

 🔗 **For detailed documentation visit:** [Palmr. - Documentation](https://palmr.kyantech.com.br)

 ## 📌 Why Choose Palmr.?

 - **Self-hosted** – Deploy on your own server or VPS.
 - **Full control** – No third-party dependencies, ensuring privacy and security.
 - **No artificial limits** – Share files without hidden restrictions or fees.
+- **Folder organization** – Create folders to organize and share files.
 - **Simple deployment** – SQLite database and filesystem storage for easy setup.
 - **Scalable storage** – Optional S3-compatible object storage for enterprise needs.

 ## 🚀 Technologies Used

 ### **Palmr.** is built with a focus on **performance**, **scalability**, and **security**.

 <div align="center">
   <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1745548231/Palmr./Captura_de_Tela_2025-04-24_a%CC%80s_23.24.26_kr4hsl.png" style="width: 100%; border-radius: 15px;" />
 </div>

 ### **Backend & API**

 - **Fastify (Node.js)** – High-performance API framework with built-in schema validation.
 - **SQLite** – Lightweight, reliable database with zero-configuration setup.
 - **Filesystem Storage** – Direct file storage with optional S3-compatible object storage.

 ### **Frontend**

 - **NextJS 15 + TypeScript + Shadcn/ui** – Modern and fast web interface.

 ## 🛠️ How It Works

 1. **Web Interface** → Built with Next, React and TypeScript for a seamless user experience.
 2. **Backend API** → Fastify handles requests and manages file operations.
 3. **Database** → SQLite stores metadata and transactional data with zero configuration.
 4. **Storage** → Filesystem storage ensures reliable file storage with optional S3-compatible object storage for scalability.

 ## 📸 Screenshots

 <table>
   <tr>
     <td align="center">
       <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749824929/Login_veq6e7.png" alt="Login Page" style="width: 100%; border-radius: 8px;" />
       <br /><strong>Login Page</strong>
     </td>
     <td align="center">
       <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749824929/Home_lzvfzu.png" alt="Home Page" style="width: 100%; border-radius: 8px;" />
       <br /><strong>Home Page</strong>
     </td>
   </tr>
   <tr>
     <td align="center">
       <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749824928/Dashboard_uycmxb.png" alt="Dashboard" style="width: 100%; border-radius: 8px;" />
       <br /><strong>Dashboard</strong>
     </td>
     <td align="center">
       <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749824929/Profile_wvnlzw.png" alt="Profile Page" style="width: 100%; border-radius: 8px;" />
       <br /><strong>Profile Page</strong>
     </td>
   </tr>
   <tr>
     <td align="center">
       <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749824928/Files_List_ztwr1e.png" alt="Files List View" style="width: 100%; border-radius: 8px;" />
       <br /><strong>Files List View</strong>
     </td>
     <td align="center">
       <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749824928/Files_Cards_pwsh5e.png" alt="Files Card View" style="width: 100%; border-radius: 8px;" />
       <br /><strong>Files Card View</strong>
     </td>
   </tr>
   <tr>
     <td align="center">
       <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749824927/Shares_cgplgw.png" alt="Shares Management" style="width: 100%; border-radius: 8px;" />
       <br /><strong>Shares Management</strong>
     </td>
     <td align="center">
       <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749824928/Reive_Files_uhkeyc.png" alt="Receive Files" style="width: 100%; border-radius: 8px;" />
       <br /><strong>Receive Files</strong>
     </td>
   </tr>
   <tr>
     <td align="center">
       <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749824927/Default_Reverse_xedmhw.png" alt="Reverse Share" style="width: 100%; border-radius: 8px;" />
       <br /><strong>Reverse Share</strong>
     </td>
     <td align="center">
       <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749824928/Settings_oampxr.png" alt="Settings Panel" style="width: 100%; border-radius: 8px;" />
       <br /><strong>Settings Panel</strong>
     </td>
   </tr>
   <tr>
     <td align="center">
       <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749824928/User_Management_xjbfhn.png" alt="User Management" style="width: 100%; border-radius: 8px;" />
       <br /><strong>User Management</strong>
     </td>
     <td align="center">
       <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749824928/Forgot_Password_jcz9ad.png" alt="Forgot Password" style="width: 100%; border-radius: 8px;" />
       <br /><strong>Forgot Password</strong>
     </td>
   </tr>
   <tr>
     <td align="center">
       <img src="https://res.cloudinary.com/technical-intelligence/image/upload/v1749824928/WeTransfer_Reverse_u0g7eb.png" alt="Reverse Share (WeTransfer Style)" style="width: 100%; border-radius: 8px;" />
       <br /><strong>Reverse Share (WeTransfer Style)</strong>
     </td>
   </tr>
 </table>

 ## 👨‍💻 Core Maintainers

 | [**Daniel Luiz Alves**](https://github.com/danielalves96) |
 |------------------|
 | <img src="https://github.com/danielalves96.png" width="150px" alt="Daniel Luiz Alves" /> |

 <br />

 ## 🤝 Supporters

 [<img src="https://i.ibb.co/nMN40STL/Repoflow.png" width="200px" alt="RepoFlow" />](https://www.repoflow.io/)

 ## ⭐ Star History

 <a href="https://www.star-history.com/#kyantech/Palmr&Date">
   <picture>
     <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=kyantech/Palmr&type=Date&theme=dark" />
     <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=kyantech/Palmr&type=Date" />
     <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=kyantech/Palmr&type=Date" />
   </picture>
 </a>

 ## 🛠️ Contributing

 For contribution guidelines, please refer to the [CONTRIBUTING.md](CONTRIBUTING.md) file.

@@ -1,291 +0,0 @@
---
title: Quick Start (Docker)
icon: "Rocket"
---

Welcome to the fastest way to deploy <span className="font-bold">Palmr.</span> - your secure, self-hosted file sharing solution. This guide will have you up and running in minutes, whether you're new to self-hosting or an experienced developer.

Palmr. offers flexible deployment options to match your infrastructure needs. This guide focuses on Docker deployment with our recommended filesystem storage, perfect for most use cases.

## Prerequisites

Ensure you have the following installed on your system:

- **Docker** - Container runtime ([installation guide](https://docs.docker.com/get-docker/))
- **Docker Compose** - Multi-container orchestration ([installation guide](https://docs.docker.com/compose/install/))

> **Platform Support**: Palmr. is developed on macOS and extensively tested on Linux servers. While we haven't formally tested other platforms, Docker's cross-platform nature should ensure compatibility. Report any issues on our [GitHub repository](https://github.com/kyantech/Palmr/issues).

## Storage Options

Palmr. supports two storage approaches for persistent data:

### Named Volumes (Recommended)

**Best for**: Production environments, automated deployments

- ✅ **Managed by Docker**: No permission issues or manual path management
- ✅ **Optimized Performance**: Docker-native storage optimization
- ✅ **Cross-platform**: Consistent behavior across operating systems
- ✅ **Simplified Backups**: Docker volume commands for backup/restore

### Bind Mounts

**Best for**: Development, direct file access requirements

- ✅ **Direct Access**: Files stored in a local directory you specify
- ✅ **Transparent Storage**: Direct filesystem access from the host
- ✅ **Custom Backup**: Use existing file system backup solutions
- ⚠️ **Permission Considerations**: **Common Issue** - Requires UID/GID configuration (see troubleshooting below)

---

## Option 1: Named Volumes (Recommended)

Named volumes provide the best performance and are managed entirely by Docker.

### Configuration

Use the provided `docker-compose.yaml` for named volumes:

```yaml
services:
  palmr:
    image: kyantech/palmr:latest
    container_name: palmr
    environment:
      - ENABLE_S3=false
      - ENCRYPTION_KEY=change-this-key-in-production-min-32-chars # CHANGE THIS KEY FOR SECURITY
      # - DISABLE_FILESYSTEM_ENCRYPTION=false # Set to true to disable file encryption (ENCRYPTION_KEY becomes optional)
      # - SECURE_SITE=false # Set to true if you are using a reverse proxy
      # - DEFAULT_LANGUAGE=en-US # Default language for the application (optional, defaults to en-US)
    ports:
      - "5487:5487" # Web interface
      - "3333:3333" # API port (OPTIONAL EXPOSED - ONLY IF YOU WANT TO ACCESS THE API DIRECTLY)
    volumes:
      - palmr_data:/app/server # Named volume for the application data
    restart: unless-stopped # Restart the container unless it is stopped

volumes:
  palmr_data:
```

> **Note:** If you are having problems uploading files, try changing `PALMR_UID` and `PALMR_GID` to the UID and GID of the user running the container. You can find them with the commands `id -u` and `id -g`; on Linux systems the default user and group are both `1000`. To test, add the environment variables below to your `docker-compose.yaml` file and restart the container.

```yaml
    environment:
      - PALMR_UID=1000 # UID for the container processes (default is 1001)
      - PALMR_GID=1000 # GID for the container processes (default is 1001)
```

> **Note:** For more information about UID and GID, see our [UID/GID Configuration](/docs/3.1-beta/uid-gid-configuration) guide.

### Deployment

```bash
docker-compose up -d
```

---

## Option 2: Bind Mounts

Bind mounts store data in a local directory, providing direct file system access.

### Configuration

To use bind mounts, **replace the content** of your `docker-compose.yaml` with the following configuration (you can also reference `docker-compose-bind-mount-example.yaml` as a template):

```yaml
services:
  palmr:
    image: kyantech/palmr:latest
    container_name: palmr
    environment:
      - ENABLE_S3=false
      - ENCRYPTION_KEY=change-this-key-in-production-min-32-chars # CHANGE THIS KEY FOR SECURITY
      - PALMR_UID=1000 # UID for the container processes (default is 1001)
      - PALMR_GID=1000 # GID for the container processes (default is 1001)
      # - DISABLE_FILESYSTEM_ENCRYPTION=false # Set to true to disable file encryption (ENCRYPTION_KEY becomes optional)
      # - SECURE_SITE=false # Set to true if you are using a reverse proxy
      # - DEFAULT_LANGUAGE=en-US # Default language for the application (optional, defaults to en-US)
    ports:
      - "5487:5487" # Web port
      - "3333:3333" # API port (OPTIONAL EXPOSED - ONLY IF YOU WANT TO ACCESS THE API DIRECTLY)
    volumes:
      # Bind mount for persistent data (uploads, database, temp files)
      - ./data:/app/server # Local directory for the application data
    restart: unless-stopped # Restart the container unless it is stopped
```

### Deployment

```bash
docker-compose up -d
```

> **Permission Configuration**: If you encounter permission issues with bind mounts (common on NAS systems), see our [UID/GID Configuration](/docs/3.1-beta/uid-gid-configuration) guide for automatic permission handling.

---

## Environment Variables

Configure Palmr. behavior through environment variables:

| Variable                        | Default | Description                                                                                                        |
| ------------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------ |
| `ENABLE_S3`                     | `false` | Enable S3-compatible storage                                                                                         |
| `ENCRYPTION_KEY`                | -       | **Required** (unless encryption disabled): Minimum 32 characters for file encryption                                 |
| `DISABLE_FILESYSTEM_ENCRYPTION` | `false` | Disable file encryption for direct filesystem access                                                                 |
| `SECURE_SITE`                   | `false` | Enable secure cookies for HTTPS/reverse proxy setups                                                                 |
| `DEFAULT_LANGUAGE`              | `en-US` | Set the default application language (see supported languages in the docs [here](/docs/3.1-beta/available-languages)) |
| `PALMR_UID`                     | `1001`  | Set the UID for the container processes (optional)                                                                  |
| `PALMR_GID`                     | `1001`  | Set the GID for the container processes (optional)                                                                  |

> **⚠️ Security Warning**: Always change the `ENCRYPTION_KEY` in production when encryption is enabled. This key encrypts your files - losing it makes files permanently inaccessible.

> **🔓 File Encryption Control**: The `DISABLE_FILESYSTEM_ENCRYPTION` variable allows you to store files without encryption for direct filesystem access. When set to `true`, the `ENCRYPTION_KEY` becomes optional. **Important**: Once set, this configuration is permanent for your deployment. Switching between encrypted and unencrypted modes will break file access for existing uploads. Choose your strategy before uploading files. For more details on performance implications of encryption, see [Performance Considerations with Encryption](/docs/3.1-beta/architecture#performance-considerations-with-encryption).

> **🔗 Reverse Proxy**: If deploying behind a reverse proxy (Traefik, Nginx, etc.), set `SECURE_SITE=true` and review our [Reverse Proxy Configuration](/docs/3.1-beta/reverse-proxy-configuration) guide for proper setup.

### Generate Secure Encryption Keys

Need a strong key for `ENCRYPTION_KEY`? Use our built-in generator to create cryptographically secure keys:

<KeyGenerator />
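
If you'd rather generate a key yourself, any source of 32 or more random bytes works; for example, with Node.js (an alternative to the generator above, not a Palmr command):

```typescript
// Prints a 64-character hex string (32 random bytes), long enough for ENCRYPTION_KEY.
import { randomBytes } from "node:crypto";

console.log(randomBytes(32).toString("hex"));
```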

> **💡 Pro Tip**: If you're using `DISABLE_FILESYSTEM_ENCRYPTION=true`, you can skip the `ENCRYPTION_KEY` entirely for a simpler setup. However, remember that files will be stored unencrypted on your filesystem.

---

## Accessing Palmr.

Once deployed, access Palmr. through your web browser:

- **Local**: `http://localhost:5487`
- **Server**: `http://YOUR_SERVER_IP:5487`

### API Access (Optional)

If you exposed port 3333 in your configuration, you can also access:

- **API Documentation**: `http://localhost:3333/docs` (local) or `http://YOUR_SERVER_IP:3333/docs` (server)
- **API Endpoints**: Available at `http://localhost:3333` (local) or `http://YOUR_SERVER_IP:3333` (server)

> **📚 Learn More**: For complete API documentation, authentication, and integration examples, see our [API Reference](/docs/3.1-beta/api) guide.

> **💡 Production Tip**: For production deployments, configure HTTPS with a valid SSL certificate for enhanced security.

---

## Docker CLI Alternative

Prefer using Docker directly? Both storage options are supported:

**Named Volume:**

```bash
# Optional settings (add as extra -e flags inside the command):
#   PALMR_UID=1000                      # UID for the container processes (default is 1001)
#   PALMR_GID=1000                      # GID for the container processes (default is 1001)
#   DISABLE_FILESYSTEM_ENCRYPTION=true  # disable file encryption (ENCRYPTION_KEY becomes optional)
#   SECURE_SITE=false                   # set to true if you are using a reverse proxy
#   DEFAULT_LANGUAGE=en-US              # default application language (defaults to en-US)
docker run -d \
  --name palmr \
  -e ENABLE_S3=false \
  -e ENCRYPTION_KEY=your-secure-key-min-32-chars \
  -p 5487:5487 \
  -p 3333:3333 \
  -v palmr_data:/app/server \
  --restart unless-stopped \
  kyantech/palmr:latest
```
> **Permission Configuration**: If you encounter permission issues with bind mounts (common on NAS systems), see our [UID/GID Configuration](/docs/3.1-beta/uid-gid-configuration) guide for automatic permission handling.
**Bind Mount:**

```bash
# PALMR_UID / PALMR_GID override the UID/GID of the container processes (default is 1001).
# More optional flags (add as extra -e lines inside the command below):
#   -e DISABLE_FILESYSTEM_ENCRYPTION=true  # Disable file encryption (ENCRYPTION_KEY becomes optional)
#   -e SECURE_SITE=false                   # Set to true if you are using a reverse proxy
#   -e DEFAULT_LANGUAGE=en-US              # Default language for the application (defaults to en-US)
docker run -d \
  --name palmr \
  -e ENABLE_S3=false \
  -e ENCRYPTION_KEY=your-secure-key-min-32-chars \
  -e PALMR_UID=1000 \
  -e PALMR_GID=1000 \
  -p 5487:5487 \
  -p 3333:3333 \
  -v $(pwd)/data:/app/server \
  --restart unless-stopped \
  kyantech/palmr:latest
```

---
## Maintenance

### Updates

Keep Palmr. current with the latest features and security fixes:

```bash
docker-compose pull
docker-compose up -d
```
### Backup & Restore

The backup method depends on which storage option you're using:

**Named Volume Backup:**

```bash
docker run --rm \
  -v palmr_data:/data \
  -v $(pwd):/backup \
  alpine tar czf /backup/palmr-backup.tar.gz -C /data .
```

**Named Volume Restore:**

```bash
docker run --rm \
  -v palmr_data:/data \
  -v $(pwd):/backup \
  alpine tar xzf /backup/palmr-backup.tar.gz -C /data
```

**Bind Mount Backup:**

```bash
tar czf palmr-backup.tar.gz ./data
```

**Bind Mount Restore:**

```bash
tar xzf palmr-backup.tar.gz
```
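Whichever method you use, it's worth checking an archive before restoring over live data (a quick sanity check):

```bash
# List the first entries of the archive to confirm it contains what you expect
tar tzf palmr-backup.tar.gz | head
```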
---

## Next Steps

Your Palmr. instance is now ready! Explore additional configuration options:

### Advanced Configuration

- **[UID/GID Configuration](/docs/3.1-beta/uid-gid-configuration)** - Configure user permissions for NAS systems and custom environments
- **[S3 Storage](/docs/3.1-beta/s3-configuration)** - Scale with Amazon S3 or compatible storage providers
- **[Manual Installation](/docs/3.1-beta/manual-installation)** - Manual installation and custom configurations

### Integration & Development

- **[API Reference](/docs/3.1-beta/api)** - Integrate Palmr. with your applications
- **[Architecture Guide](/docs/3.1-beta/architecture)** - Understanding Palmr. components and design

---

Need help? Visit our [GitHub Issues](https://github.com/kyantech/Palmr/issues) or community discussions.
@@ -30,8 +30,6 @@ services:

```bash
docker run -d \
  --name palmr \
  -e ENABLE_S3=false \
  -e ENCRYPTION_KEY=change-this-key-in-production-min-32-chars \
  -p 5487:5487 \
  -p 3333:3333 \
  -v palmr_data:/app/server \
```
@@ -107,6 +105,12 @@ The Palmr. API provides comprehensive access to all platform features:

 - **File management** - Rename, delete, and organize files
 - **Metadata access** - Retrieve file information and properties

+### Folder operations
+
+- **Create folders** - Build folder structures for organization
+- **Folder management** - Rename, move, delete folders
+- **Folder sharing** - Share folders with same controls as files
+
 ### Share management

 - **Create shares** - Generate public links for file sharing
@@ -45,13 +45,13 @@ Palmr. uses **filesystem storage** as the default storage solution, keeping thin

 #### Performance Considerations with Encryption

-By default, filesystem storage uses encryption (AES-256-CBC) to protect files at rest, which adds CPU overhead during uploads (encryption) and downloads (decryption). This can make operations slower and consume more resources, particularly for large files or in resource-constrained environments like containers or low-end VMs.
+By default, filesystem storage operates without encryption for optimal performance, providing fast uploads and downloads with minimal CPU overhead. This approach is ideal for most use cases where performance is prioritized.

-If performance is a priority and you don't need encryption (e.g., for non-sensitive data or testing), you can disable it by setting the environment variable `DISABLE_FILESYSTEM_ENCRYPTION=true` in your `.env` file or Docker configuration. Note that disabling encryption stores files in plaintext on disk, reducing security.
+If you need to protect sensitive files at rest, you can enable encryption by setting `DISABLE_FILESYSTEM_ENCRYPTION=false` and providing an `ENCRYPTION_KEY` in your configuration. When enabled, Palmr uses AES-256-CBC encryption, which adds CPU overhead during uploads (encryption) and downloads (decryption), particularly for large files or in resource-constrained environments like containers or low-end VMs.

 For optimal performance with encryption enabled, ensure your hardware supports AES-NI acceleration (check with `cat /proc/cpuinfo | grep aes` on Linux).

-As an alternative, consider using S3-compatible object storage (e.g., AWS S3 or MinIO), which can offload file storage from the local filesystem and potentially reduce local CPU overhead for encryption/decryption. See [S3 Providers](/docs/3.1-beta/s3-providers) for setup instructions.
+As an alternative, consider using S3-compatible object storage (e.g., AWS S3 or MinIO), which can offload file storage from the local filesystem and potentially reduce local CPU overhead for encryption/decryption. See [S3 Providers](/docs/3.2-beta/s3-providers) for setup instructions.

 ### Fastify + Zod + TypeScript

@@ -127,7 +127,7 @@ Palmr. is designed to be flexible in how you handle file storage:

 **Optional S3-compatible storage:**

-- Enable S3 storage by setting `ENABLE_S3=true`, look at [S3 Providers](/docs/3.1-beta/s3-providers) for more information.
+- Enable S3 storage by setting `ENABLE_S3=true`, look at [S3 Providers](/docs/3.2-beta/s3-providers) for more information.
 - Compatible with AWS S3, MinIO, and other S3-compatible services
 - Ideal for cloud deployments and distributed setups
 - Provides additional scalability and redundancy options
apps/docs/content/docs/3.2-beta/cleanup-orphan-files.mdx (new file, 374 lines)

@@ -0,0 +1,374 @@
---
title: Cleanup Orphan Files
icon: Trash2
---

This guide provides detailed instructions on how to identify and remove orphan file records from your Palmr database. Orphan files are database entries that reference files that no longer exist in the storage system, typically resulting from failed uploads or interrupted transfers.

## When and why to use this tool

The orphan file cleanup script is designed to maintain database integrity by removing stale file records. Consider using this tool if:

- Users are experiencing "File not found" errors when attempting to download files that appear in the UI
- You've identified failed uploads that left incomplete database records
- You're performing routine database maintenance
- You've migrated storage systems and need to verify file consistency
- You need to free up quota space occupied by phantom file records

> **Note:** This script only removes **database records** for files that don't exist in storage. It does not delete physical files. Files that exist in storage will remain untouched.

## How the cleanup works

Palmr provides a maintenance script that scans all file records in the database and verifies their existence in the storage system (either filesystem or S3). The script operates in two modes:

- **Dry-run mode (default):** Identifies orphan files and displays what would be deleted without making any changes
- **Confirmation mode:** Actually removes the orphan database records after explicit confirmation

The script maintains safety by:

- Checking file existence before marking a record as orphaned
- Providing detailed statistics and file listings
- Requiring an explicit `--confirm` flag to delete records
- Working with both filesystem and S3 storage providers
- Preserving all files that exist in storage
## Understanding orphan files

### What are orphan files?

Orphan files occur when:

1. **Failed chunked uploads:** A large file upload starts, creates a database record, but the upload fails before completion
2. **Interrupted transfers:** Network issues or server restarts interrupt file transfers mid-process
3. **Manual deletions:** Files are manually deleted from storage without removing the database record
4. **Storage migrations:** Files are moved or lost during storage system changes

### Why they cause problems

When orphan records exist in the database:

- Users see files in the UI that cannot be downloaded
- Download attempts result in "ENOENT: no such file or directory" errors
- Storage quota calculations become inaccurate
- The system returns 500 errors instead of proper 404 responses (in older versions)

### Renamed files with suffixes

Files with duplicate names are automatically renamed with suffixes (e.g., `file (1).png`, `file (2).png`). Sometimes the upload fails after the database record is created but before the physical file is saved, creating an orphan record with a suffix.

**Example:**

```
Database record: photo (1).png → objectName: user123/1758805195682-Rjn9at692HdR.png
Physical file: Does not exist ❌
```
## Step-by-step instructions

### 1. Access the server environment

**For Docker installations:**

```bash
docker exec -it <container_name> /bin/sh
cd /app/palmr-app
```

**For bare-metal installations:**

```bash
cd /path/to/palmr/apps/server
```
### 2. Run the cleanup script in dry-run mode

First, run the script without the `--confirm` flag to see what would be deleted:

```bash
pnpm cleanup:orphan-files
```

This will:

- Scan all file records in the database
- Check if each file exists in storage
- Display a summary of orphan files
- Show what would be deleted (without actually deleting)
### 3. Review the output

The script will provide detailed information about orphan files:

```text
Starting orphan file cleanup...
Storage mode: Filesystem
Found 7 files in database
❌ Orphan: photo(1).png (cmddjchw80000gmiimqnxga2g/1758805195682-Rjn9at692HdR.png)
❌ Orphan: document.pdf (cmddjchw80000gmiimqnxga2g/1758803757558-JQxlvF816UVo.pdf)

📊 Summary:
Total files in DB: 7
✅ Files with storage: 5
❌ Orphan files: 2

🗑️ Orphan files to be deleted:
- photo(1).png (0.76 MB) - cmddjchw80000gmiimqnxga2g/1758805195682-Rjn9at692HdR.png
- document.pdf (2.45 MB) - cmddjchw80000gmiimqnxga2g/1758803757558-JQxlvF816UVo.pdf

⚠️ Dry run mode. To actually delete orphan records, run with --confirm flag:
pnpm cleanup:orphan-files:confirm
```
### 4. Confirm and execute the cleanup

If you're satisfied with the results and want to proceed with the deletion:

```bash
pnpm cleanup:orphan-files:confirm
```

This will remove the orphan database records and display a confirmation:

```text
🗑️ Deleting orphan file records...
✓ Deleted: photo(1).png
✓ Deleted: document.pdf

✅ Cleanup complete!
Deleted 2 orphan file records
```
## Example session

Below is a complete example of running the cleanup script:

```bash
$ pnpm cleanup:orphan-files

> palmr-api@3.2.3-beta cleanup:orphan-files
> tsx src/scripts/cleanup-orphan-files.ts

Starting orphan file cleanup...
Storage mode: Filesystem
Found 15 files in database
❌ Orphan: video.mp4 (user123/1758803869037-1WhtnrQioeFQ.mp4)
❌ Orphan: image(1).png (user123/1758805195682-Rjn9at692HdR.png)
❌ Orphan: image(2).png (user123/1758803757558-JQxlvF816UVo.png)

📊 Summary:
Total files in DB: 15
✅ Files with storage: 12
❌ Orphan files: 3

🗑️ Orphan files to be deleted:
- video.mp4 (97.09 MB) - user123/1758803869037-1WhtnrQioeFQ.mp4
- image(1).png (0.01 MB) - user123/1758805195682-Rjn9at692HdR.png
- image(2).png (0.76 MB) - user123/1758803757558-JQxlvF816UVo.png

⚠️ Dry run mode. To actually delete orphan records, run with --confirm flag:
pnpm cleanup:orphan-files:confirm

$ pnpm cleanup:orphan-files:confirm

> palmr-api@3.2.3-beta cleanup:orphan-files:confirm
> tsx src/scripts/cleanup-orphan-files.ts --confirm

Starting orphan file cleanup...
Storage mode: Filesystem
Found 15 files in database
❌ Orphan: video.mp4 (user123/1758803869037-1WhtnrQioeFQ.mp4)
❌ Orphan: image(1).png (user123/1758805195682-Rjn9at692HdR.png)
❌ Orphan: image(2).png (user123/1758803757558-JQxlvF816UVo.png)

📊 Summary:
Total files in DB: 15
✅ Files with storage: 12
❌ Orphan files: 3

🗑️ Orphan files to be deleted:
- video.mp4 (97.09 MB) - user123/1758803869037-1WhtnrQioeFQ.mp4
- image(1).png (0.01 MB) - user123/1758805195682-Rjn9at692HdR.png
- image(2).png (0.76 MB) - user123/1758803757558-JQxlvF816UVo.png

🗑️ Deleting orphan file records...
✓ Deleted: video.mp4
✓ Deleted: image(1).png
✓ Deleted: image(2).png

✅ Cleanup complete!
Deleted 3 orphan file records

Script completed successfully
```
## Troubleshooting common issues

### No orphan files found

```text
📊 Summary:
Total files in DB: 10
✅ Files with storage: 10
❌ Orphan files: 0

✨ No orphan files found!
```

**This is good!** It means your database is in sync with your storage system.

### Script cannot connect to database

If you see database connection errors:

1. Verify the database file exists:

   ```bash
   ls -la prisma/palmr.db
   ```

2. Check database permissions:

   ```bash
   chmod 644 prisma/palmr.db
   ```

3. Ensure you're in the correct directory:

   ```bash
   pwd # Should show .../palmr/apps/server
   ```
### Storage provider errors

For **S3 storage:**

- Verify your S3 credentials are configured correctly
- Check that the bucket is accessible
- Ensure network connectivity to S3

For **Filesystem storage:**

- Verify the uploads directory exists and is readable
- Check file system permissions
- Ensure sufficient disk space

### Script fails to delete records

If deletion fails for specific files:

- Check for database locks (close other connections)
- Verify you have write permissions to the database
- Review the error message for specific details
## Understanding the output

### File statistics

The script provides several key metrics:

- **Total files in DB:** All file records in your database
- **Files with storage:** Records where the physical file exists
- **Orphan files:** Records where the physical file is missing

### File information

For each orphan file, you'll see:

- **Name:** Display name in the UI
- **Size:** File size as recorded in the database
- **Object name:** Internal storage path

Example: `photo(1).png (0.76 MB) - user123/1758805195682-Rjn9at692HdR.png`
## Prevention and best practices

### Prevent orphan files from occurring

1. **Monitor upload failures:** Check server logs for upload errors
2. **Stable network:** Ensure reliable network connectivity for large uploads
3. **Adequate resources:** Provide sufficient disk space and memory
4. **Regular maintenance:** Run this script periodically as part of maintenance

### When to run cleanup

Consider running the cleanup script:

- **Monthly:** As part of routine database maintenance (a scheduled dry run is sketched just below)
- **After incidents:** Following server crashes or storage issues
- **Before migrations:** Before moving to new storage systems
- **When users report errors:** If download failures are reported
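For the monthly option, a scheduled dry run can surface problems early. A minimal sketch, assuming a Docker deployment with a container named `palmr` and a host crontab (the container name and log path are illustrative):

```bash
# m h dom mon dow — produce a dry-run report at 03:00 on the 1st of each month
0 3 1 * * docker exec palmr sh -c 'cd /app/palmr-app && pnpm cleanup:orphan-files' >> /var/log/palmr-cleanup.log 2>&1
```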
### Safe cleanup practices

1. **Always run dry-run first:** Review what will be deleted before confirming
2. **Backup your database:** Create a backup before running with `--confirm` (see the sketch below)
3. **Check during low usage:** Run during off-peak hours to minimize disruption
4. **Document the cleanup:** Keep records of when and why cleanup was performed
5. **Verify after cleanup:** Check that file counts match expectations
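A minimal backup sketch for the default SQLite setup, using the `prisma/palmr.db` path referenced elsewhere in this guide (stop or quiesce the server first so the copy is consistent):

```bash
# Copy the SQLite database with a dated suffix before running --confirm
cp prisma/palmr.db "prisma/palmr.db.bak-$(date +%Y%m%d)"
```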
## Technical details

### How files are stored

When files are uploaded to Palmr:

1. The frontend generates a safe object name using random identifiers
2. The backend creates the final `objectName` as: `${userId}/${timestamp}-${randomId}.${extension}`
3. If a duplicate name exists, the **display name** gets a suffix, but `objectName` remains unique
4. The physical file is stored using `objectName`; the display name is stored separately in the database
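To make the shape of an `objectName` concrete, here is an illustrative sketch (all values are hypothetical; the real identifiers are generated inside the application, and `date +%s%3N` for millisecond timestamps assumes GNU coreutils):

```bash
userId="user123"                                             # owner's ID in the database
timestamp="$(date +%s%3N)"                                   # milliseconds since the epoch
randomId="$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12)" # random 12-character suffix
echo "${userId}/${timestamp}-${randomId}.png"
# e.g. user123/1758805195682-Rjn9at692HdR.png
```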
### Storage providers

The script works with both storage providers:

- **FilesystemStorageProvider:** Uses `fs.promises.access()` to check file existence
- **S3StorageProvider:** Uses `HeadObjectCommand` to verify objects in the S3 bucket

### Database schema

Files table structure:

```typescript
{
  name: string; // Display name (can have suffixes like "file (1).png")
  objectName: string; // Physical storage path (always unique)
  size: bigint; // File size in bytes
  extension: string; // File extension
  userId: string; // Owner of the file
  folderId: string | null; // Parent folder (null for root)
}
```
## Related improvements

### Download validation (v3.2.3-beta+)

Starting from version 3.2.3-beta, Palmr includes enhanced download validation:

- Files are checked for existence **before** attempting download
- Returns a proper 404 error if the file is missing (instead of a 500)
- Provides a helpful error message to users

This prevents errors when trying to download orphan files that haven't been cleaned up yet.

## Security considerations

- **Read-only by default:** Dry-run mode is safe and doesn't modify data
- **Explicit confirmation:** Requires the `--confirm` flag to delete records
- **No file deletion:** Only removes database records, never deletes physical files
- **Audit trail:** All actions are logged to the console
- **Permission-based:** Only users with server access can run the script

> **Important:** This script does not delete physical files from storage. It only removes database records for files that don't exist. This is intentional to prevent accidental data loss.
## FAQ

**Q: Will this delete my files?**
A: No. The script only removes database records for files that are already missing from storage. Physical files are never deleted.

**Q: Can I undo the cleanup?**
A: No. Once orphan records are deleted, they cannot be recovered. Always run dry-run mode first and back up your database.

**Q: Why do orphan files have suffixes like (1), (2)?**
A: When duplicate files are uploaded, Palmr renames them with suffixes. If the upload fails after creating the database record, an orphan with a suffix remains.

**Q: How often should I run this script?**
A: Monthly maintenance is usually sufficient. Run more frequently if you experience many upload failures.

**Q: Does this work with S3 storage?**
A: Yes! The script automatically detects your storage provider (filesystem or S3) and works with both.

**Q: What if I have thousands of orphan files?**
A: The script handles large numbers efficiently. Consider running during off-peak hours for very large cleanups.

**Q: Can this fix "File not found" errors?**
A: Yes, if the errors are caused by orphan database records. The script removes those records, preventing future errors.
apps/docs/content/docs/3.2-beta/download-memory-management.mdx (new file, 391 lines)

@@ -0,0 +1,391 @@
---
title: Memory Management
icon: Download
---

import { Callout } from "fumadocs-ui/components/callout";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";

Palmr implements an intelligent memory management system that prevents crashes during large file downloads (3GB+ by default), while still allowing downloads of any size through adaptive resource control and an automatic queue system.

## How It Works

### Automatic Resource Detection

The system automatically detects available container/system memory and configures appropriate limits based on the available infrastructure:

```typescript
const totalMemoryGB = require("os").totalmem() / 1024 ** 3;
```
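To check what a running container will detect, you can evaluate the same expression in place (a quick sketch; assumes a container named `palmr` with Node available on its PATH):

```bash
# Prints the total memory, in GB, that auto-scaling will base its limits on
docker exec palmr node -e "console.log((require('os').totalmem() / 1024 ** 3).toFixed(1) + 'GB')"
```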
### System Configuration

The system supports two configuration approaches that you can choose based on your needs:

<Tabs items={["Manual Configuration", "Auto-scaling (Default)"]}>
  <Tab value="Manual Configuration">
    Manually configure all parameters for total control over the system:

```bash
# Custom configuration (overrides auto-scaling)
DOWNLOAD_MAX_CONCURRENT=8         # Maximum simultaneous downloads
DOWNLOAD_MEMORY_THRESHOLD_MB=1536 # Memory threshold in MB
DOWNLOAD_QUEUE_SIZE=40            # Maximum queue size
DOWNLOAD_AUTO_SCALE=false         # Disable auto-scaling
```

    <Callout>
      Manual configuration offers total control and predictability for specific environments where you know exactly the available resources.
    </Callout>

  </Tab>
  <Tab value="Auto-scaling (Default)">
    Automatic configuration based on detected system memory:

| Available Memory | Concurrent Downloads | Memory Threshold | Queue Size | Recommended Use     |
| ---------------- | -------------------- | ---------------- | ---------- | ------------------- |
| ≤ 2GB            | 1                    | 256MB            | 5          | Development         |
| 2GB - 4GB        | 2                    | 512MB            | 10         | Small Environment   |
| 4GB - 8GB        | 3                    | 1GB              | 15         | Standard Production |
| 8GB - 16GB       | 5                    | 2GB              | 25         | High Performance    |
| > 16GB           | 10                   | 4GB              | 50         | Enterprise          |

    <Callout>
      Auto-scaling automatically adapts to different environments without manual configuration, perfect for flexible deployment.
    </Callout>

  </Tab>
</Tabs>

<Callout type="info">If environment variables are configured, they take **priority** over auto-scaling.</Callout>
## Download Queue System

### How It Works

The memory management system only activates for files larger than the configured minimum size (3GB by default). Smaller files bypass the queue system entirely and download immediately without memory management.

When a user requests a download for a large file but all slots are occupied, the system automatically queues the download instead of returning a 429 error. The queue processes downloads in FIFO order (first in, first out).

### Practical Example

Consider a system with 8GB RAM (5 concurrent downloads, queue of 25, 3GB minimum) where users want to download files of various sizes:

```bash
# Small files (< 3GB): Bypass queue entirely
[DOWNLOAD MANAGER] File document.pdf (0.05GB) below threshold (3.0GB), bypassing queue

# Large files 1-5: Start immediately
[DOWNLOAD MANAGER] Immediate start: 1734567890-abc123def
[DOWNLOAD MANAGER] Starting video1.mp4 (5.2GB)

# Large files 6-10: Automatically queued
[DOWNLOAD MANAGER] Queued: 1734567891-def456ghi (Position: 1/25)
[DOWNLOAD MANAGER] Queued file: video2.mp4 (8.1GB)

# When download 1 finishes: download 6 starts automatically
[DOWNLOAD MANAGER] Processing queue: 1734567891-def456ghi (4 remaining)
[DOWNLOAD MANAGER] Starting queued file: video2.mp4 (8.1GB)
```
### System Benefits

**User Experience**

- Users don't receive errors; they simply wait in the queue
- Downloads start automatically when slots become available
- Transparent operation without client changes
- Fair processing order with a FIFO queue

**Technical Features**

- Limited buffers (64KB per stream) for controlled memory usage
- Automatic backpressure control with pipeline streams
- Adaptive memory throttling based on usage patterns
- Forced garbage collection after large downloads
- Smart timeout handling (30 minutes for queued downloads)
- Automatic cleanup of orphaned downloads every 30 seconds
## Container Compatibility

The system works with Docker, Kubernetes, and any containerized environment:

<Tabs items={["Docker", "Kubernetes", "Docker Compose"]}>
  <Tab value="Docker">

```bash
# Example: Container with 8GB
docker run -m 8g palmr/server
# Result: 5 concurrent downloads, queue of 25, threshold 2GB
```

  </Tab>
  <Tab value="Kubernetes">

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: palmr-server
          resources:
            limits:
              memory: "4Gi" # Detects 4GB
              cpu: "2"
            requests:
              memory: "2Gi"
              cpu: "1"
# Result: 3 concurrent downloads, queue of 15, threshold 1GB
```

  </Tab>
  <Tab value="Docker Compose">

```yaml
services:
  palmr-server:
    image: palmr/server
    deploy:
      resources:
        limits:
          memory: 16G # Detects 16GB
# Result: 10 concurrent downloads, queue of 50, threshold 4GB
```

  </Tab>
</Tabs>
## Configuration

### Environment Variables

Configure the download memory management system using these environment variables:

| Variable                       | Default    | Description                                           |
| ------------------------------ | ---------- | ----------------------------------------------------- |
| `DOWNLOAD_MAX_CONCURRENT`      | auto-scale | Maximum number of simultaneous downloads              |
| `DOWNLOAD_MEMORY_THRESHOLD_MB` | auto-scale | Memory limit in MB before throttling                  |
| `DOWNLOAD_QUEUE_SIZE`          | auto-scale | Maximum download queue size                           |
| `DOWNLOAD_AUTO_SCALE`          | `true`     | Enable/disable auto-scaling based on system memory    |
| `DOWNLOAD_MIN_FILE_SIZE_GB`    | `3.0`      | Minimum file size in GB to activate memory management |
### Configuration Examples by Scenario

<Tabs items={["Home Server", "Enterprise", "High Performance", "Conservative"]}>
  <Tab value="Home Server">
    Configuration optimized for personal use or small groups (4GB RAM):

```bash
DOWNLOAD_MAX_CONCURRENT=2
DOWNLOAD_MEMORY_THRESHOLD_MB=1024
DOWNLOAD_QUEUE_SIZE=8
DOWNLOAD_MIN_FILE_SIZE_GB=2.0
DOWNLOAD_AUTO_SCALE=false
```

  </Tab>
  <Tab value="Enterprise">
    Configuration for corporate environments with multiple users (16GB RAM):

```bash
DOWNLOAD_MAX_CONCURRENT=12
DOWNLOAD_MEMORY_THRESHOLD_MB=4096
DOWNLOAD_QUEUE_SIZE=60
DOWNLOAD_MIN_FILE_SIZE_GB=5.0
DOWNLOAD_AUTO_SCALE=false
```

  </Tab>
  <Tab value="High Performance">
    Configuration for maximum performance and throughput (32GB RAM):

```bash
DOWNLOAD_MAX_CONCURRENT=20
DOWNLOAD_MEMORY_THRESHOLD_MB=8192
DOWNLOAD_QUEUE_SIZE=100
DOWNLOAD_MIN_FILE_SIZE_GB=10.0
DOWNLOAD_AUTO_SCALE=false
```

  </Tab>
  <Tab value="Conservative">
    For environments with limited or shared resources:

```bash
DOWNLOAD_MAX_CONCURRENT=3
DOWNLOAD_MEMORY_THRESHOLD_MB=1024
DOWNLOAD_QUEUE_SIZE=15
DOWNLOAD_MIN_FILE_SIZE_GB=1.0
DOWNLOAD_AUTO_SCALE=false
```

  </Tab>
</Tabs>
### Additional Configuration

For optimal performance with large downloads, consider these additional settings:

```bash
# Force garbage collection (recommended for large downloads)
NODE_OPTIONS="--expose-gc"

# Adjust timeouts for very large downloads
KEEP_ALIVE_TIMEOUT=300000
REQUEST_TIMEOUT=0
```
## Monitoring and Logs

### System Logs

The system provides detailed logs to track operation:

```bash
[DOWNLOAD MANAGER] System Memory: 8.0GB, Max Concurrent: 5, Memory Threshold: 2048MB, Queue Size: 25
[DOWNLOAD] Requesting slot for 1734567890-abc123def: video.mp4 (15.2GB)
[DOWNLOAD MANAGER] Queued: 1734567890-abc123def (Position: 3/25)
[DOWNLOAD MANAGER] Processing queue: 1734567890-abc123def (2 remaining)
[DOWNLOAD] Starting 1734567890-abc123def: video.mp4 (15.2GB)
[MEMORY THROTTLE] video.mp4 - Pausing stream due to high memory usage: 1843MB
[DOWNLOAD] Applying throttling: 100ms delay for 1734567890-abc123def
```

### Configuration Validation

The system automatically validates configurations at startup and provides warnings or errors:

**Warnings**

- `DOWNLOAD_MAX_CONCURRENT > 50`: May cause performance issues
- `DOWNLOAD_MEMORY_THRESHOLD_MB < 128MB`: Downloads may be throttled frequently
- `DOWNLOAD_MEMORY_THRESHOLD_MB > 16GB`: System may run out of memory
- `DOWNLOAD_QUEUE_SIZE > 1000`: May consume significant memory
- `DOWNLOAD_QUEUE_SIZE < DOWNLOAD_MAX_CONCURRENT`: Queue smaller than concurrent downloads

**Errors**

- `DOWNLOAD_MAX_CONCURRENT < 1`: Invalid value
- `DOWNLOAD_QUEUE_SIZE < 1`: Invalid value
## Queue Management APIs

The system provides REST APIs to monitor and manage the download queue:

### Get Queue Status

```http
GET /api/filesystem/download-queue/status
```

**Response:**

```json
{
  "data": {
    "queueLength": 3,
    "maxQueueSize": 25,
    "activeDownloads": 5,
    "maxConcurrent": 5,
    "queuedDownloads": [
      {
        "downloadId": "1734567890-abc123def",
        "position": 1,
        "waitTime": 45000,
        "fileName": "video.mp4",
        "fileSize": 16106127360
      }
    ]
  },
  "status": "success"
}
```
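From the command line, the endpoints above can be exercised like this (a sketch; assumes the API is exposed on port 3333, and that whatever authentication your deployment uses is handled separately):

```bash
# Inspect the current queue
curl -s http://localhost:3333/api/filesystem/download-queue/status

# Cancel a specific queued download by its ID
curl -s -X DELETE http://localhost:3333/api/filesystem/download-queue/1734567890-abc123def
```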
### Cancel Download

```http
DELETE /api/filesystem/download-queue/{downloadId}
```

**Response:**

```json
{
  "downloadId": "1734567890-abc123def",
  "message": "Download cancelled successfully"
}
```

### Clear Queue (Admin)

```http
DELETE /api/filesystem/download-queue
```

**Response:**

```json
{
  "clearedCount": 8,
  "message": "Download queue cleared successfully"
}
```
## Troubleshooting

### Common Issues

**Downloads failing with "Download queue is full"**

_Cause:_ Too many simultaneous downloads with a full queue

_Solutions:_

- Wait for some downloads to finish
- Check for orphaned downloads in the queue
- Consider increasing container resources
- Use the API to clear the queue if necessary

**Downloads stay too long in the queue**

_Cause:_ Active downloads are slow or stuck

_Solutions:_

- Check logs for orphaned downloads
- Use the API to cancel specific downloads
- Check client network connections
- Monitor memory throttling

**Very slow downloads**

_Cause:_ Active throttling due to high memory usage

_Solutions:_

- Check for other processes consuming memory
- Consider increasing container resources
- Monitor throttling logs
- Check the number of simultaneous downloads
## Summary

This system enables unlimited downloads (including 50TB+ files) without compromising system stability through:

**Key Features**

- Auto-configuration based on available resources
- Automatic FIFO queue system for pending downloads
- Adaptive control of simultaneous downloads
- Intelligent throttling when needed

**System Benefits**

- Management APIs to monitor and control the queue
- Automatic cleanup of resources and orphaned downloads
- Full compatibility with Docker/Kubernetes
- Perfect user experience with no 429 errors

The system maintains high performance for small and medium files while preventing crashes with gigantic files, offering a seamless experience: users never see 429 errors; they simply wait in the queue until their download starts automatically.
@@ -49,6 +49,7 @@ The frontend is organized with:

 - **Custom hooks** to isolate logic and side effects
 - A **route protection system** using session cookies and middleware
 - A **file management interface** integrated with the backend
+- **Folder support** for organizing files hierarchically
 - A **reusable modal system** used for file actions, confirmations, and more
 - **Dynamic, locale-aware routing** using next-intl

@@ -68,7 +69,7 @@ Data is stored in **SQLite**, which handles user info, file metadata, session to

 Key features include:

 - **Authentication/authorization** with JWT + cookie sessions
-- **File management logic** including uploads, deletes, and renames
+- **File management logic** including uploads, deletes, renames, and folders
 - **Storage operations** to handle file organization, usage tracking, and cleanup
 - A **share system** that generates tokenized public file links
 - Schema-based request validation for all endpoints
@@ -106,9 +107,10 @@ Volumes are used to persist data locally, and containers are networked together

 ### File management

-Files are at the heart of Palmr. Users can upload files via the frontend, and they're stored directly in the filesystem. The backend handles metadata (name, size, type, ownership), and also handles deletion, renaming, and public sharing. Every file operation is tracked, and all actions can be scoped per user.
+Files are at the heart of Palmr. Users can upload files via the frontend, and they're stored directly in the filesystem. Users can also create folders to organize files. The backend handles metadata (name, size, type, ownership), and also handles deletion, renaming, and public sharing. Every file operation is tracked, and all actions can be scoped per user.

 - Upload/download with instant feedback
+- Create and organize files in folders
 - File previews, type validation, and size limits
 - Token-based sharing system
 - Disk usage tracking by user
@@ -5,7 +5,7 @@ icon: Cog

 Hey there! Looking to run **Palmr.** your way, with complete control over every piece of the stack? This manual installation guide is for you. No Docker, no pre-built containers: just the raw source code to tweak, customize, and deploy as you see fit.

-> **Prefer a quicker setup?** If this hands-on approach feels like overkill, check out our [**Quick Start (Docker)**](/docs/3.1-beta/quick-start) guide for a fast, containerized deployment. This manual path is tailored for developers who want to dive deep, modify the codebase, or integrate custom services.
+> **Prefer a quicker setup?** If this hands-on approach feels like overkill, check out our [**Quick Start (Docker)**](/docs/3.2-beta/quick-start) guide for a fast, containerized deployment. This manual path is tailored for developers who want to dive deep, modify the codebase, or integrate custom services.

 Here's what you'll do at a glance:
@@ -165,6 +165,27 @@ cp .env.example .env

 This creates a `.env` file with the necessary configurations for the frontend.

+##### Upload Configuration
+
+Palmr. supports configurable chunked uploading for large files. You can customize the chunk size by setting the following environment variable in your `.env` file:
+
+```bash
+NEXT_PUBLIC_UPLOAD_CHUNK_SIZE_MB=100
+```
+
+**How it works:**
+
+- If `NEXT_PUBLIC_UPLOAD_CHUNK_SIZE_MB` is set, Palmr. will use this value (in megabytes) as the chunk size for all file uploads that exceed this threshold.
+- If not set or left empty, Palmr. automatically calculates optimal chunk sizes based on file size (sketched just after this hunk):
+  - Files ≤ 100MB: uploaded without chunking
+  - Files > 100MB and ≤ 1GB: 75MB chunks
+  - Files > 1GB: 150MB chunks
+
+**When to configure:**
+
+- **Default (not set):** Recommended for most use cases. Palmr. will intelligently determine the best chunk size.
+- **Custom value:** Set this if you have specific network conditions or want to optimize for your infrastructure (e.g., slower connections may benefit from smaller chunks like 50MB, fast networks can handle larger chunks like 200MB, or the upload size per payload may be capped by a proxy like Cloudflare).

 #### Install dependencies

 Install all the frontend dependencies:
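The default chunking rule added above amounts to the following sketch (illustrative only; the real calculation lives in Palmr.'s web app):

```bash
#!/bin/sh
# Usage: ./chunk-size.sh <file size in MB>
file_mb="$1"
if [ "$file_mb" -le 100 ]; then
  echo "no chunking (single upload)"
elif [ "$file_mb" -le 1024 ]; then
  echo "75MB chunks"
else
  echo "150MB chunks"
fi
```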
@@ -201,6 +222,17 @@ You should see the full Palmr. application ready to go!

 This guide sets up Palmr. using the local file system for storage. Want to use an S3-compatible object storage instead? You can configure that in the `.env` file. Check the Palmr. documentation for details on setting up S3 storage: just update the environment variables, then build and run as shown here.

+### Custom Installation Paths and Symlinks
+
+If you're using a custom installation setup with symlinks (for example, `/opt/palmr_data/uploads -> /mnt/data/uploads`), you might encounter issues with disk space detection. Palmr. includes a `CUSTOM_PATH` environment variable to handle these scenarios:
+
+```bash
+# In your .env file (apps/server/.env)
+CUSTOM_PATH=/opt/palmr_data
+```
+
+This tells Palmr. to check your custom path first when determining available disk space, ensuring proper detection even when using symlinks or non-standard directory structures.

 ---

 ## Command cheat sheet
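To confirm that disk space detection will resolve to the intended filesystem, you can compare what the OS reports for the custom path and its symlink target (a quick check; the paths follow the symlink example above):

```bash
# Both should report the same underlying filesystem and free space
df -h /opt/palmr_data/uploads
df -h /mnt/data/uploads
```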
@@ -232,10 +264,10 @@ pnpm serve

 Palmr. is now up and running locally. Here are some suggested next steps:

-- **Manage Users**: Dive into the [Users Management](/docs/3.1-beta/manage-users) guide.
+- **Manage Users**: Dive into the [Users Management](/docs/3.2-beta/manage-users) guide.
 - **Switch to Object Storage**: Update `.env` variables to use an S3-compatible bucket (see Quick Notes above).
 - **Secure Your Instance**: Put Palmr. behind a reverse proxy like **Nginx** or **Caddy** and enable HTTPS.
-- **Learn the Internals**: Explore how everything connects in the [Architecture](/docs/3.1-beta/architecture) overview.
+- **Learn the Internals**: Explore how everything connects in the [Architecture](/docs/3.2-beta/architecture) overview.

 Jump into whichever area fits your needs: our docs are designed for exploration in any order.
@@ -14,7 +14,9 @@
   "available-languages",
   "uid-gid-configuration",
   "reverse-proxy-configuration",
+  "download-memory-management",
   "password-reset-without-smtp",
+  "cleanup-orphan-files",
   "oidc-authentication",
   "troubleshooting",
   "---Developers---",

@@ -29,5 +31,5 @@
   "gh-sponsor"
 ],
 "root": true,
-"title": "v3.1-beta"
+"title": "v3.2-beta"
 }
@@ -360,7 +360,7 @@ With Auth0 authentication configured, you might want to:

 - **Review security settings**: Ensure your authentication setup meets your security requirements
 - **Monitor usage**: Keep track of authentication patterns and user activity

-For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.1-beta/oidc-authentication).
+For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.2-beta/oidc-authentication).

 ## Useful resources

@@ -374,7 +374,7 @@ With Authentik authentication configured, you might want to:

 - **Review security settings**: Ensure your authentication setup meets your security requirements
 - **Monitor usage**: Keep track of authentication patterns and user activity

-For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.1-beta/oidc-authentication).
+For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.2-beta/oidc-authentication).

 ## Useful resources

@@ -332,7 +332,7 @@ With Discord authentication configured, you might want to:

 - **Review security settings**: Ensure your authentication setup meets your security requirements
 - **Monitor usage**: Keep track of authentication patterns and user activity

-For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.1-beta/oidc-authentication).
+For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.2-beta/oidc-authentication).

 ## Useful resources

@@ -314,7 +314,7 @@ With Frontegg authentication configured, you might want to:

 - **Review security settings**: Ensure your authentication setup meets your security requirements
 - **Monitor usage**: Keep track of authentication patterns and user activity

-For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.1-beta/oidc-authentication).
+For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.2-beta/oidc-authentication).

 ## Useful resources

@@ -295,7 +295,7 @@ With GitHub authentication configured, you might want to:

 - **Review security settings**: Ensure your authentication setup meets your security requirements
 - **Monitor usage**: Keep track of authentication patterns and user activity

-For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.1-beta/oidc-authentication).
+For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.2-beta/oidc-authentication).

 ## Useful resources

@@ -325,7 +325,7 @@ With Google authentication configured, you might want to:

 - **Review security settings**: Ensure your authentication setup meets your security requirements
 - **Monitor usage**: Keep track of authentication patterns and user activity

-For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.1-beta/oidc-authentication).
+For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.2-beta/oidc-authentication).

 ## Useful resources
@@ -38,14 +38,14 @@ Before configuring OIDC authentication, ensure you have:

 Palmr's OIDC implementation is compatible with any OpenID Connect compliant provider, including the following official providers:

-- **[Google](/docs/3.1-beta/oidc-authentication/google)**
-- **[Discord](/docs/3.1-beta/oidc-authentication/discord)**
-- **[Github](/docs/3.1-beta/oidc-authentication/github)**
-- **[Zitadel](/docs/3.1-beta/oidc-authentication/zitadel)**
-- **[Auth0](/docs/3.1-beta/oidc-authentication/auth0)**
-- **[Authentik](/docs/3.1-beta/oidc-authentication/authentik)**
-- **[Frontegg](/docs/3.1-beta/oidc-authentication/frontegg)**
-- **[Kinde Auth](/docs/3.1-beta/oidc-authentication/kinde-auth)**
+- **[Google](/docs/3.2-beta/oidc-authentication/google)**
+- **[Discord](/docs/3.2-beta/oidc-authentication/discord)**
+- **[Github](/docs/3.2-beta/oidc-authentication/github)**
+- **[Zitadel](/docs/3.2-beta/oidc-authentication/zitadel)**
+- **[Auth0](/docs/3.2-beta/oidc-authentication/auth0)**
+- **[Authentik](/docs/3.2-beta/oidc-authentication/authentik)**
+- **[Frontegg](/docs/3.2-beta/oidc-authentication/frontegg)**
+- **[Kinde Auth](/docs/3.2-beta/oidc-authentication/kinde-auth)**

 Although these are the official providers (internally tested with 100% success), you can connect any OIDC provider by providing your credentials and connection URL. We've developed a practical way to integrate virtually all OIDC providers available in the market. In this documentation, you can consult how to configure each of the official providers, as well as include other providers not listed as official. Just below, you will find instructions on how to access the OIDC provider configuration. For specific details about configuring each provider, select the desired option in the sidebar, in the "OIDC Authentication" section.
@@ -359,7 +359,7 @@ With Kinde Auth authentication configured, you might want to:

 - **Review security settings**: Ensure your authentication setup meets your security requirements
 - **Monitor usage**: Keep track of authentication patterns and user activity

-For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.1-beta/oidc-authentication).
+For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.2-beta/oidc-authentication).

 ## Useful resources

@@ -15,4 +15,4 @@
   "other"
 ],
 "title": "OIDC Authentication"
 }
 }
@@ -270,10 +270,10 @@ After configuring Pocket ID authentication:

 - **User management**: Review auto-registration settings
 - **Backup verification**: Test backup and restore procedures

-For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.1-beta/oidc-authentication).
+For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.2-beta/oidc-authentication).

 ## Useful resources

 - [Pocket ID Documentation](https://docs.pocket-id.org)
 - [OIDC Specification](https://openid.net/specs/openid-connect-core-1_0.html)
-- [Palmr OIDC Overview](/docs/3.1-beta/oidc-authentication)
+- [Palmr OIDC Overview](/docs/3.2-beta/oidc-authentication)
@@ -413,7 +413,7 @@ With Zitadel authentication configured, you might want to:

 - **Review security settings**: Ensure your authentication setup meets your security requirements
 - **Monitor usage**: Keep track of authentication patterns and user activity

-For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.1-beta/oidc-authentication).
+For more information about OIDC authentication in Palmr, see the [OIDC Authentication overview](/docs/3.2-beta/oidc-authentication).

 ## Useful resources
@@ -47,12 +47,12 @@ docker exec -it <container_name_or_id> /bin/sh

 Replace `<container_name_or_id>` with the name or ID of your Palmr container. This command opens an interactive shell session inside the container, allowing you to execute commands directly.

-### 3. Navigate to the server directory
+### 3. Navigate to the application directory

-Once inside the container, navigate to the server directory where the reset script is located:
+Once inside the container, navigate to the application directory where the reset script is located:

 ```bash
-cd /app/server
+cd /app/palmr-app
 ```

 This directory contains the necessary scripts and configurations for managing Palmr's backend operations.

@@ -135,11 +135,11 @@ If you encounter issues while running the script, refer to the following solutio

 - Confirm that the `prisma/palmr.db` file exists and has the correct permissions.
 - Verify that the container has access to the database volume.

-- **Error: "Script must be run from server directory"**
+- **Error: "Script must be run from application directory"**

-  This error appears if you are not in the correct directory. Navigate to the server directory with:
+  This error appears if you are not in the correct directory. Navigate to the application directory with:

  ```bash
- cd /app/server
+ cd /app/palmr-app
  ```

 - **Error: "User not found"**
apps/docs/content/docs/3.2-beta/quick-start.mdx (new file, 409 lines)

@@ -0,0 +1,409 @@
---
title: Quick Start (Docker)
icon: "Rocket"
---

import { Callout } from "fumadocs-ui/components/callout";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";

import { Card, CardGrid } from "@/components/ui/card";

Welcome to the fastest way to deploy <span className="font-bold">Palmr.</span> - your secure, self-hosted file sharing solution. This guide will have you up and running in minutes, whether you're new to self-hosting or an experienced developer.

Palmr. offers flexible deployment options to match your infrastructure needs. This guide focuses on Docker deployment with our recommended filesystem storage, perfect for most use cases.

## Prerequisites

Before you begin, make sure you have:

- **Docker** - Container runtime ([installation guide](https://docs.docker.com/get-docker/))
- **Docker Compose** - Multi-container orchestration ([installation guide](https://docs.docker.com/compose/install/))
- **2GB+ available disk space** for the application and your files
- **Port 5487** available for the web interface
- **Port 3333** available for API access (optional)

<Callout>
  **Platform Support**: Palmr. is developed on macOS and extensively tested on Linux servers. While we haven't formally
  tested other platforms, Docker's cross-platform nature should ensure compatibility. Report any issues on our [GitHub
  repository](https://github.com/kyantech/Palmr/issues).
</Callout>
## Storage Options

Palmr. supports two storage approaches for persistent data:

- **Named Volumes (Recommended)** - Docker-managed storage with optimal performance and no permission issues
- **Bind Mounts** - Direct host filesystem access, ideal for development and direct file management

## Deployment Options

Choose your storage method based on your needs:

<Tabs items={['Named Volumes (Recommended)', 'Bind Mounts']}>
  <Tab value="Named Volumes (Recommended)">
    Docker-managed storage that provides the best balance of performance, security, and ease of use:

    - **No Permission Issues**: Docker handles all permission management automatically
    - **Performance**: Optimized for container workloads with better I/O performance
    - **Production Ready**: Recommended for production deployments

### Configuration

Create a `docker-compose.yml` file:

```yaml
services:
  palmr:
    image: kyantech/palmr:latest
    container_name: palmr
    restart: unless-stopped
    ports:
      - "5487:5487" # Web interface
      # - "3333:3333" # API (optional)
    environment:
      # Optional: Uncomment and configure as needed (if you don't use a variable, you can remove it)
      # - ENABLE_S3=true # Set to true to enable S3-compatible storage
      # - DISABLE_FILESYSTEM_ENCRYPTION=true # Set to false to enable file encryption
      # - ENCRYPTION_KEY=your-secure-key-min-32-chars # Required only if encryption is enabled
      # - PALMR_UID=1000 # UID for the container processes (default is 1000)
      # - PALMR_GID=1000 # GID for the container processes (default is 1000)
      # - SECURE_SITE=false # Set to true if you are using a reverse proxy
      # - DEFAULT_LANGUAGE=en-US # Default language for the application (optional, defaults to en-US)
      # - PRESIGNED_URL_EXPIRATION=3600 # Duration in seconds for presigned URL expiration (optional, defaults to 3600 seconds / 1 hour)
      # - DOWNLOAD_MAX_CONCURRENT=5 # Maximum simultaneous downloads (auto-scales if not set)
      # - DOWNLOAD_MEMORY_THRESHOLD_MB=2048 # Memory threshold in MB before throttling (auto-scales if not set)
      # - DOWNLOAD_QUEUE_SIZE=25 # Maximum queue size for pending downloads (auto-scales if not set)
      # - DOWNLOAD_MIN_FILE_SIZE_GB=3.0 # Minimum file size in GB to activate memory management (default: 3.0)
      # - DOWNLOAD_AUTO_SCALE=true # Enable auto-scaling based on system memory (default: true)
      # - NODE_OPTIONS=--expose-gc # Enable garbage collection for large downloads (recommended for production)
      # - NEXT_PUBLIC_UPLOAD_CHUNK_SIZE_MB=100 # Chunk size in MB for large file uploads (OPTIONAL - auto-calculates if not set)
    volumes:
      - palmr_data:/app/server

volumes:
  palmr_data:
```

<Callout type="info">
  **Having upload or permission issues?** Add `PALMR_UID=1000` and `PALMR_GID=1000` to your environment variables. Check our [UID/GID Configuration](/docs/3.2-beta/uid-gid-configuration) guide for more details.
</Callout>

### Deploy

```bash
docker-compose up -d
```

  </Tab>
<Tab value="Bind Mounts">
|
||||
Direct mapping to host filesystem directories, providing direct file access:

- **Direct Access**: Files are directly accessible from your host system
- **Development Friendly**: Easy to inspect, modify, or back up files manually
- **Platform Dependent**: May require UID/GID configuration, especially on NAS systems

### Configuration

Create a `docker-compose.yml` file:

```yaml
services:
  palmr:
    image: kyantech/palmr:latest
    container_name: palmr
    restart: unless-stopped
    ports:
      - "5487:5487" # Web interface
      # - "3333:3333" # API (optional)
    environment:
      # Optional: Uncomment and configure as needed (remove any you don't use)
      # - ENABLE_S3=true # Set to true to enable S3-compatible storage
      # - DISABLE_FILESYSTEM_ENCRYPTION=false # Set to false to enable file encryption
      # - ENCRYPTION_KEY=your-secure-key-min-32-chars # Required only if encryption is enabled
      # - PALMR_UID=1000 # UID for the container processes (default is 1000)
      # - PALMR_GID=1000 # GID for the container processes (default is 1000)
      # - SECURE_SITE=false # Set to true if you are using a reverse proxy
      # - DEFAULT_LANGUAGE=en-US # Default language for the application (optional, defaults to en-US)
      # - PRESIGNED_URL_EXPIRATION=3600 # Duration in seconds for presigned URL expiration (optional, defaults to 3600 seconds / 1 hour)
      # - DOWNLOAD_MAX_CONCURRENT=5 # Maximum simultaneous downloads (auto-scales if not set)
      # - DOWNLOAD_MEMORY_THRESHOLD_MB=2048 # Memory threshold in MB before throttling (auto-scales if not set)
      # - DOWNLOAD_QUEUE_SIZE=25 # Maximum queue size for pending downloads (auto-scales if not set)
      # - DOWNLOAD_MIN_FILE_SIZE_GB=3.0 # Minimum file size in GB to activate memory management (default: 3.0)
      # - DOWNLOAD_AUTO_SCALE=true # Enable auto-scaling based on system memory (default: true)
      # - NODE_OPTIONS=--expose-gc # Enable garbage collection for large downloads (recommended for production)
    volumes:
      - ./data:/app/server
```

<Callout type="info">
**Having upload or permission issues?** Add `PALMR_UID=1000` and `PALMR_GID=1000` to your environment variables. Check our [UID/GID Configuration](/docs/3.2-beta/uid-gid-configuration) guide for more details.
</Callout>

### Deploy

```bash
docker-compose up -d
```

</Tab>
</Tabs>

## Configuration

Customize Palmr's behavior with these environment variables:

| Variable | Default | Description |
| --- | --- | --- |
| `ENABLE_S3` | `false` | Enable S3-compatible storage backends |
| `S3_ENDPOINT` | - | S3 server endpoint URL (required when using S3) |
| `S3_PORT` | - | S3 server port (optional when using S3) |
| `S3_USE_SSL` | - | Enable SSL for S3 connections (optional when using S3) |
| `S3_ACCESS_KEY` | - | S3 access key for authentication (required when using S3) |
| `S3_SECRET_KEY` | - | S3 secret key for authentication (required when using S3) |
| `S3_REGION` | - | S3 region configuration (optional when using S3) |
| `S3_BUCKET_NAME` | - | S3 bucket name for file storage (required when using S3) |
| `S3_FORCE_PATH_STYLE` | `false` | Force path-style S3 URLs (optional when using S3) |
| `S3_REJECT_UNAUTHORIZED` | `true` | Enable strict SSL certificate validation for S3 (set to `false` for self-signed certificates) |
| `ENCRYPTION_KEY` | - | **Required when encryption is enabled**: 32+ character key for file encryption |
| `DISABLE_FILESYSTEM_ENCRYPTION` | `true` | Disable file encryption for better performance (set to `false` to enable encryption) |
| `PRESIGNED_URL_EXPIRATION` | `3600` | Duration in seconds for presigned URL expiration (applies to both filesystem and S3 storage) |
| `CUSTOM_PATH` | - | Custom base path for disk space detection in manual installations with symlinks |
| `SECURE_SITE` | `false` | Enable secure cookies for HTTPS/reverse proxy deployments |
| `DEFAULT_LANGUAGE` | `en-US` | Default application language ([see available languages](/docs/3.2-beta/available-languages)) |
| `PALMR_UID` | `1000` | User ID for container processes (helps with file permissions) |
| `PALMR_GID` | `1000` | Group ID for container processes (helps with file permissions) |
| `NODE_OPTIONS` | - | Node.js options (recommended: `--expose-gc` for garbage collection in production) |
| `DOWNLOAD_MAX_CONCURRENT` | auto-scale | Maximum number of simultaneous downloads (see [Download Memory Management](/docs/3.2-beta/download-memory-management)) |
| `DOWNLOAD_MEMORY_THRESHOLD_MB` | auto-scale | Memory threshold in MB before throttling |
| `DOWNLOAD_QUEUE_SIZE` | auto-scale | Maximum queue size for pending downloads |
| `DOWNLOAD_MIN_FILE_SIZE_GB` | `3.0` | Minimum file size in GB to activate memory management |
| `NEXT_PUBLIC_UPLOAD_CHUNK_SIZE_MB` | auto-calculate | Chunk size in MB for large file uploads (see [Chunked Upload Configuration](/docs/3.2-beta/quick-start#chunked-upload-configuration)) |
| `DOWNLOAD_AUTO_SCALE` | `true` | Enable auto-scaling based on system memory |

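For reference, a minimal S3-backed setup combines the variables above along these lines. This is a sketch, not a complete file: the endpoint, bucket, and credentials are placeholders you must replace with your provider's values.

```yaml
environment:
  - ENABLE_S3=true
  - S3_ENDPOINT=s3.example.com # placeholder endpoint
  - S3_USE_SSL=true
  - S3_ACCESS_KEY=your-access-key # placeholder
  - S3_SECRET_KEY=your-secret-key # placeholder
  - S3_REGION=us-east-1
  - S3_BUCKET_NAME=your-bucket-name # placeholder
```
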
<Callout type="info">
**Performance First**: Palmr runs without encryption by default for optimal speed and lower resource usage, which is perfect for most use cases.
</Callout>

<Callout type="warn">
**Encryption Notice**: To enable encryption, set `DISABLE_FILESYSTEM_ENCRYPTION=false` and provide a 32+ character `ENCRYPTION_KEY`. **Important**: This choice is permanent: switching encryption modes after uploading files will break access to existing uploads.
</Callout>

<Callout>
**Using a Reverse Proxy?** Set `SECURE_SITE=true` and check our [Reverse Proxy Configuration](/docs/3.2-beta/reverse-proxy-configuration) guide for proper HTTPS setup.
</Callout>

### Generate Encryption Keys (Optional)

Need file encryption? Generate a secure key:

<KeyGenerator />

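If you would rather generate a key in the terminal, any tool that produces at least 32 characters of randomness works; one common option is OpenSSL:

```bash
# 32 random bytes, base64-encoded: 44 characters, comfortably above the 32-character minimum
openssl rand -base64 32
```
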
> **Pro Tip**: Only enable encryption if you're handling sensitive data. For most users, the default unencrypted mode provides better performance.

## Access Your Instance

Once deployed, open Palmr in your browser:

- **Web Interface**: `http://localhost:5487` (local) or `http://YOUR_SERVER_IP:5487` (remote)
- **API Documentation**: `http://localhost:3333/docs` (if port 3333 is exposed)

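You can also confirm the API is responding from the command line; the `/health` endpoint below is the same one referenced in the troubleshooting guide:

```bash
# Returns a success response when the API is up (requires port 3333 to be exposed)
curl http://localhost:3333/health
```
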
<Callout type="info">
**Learn More**: For complete API documentation, authentication, and integration examples, see our [API Reference](/docs/3.2-beta/api) guide.
</Callout>

<Callout type="warn">
**Production Ready?** Configure HTTPS with a valid SSL certificate for secure production deployments.
</Callout>

---

## Docker CLI Alternative

Prefer Docker commands over Compose? Here are the equivalent commands:

<Tabs items={["Named Volume", "Bind Mount"]}>
<Tab value="Named Volume">

```bash
# Optional environment variables - add them to the command as extra -e flags if needed:
#   -e ENABLE_S3=true                               # Set to true to enable S3-compatible storage (OPTIONAL - default is false)
#   -e DISABLE_FILESYSTEM_ENCRYPTION=false          # Set to false to enable file encryption (ENCRYPTION_KEY becomes required) | (OPTIONAL - default is true)
#   -e ENCRYPTION_KEY=your-secure-key-min-32-chars  # Required only if encryption is enabled
#   -e PALMR_UID=1000                               # UID for the container processes (default is 1000)
#   -e PALMR_GID=1000                               # GID for the container processes (default is 1000)
#   -e SECURE_SITE=false                            # Set to true if you are using a reverse proxy
#   -e DEFAULT_LANGUAGE=en-US                       # Default language for the application (optional, defaults to en-US)
docker run -d \
  --name palmr \
  -p 5487:5487 \
  -p 3333:3333 \
  -v palmr_data:/app/server \
  --restart unless-stopped \
  kyantech/palmr:latest
```

<Callout type="info">
**Permission Issues?** Add `-e PALMR_UID=1000 -e PALMR_GID=1000` to the command above. See our [UID/GID Configuration](/docs/3.2-beta/uid-gid-configuration) guide for details.
</Callout>

</Tab>

<Tab value="Bind Mount">

```bash
# Optional environment variables - add them to the command as extra -e flags if needed:
#   -e ENABLE_S3=true                               # Set to true to enable S3-compatible storage (OPTIONAL - default is false)
#   -e DISABLE_FILESYSTEM_ENCRYPTION=true           # Set to false to enable file encryption (ENCRYPTION_KEY becomes required) | (OPTIONAL - default is true)
#   -e ENCRYPTION_KEY=your-secure-key-min-32-chars  # Required only if encryption is enabled
#   -e PALMR_UID=1000                               # UID for the container processes (default is 1000)
#   -e PALMR_GID=1000                               # GID for the container processes (default is 1000)
#   -e SECURE_SITE=false                            # Set to true if you are using a reverse proxy
#   -e DEFAULT_LANGUAGE=en-US                       # Default language for the application (optional, defaults to en-US)
docker run -d \
  --name palmr \
  -p 5487:5487 \
  -p 3333:3333 \
  -v $(pwd)/data:/app/server \
  --restart unless-stopped \
  kyantech/palmr:latest
```

<Callout type="info">
**Permission Issues?** Add `-e PALMR_UID=1000 -e PALMR_GID=1000` to the command above. See our [UID/GID Configuration](/docs/3.2-beta/uid-gid-configuration) guide for details.
</Callout>

</Tab>
</Tabs>

---

## Common Configuration Options

### Presigned URL Expiration

Palmr. uses temporary URLs (presigned URLs) for secure file access. These URLs expire after a configurable time period to enhance security.

**Default:** 1 hour (3600 seconds)

You can customize this for all storage types (filesystem or S3) by adding:

```yaml
environment:
  - PRESIGNED_URL_EXPIRATION=7200 # 2 hours
```

**When to adjust:**

- **Shorter time (1800 = 30 min):** Higher security, but users may need to refresh download links
- **Longer time (7200-21600 = 2-6 hours):** Better for large file transfers, but URLs stay valid longer
- **Default (3600 = 1 hour):** Good balance for most use cases

### File Encryption

For filesystem storage, you can enable file encryption:

```yaml
environment:
  - DISABLE_FILESYSTEM_ENCRYPTION=false
  - ENCRYPTION_KEY=your-secure-32-character-key-here
```

**Note:** S3 storage handles encryption through your S3 provider's encryption features.

### Chunked Upload Configuration

Palmr supports configurable chunked uploading for large files. You can customize the chunk size by setting the following environment variable:

```yaml
environment:
  - NEXT_PUBLIC_UPLOAD_CHUNK_SIZE_MB=100 # Chunk size in MB
```

**How it works:**

- If `NEXT_PUBLIC_UPLOAD_CHUNK_SIZE_MB` is set, Palmr will use this value (in megabytes) as the chunk size for all file uploads that exceed this threshold.
- If not set or left empty, Palmr automatically calculates optimal chunk sizes based on file size (sketched below):
  - Files ≤ 100MB: uploaded without chunking
  - Files > 100MB and ≤ 1GB: 75MB chunks
  - Files > 1GB: 150MB chunks

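The automatic selection can be expressed roughly as follows. This is a minimal TypeScript sketch for illustration only; the function name is hypothetical and the thresholds are paraphrased from the rules above, not taken from Palmr's source:

```ts
const MB = 1024 * 1024;

// Hypothetical helper mirroring the documented rules.
function chunkSizeBytes(fileSizeBytes: number, overrideMb?: number): number {
  // NEXT_PUBLIC_UPLOAD_CHUNK_SIZE_MB takes precedence when set;
  // files smaller than one chunk are effectively sent in a single request.
  if (overrideMb && overrideMb > 0) return overrideMb * MB;
  if (fileSizeBytes <= 100 * MB) return fileSizeBytes; // no chunking
  if (fileSizeBytes <= 1024 * MB) return 75 * MB; // 100MB < size <= 1GB
  return 150 * MB; // size > 1GB
}

// Example: a 2GB file with no override uploads in 150MB chunks.
console.log(chunkSizeBytes(2 * 1024 * MB) / MB); // 150
```
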
**When to configure:**

- **Default (not set):** Recommended for most use cases. Palmr will intelligently determine the best chunk size.
- **Custom value:** Set this if you have specific network conditions or want to optimize for your infrastructure. Slower connections may benefit from smaller chunks (e.g., 50MB), fast networks can handle larger chunks (e.g., 200MB), and a proxy such as Cloudflare may cap the upload size per request, which a smaller chunk size works around.

---

## Maintenance

### Updates

Keep Palmr up to date with the latest features and security patches:

```bash
docker-compose pull
docker-compose up -d
```

### Backup Your Data

**Named Volumes:**

```bash
docker run --rm -v palmr_data:/data -v $(pwd):/backup alpine tar czf /backup/palmr-backup.tar.gz -C /data .
```

**Bind Mounts:**

```bash
tar czf palmr-backup.tar.gz ./data
```

### Restore From Backup

**Named Volumes:**

```bash
docker-compose down
docker run --rm -v palmr_data:/data -v $(pwd):/backup alpine tar xzf /backup/palmr-backup.tar.gz -C /data
docker-compose up -d
```

**Bind Mounts:**

```bash
docker-compose down
tar xzf palmr-backup.tar.gz
docker-compose up -d
```

---

## What's Next?

Your Palmr instance is ready! Here's what you can explore:

### Advanced Configuration

- **[UID/GID Configuration](/docs/3.2-beta/uid-gid-configuration)** - Configure user permissions for NAS systems and custom environments
- **[Download Memory Management](/docs/3.2-beta/download-memory-management)** - Configure large file download handling and the queue system
- **[S3 Storage](/docs/3.2-beta/s3-configuration)** - Scale with Amazon S3 or compatible storage providers
- **[Manual Installation](/docs/3.2-beta/manual-installation)** - Manual installation and custom configurations

### Integration & Development

- **[API Reference](/docs/3.2-beta/api)** - Integrate Palmr. with your applications

<Callout type="info">
**Need help?** Check our [Troubleshooting Guide](/docs/3.2-beta/troubleshooting) for common issues and solutions.
</Callout>

---

**Questions?** Visit our [GitHub Issues](https://github.com/kyantech/Palmr/issues) or join the community discussions.

@@ -127,10 +127,11 @@ proxy_pass_header Set-Cookie;
environment:
  - PALMR_UID=1000 # Your host UID (check with: id)
  - PALMR_GID=1000 # Your host GID
  - ENCRYPTION_KEY=your-key-here
  - DISABLE_FILESYSTEM_ENCRYPTION=true # Set to false to enable file encryption
  # - ENCRYPTION_KEY=your-key-here # Required only if encryption is enabled
```

> **💡 Note**: Check your host UID/GID with `id` command and use those values. See [UID/GID Configuration](/docs/3.1-beta/uid-gid-configuration) for detailed setup.
> **💡 Note**: Check your host UID/GID with `id` command and use those values. See [UID/GID Configuration](/docs/3.2-beta/uid-gid-configuration) for detailed setup.

---

@@ -7,6 +7,8 @@ This guide provides comprehensive configuration instructions for integrating Pal

> **Overview:** Palmr. supports any S3-compatible storage provider, giving you flexibility to choose the solution that best fits your needs and budget.

> **Note:** Some configuration options (like presigned URL expiration) apply to **all storage types**, including filesystem storage. These are marked accordingly in the documentation.

## When to use S3-compatible storage

Consider using S3-compatible storage when you need:

@@ -19,18 +21,27 @@ Consider using S3-compatible storage when you need:

## Environment variables

### General configuration (applies to all storage types)

| Variable | Description | Required | Default |
| -------------------------- | ------------------------------------------------ | -------- | --------------- |
| `PRESIGNED_URL_EXPIRATION` | Duration in seconds for presigned URL expiration | No | `3600` (1 hour) |

### S3-specific configuration

To enable S3-compatible storage, set `ENABLE_S3=true` and configure the following environment variables:

| Variable | Description | Required | Default |
| --------------------- | ----------------------------- | -------- | ----------------- |
| `S3_ENDPOINT` | S3 provider endpoint URL | Yes | - |
| `S3_PORT` | Connection port | No | Based on protocol |
| `S3_USE_SSL` | Enable SSL/TLS encryption | Yes | `true` |
| `S3_ACCESS_KEY` | Access key for authentication | Yes | - |
| `S3_SECRET_KEY` | Secret key for authentication | Yes | - |
| `S3_REGION` | Storage region | Yes | - |
| `S3_BUCKET_NAME` | Bucket/container name | Yes | - |
| `S3_FORCE_PATH_STYLE` | Use path-style URLs | No | `false` |
| Variable | Description | Required | Default |
| ------------------------ | ---------------------------------------- | -------- | ----------------- |
| `S3_ENDPOINT` | S3 provider endpoint URL | Yes | - |
| `S3_PORT` | Connection port | No | Based on protocol |
| `S3_USE_SSL` | Enable SSL/TLS encryption | Yes | `true` |
| `S3_ACCESS_KEY` | Access key for authentication | Yes | - |
| `S3_SECRET_KEY` | Secret key for authentication | Yes | - |
| `S3_REGION` | Storage region | Yes | - |
| `S3_BUCKET_NAME` | Bucket/container name | Yes | - |
| `S3_FORCE_PATH_STYLE` | Use path-style URLs | No | `false` |
| `S3_REJECT_UNAUTHORIZED` | Enable strict SSL certificate validation | No | `true` |

## Provider configurations

@@ -51,6 +62,7 @@ S3_SECRET_KEY=your-secret-access-key
S3_REGION=us-east-1
S3_BUCKET_NAME=your-bucket-name
S3_FORCE_PATH_STYLE=false
# PRESIGNED_URL_EXPIRATION=3600 # Optional: 1 hour (default)
```

**Getting credentials:**

@@ -81,6 +93,21 @@ S3_FORCE_PATH_STYLE=true
- Default MinIO port is 9000
- SSL can be disabled for local development

**For MinIO with self-signed SSL certificates:**

```bash
ENABLE_S3=true
S3_ENDPOINT=your-minio-domain.com
S3_PORT=9000
S3_USE_SSL=true
S3_ACCESS_KEY=your-minio-access-key
S3_SECRET_KEY=your-minio-secret-key
S3_REGION=us-east-1
S3_BUCKET_NAME=your-bucket-name
S3_FORCE_PATH_STYLE=true
S3_REJECT_UNAUTHORIZED=false # Allows self-signed certificates
```

### Google Cloud Storage

Google Cloud Storage offers competitive pricing and global infrastructure.

@@ -137,6 +164,7 @@ S3_SECRET_KEY=your-application-key
S3_REGION=us-west-002
S3_BUCKET_NAME=your-bucket-name
S3_FORCE_PATH_STYLE=false
# PRESIGNED_URL_EXPIRATION=7200 # Optional: 2 hours for large files
```

**Cost advantage:**

@@ -187,6 +215,93 @@ S3_FORCE_PATH_STYLE=false
- Use container name as bucket name
- Configure appropriate access policies

## Presigned URL configuration

Palmr. uses presigned URLs to provide secure, temporary access to files stored in **both S3-compatible storage and filesystem storage**. These URLs have a configurable expiration time to balance security and usability.

> **Note:** This configuration applies to **all storage types** (S3, filesystem, etc.), not just S3-compatible storage.

### Understanding presigned URLs

Presigned URLs are temporary URLs that allow direct access to files without exposing storage credentials or requiring authentication. They automatically expire after a specified time period, enhancing security by limiting access duration.

**How it works:**

- **S3 Storage:** URLs are signed by AWS/S3-compatible provider credentials
- **Filesystem Storage:** URLs use temporary tokens that are validated by Palmr server

**Default behavior:**

- Upload URLs: 1 hour (3600 seconds)
- Download URLs: 1 hour (3600 seconds)

### Configuring expiration time

You can customize the expiration time using the `PRESIGNED_URL_EXPIRATION` environment variable:

```bash
# Set URLs to expire after 2 hours (7200 seconds)
PRESIGNED_URL_EXPIRATION=7200

# Set URLs to expire after 30 minutes (1800 seconds)
PRESIGNED_URL_EXPIRATION=1800

# Set URLs to expire after 6 hours (21600 seconds)
PRESIGNED_URL_EXPIRATION=21600
```

### Choosing the right expiration time

**Shorter expiration (15-30 minutes):**

- [+] Higher security
- [+] Reduced risk of unauthorized access
- [-] May interrupt long uploads/downloads
- [-] Users may need to refresh links more often

**Longer expiration (2-6 hours):**

- [+] Better user experience for large files
- [+] Fewer interruptions during transfers
- [-] Longer exposure window if URLs are compromised
- [-] Potential for increased storage costs if users leave downloads incomplete

**Recommended settings:**

- **High security environments:** 1800 seconds (30 minutes)
- **Standard usage:** 3600 seconds (1 hour) - default
- **Large file transfers:** 7200-21600 seconds (2-6 hours)

### Example configurations

**For Backblaze B2 with extended expiration:**

```bash
ENABLE_S3=true
S3_ENDPOINT=s3.us-west-002.backblazeb2.com
S3_USE_SSL=true
S3_ACCESS_KEY=your-key-id
S3_SECRET_KEY=your-application-key
S3_REGION=us-west-002
S3_BUCKET_NAME=your-bucket-name
S3_FORCE_PATH_STYLE=false
PRESIGNED_URL_EXPIRATION=7200 # 2 hours for large file transfers
```

**For high-security environments:**

```bash
ENABLE_S3=true
S3_ENDPOINT=s3.amazonaws.com
S3_USE_SSL=true
S3_ACCESS_KEY=your-access-key-id
S3_SECRET_KEY=your-secret-access-key
S3_REGION=us-east-1
S3_BUCKET_NAME=your-bucket-name
S3_FORCE_PATH_STYLE=false
PRESIGNED_URL_EXPIRATION=1800 # 30 minutes for enhanced security
```

## Configuration best practices

### Security considerations

@@ -212,6 +327,19 @@ S3_FORCE_PATH_STYLE=false
- Check firewall and network connectivity
- Ensure SSL/TLS settings match provider requirements

**SSL certificate errors (self-signed certificates):**

If you encounter errors like `unable to verify the first certificate` or `UNABLE_TO_VERIFY_LEAF_SIGNATURE`, you're likely using self-signed SSL certificates. This is common with self-hosted MinIO or other S3-compatible services.

**Solution:**
Set `S3_REJECT_UNAUTHORIZED=false` in your environment variables to allow self-signed certificates:

```bash
S3_REJECT_UNAUTHORIZED=false
```

**Note:** SSL certificate validation is enabled by default (`true`) for security. Set it to `false` only when using self-hosted S3 services with self-signed certificates.

**Authentication failures:**

- Confirm access key and secret key are correct

@@ -44,7 +44,7 @@ The central hub after login, providing an overview of recent activity, quick act

### Files list view

Comprehensive file browser displaying all uploaded files in a detailed list format with metadata, actions, and sorting options.
Comprehensive file browser displaying all uploaded files in a detailed list format with metadata, actions, sorting options, and folder navigation.

<ZoomableImage
  src="/assets/v3/screenshots/files-list.png"

@@ -53,7 +53,7 @@ Comprehensive file browser displaying all uploaded files in a detailed list form

### Files card view

Alternative file browser layout showing files as visual cards, perfect for quick browsing and visual file identification.
Alternative file browser layout showing files as visual cards, perfect for quick browsing, visual file identification, and folder navigation.

<ZoomableImage
  src="/assets/v3/screenshots/files-card.png"

@@ -73,7 +73,7 @@ File upload interface where users can drag and drop or select files to upload to

### Shares page

Management interface for all shared files and folders, showing share status, permissions, and access controls.
Management interface for all shared files and folders, showing share status, permissions, and access controls for both individual files and folders.

<ZoomableImage
  src="/assets/v3/screenshots/shares.png"

@@ -17,7 +17,7 @@ docker-compose logs palmr | grep -i "permission\|denied\|eacces"

# Common error messages:
# EACCES: permission denied, open '/app/server/uploads/file.txt'
# Error: EACCES: permission denied, mkdir '/app/server/temp-chunks'
# Error: EACCES: permission denied, mkdir '/app/server/temp-uploads'
```

### The Root Cause

@@ -25,7 +25,7 @@ docker-compose logs palmr | grep -i "permission\|denied\|eacces"
**Palmr. defaults**: UID 1001, GID 1001
**Linux standard**: UID 1000, GID 1000

When using bind mounts, your host directories are owned by UID 1000, but Palmr. runs as UID 1001.
When using bind mounts, your host directories may have different ownership than Palmr's default UID/GID.

### Solution 1: Environment Variables (Recommended)

@@ -63,8 +63,8 @@ If you prefer to keep Palmr's defaults:
chown -R 1001:1001 ./data

# For separate upload/temp directories
mkdir -p uploads temp-chunks
chown -R 1001:1001 uploads temp-chunks
mkdir -p uploads temp-uploads
chown -R 1001:1001 uploads temp-uploads
```

### Solution 3: Docker Volume (Avoid the Issue)

@@ -109,16 +109,19 @@ docker-compose logs palmr
2. **Invalid encryption key**

   ```bash
   # Error: Encryption key must be at least 32 characters
   # Fix: Update ENCRYPTION_KEY in docker-compose.yaml
   # Error: Encryption key must be at least 32 characters (only if encryption is enabled)
   # Fix: Either disable encryption or provide a valid key
   environment:
     - ENCRYPTION_KEY=your-very-long-secure-key-at-least-32-characters
     - DISABLE_FILESYSTEM_ENCRYPTION=true # Disable encryption (default)
     # OR enable encryption with:
     # - DISABLE_FILESYSTEM_ENCRYPTION=false
     # - ENCRYPTION_KEY=your-very-long-secure-key-at-least-32-characters
   ```

3. **Missing environment variables**
   ```bash
   # Check required variables are set
   docker exec palmr env | grep -E "ENCRYPTION_KEY|DATABASE_URL"
   # Check variables are set (encryption is optional)
   docker exec palmr env | grep -E "DISABLE_FILESYSTEM_ENCRYPTION|ENCRYPTION_KEY|DATABASE_URL"
   ```

### Container Starts But App Doesn't Load

@@ -151,7 +154,7 @@ curl http://localhost:3333/health

```bash
docker exec palmr ls -la /app/server/uploads/
# Should show ownership by palmr user
# Should show ownership by palmr user (UID 1001)
```

3. **Check upload limits:**

@@ -178,13 +181,13 @@ docker exec palmr stat /app/server/uploads/your-file.txt

   ```bash
   # Using the built-in reset script
   docker exec -it palmr /app/reset-password.sh
   docker exec -it palmr /app/palmr-app/reset-password.sh
   ```

2. **Check database permissions:**
   ```bash
   docker exec palmr ls -la /app/server/prisma/
   # palmr.db should be writable by palmr user
   # palmr.db should be writable by palmr user (UID 1001)
   ```

### OIDC Authentication Not Working

@@ -243,8 +246,8 @@ docker exec palmr ls -la /app/server/prisma/palmr.db
# Check database logs
docker-compose logs palmr | grep -i database

# Verify Prisma schema
docker exec palmr npx prisma db push --schema=./prisma/schema.prisma
# Verify Prisma schema (run from palmr-app directory)
docker exec palmr sh -c "cd /app/palmr-app && npx prisma db push"
```

### Database Corruption

@@ -283,7 +286,7 @@ docker-compose up -d

3. **Check temp directory permissions:**
   ```bash
   docker exec palmr ls -la /app/server/temp-chunks/
   docker exec palmr ls -la /app/server/temp-uploads/
   ```

### High Memory Usage

@@ -318,16 +321,19 @@ docker port palmr
echo "4. File Permissions:"
docker exec palmr ls -la /app/server/

echo "5. Environment Variables:"
docker exec palmr env | grep -E "PALMR_|ENCRYPTION_|DATABASE_"
echo "5. Application Files:"
docker exec palmr ls -la /app/palmr-app/

echo "6. API Health:"
echo "6. Environment Variables:"
docker exec palmr env | grep -E "PALMR_|DISABLE_FILESYSTEM_ENCRYPTION|ENCRYPTION_|DATABASE_"

echo "7. API Health:"
curl -s http://localhost:3333/health || echo "API not accessible"

echo "7. Web Interface:"
echo "8. Web Interface:"
curl -s -o /dev/null -w "%{http_code}" http://localhost:5487 || echo "Web interface not accessible"

echo "8. Disk Space:"
echo "9. Disk Space:"
df -h

echo "=== End Health Check ==="

@@ -360,13 +366,11 @@ If none of these solutions work:
```

2. **Check our documentation:**

   - [UID/GID Configuration](/docs/3.0-beta/uid-gid-configuration)
   - [Quick Start Guide](/docs/3.0-beta/quick-start)
   - [API Reference](/docs/3.0-beta/api)

3. **Open an issue on GitHub:**

   - Include your `docker-compose.yaml`
   - Include relevant log output
   - Describe your system (OS, Docker version, etc.)

@@ -9,15 +9,15 @@ Configure user and group permissions for seamless bind mount compatibility acros

Palmr. supports runtime UID/GID configuration to resolve permission conflicts when using bind mounts. This eliminates the need for manual permission management on your host system.

**⚠️ Important**: Palmr uses **UID 1001, GID 1001** by default, which is different from the standard Linux convention of **UID 1000, GID 1000**. This is the most common cause of permission issues with bind mounts.
**⚠️ Important**: Palmr uses **UID 1000, GID 1000** by default, which matches the standard Linux convention. However, some systems may use different UID/GID values, which can cause permission issues with bind mounts.

## The Permission Problem

### Why This Happens

- **Palmr Default**: UID 1001, GID 1001 (container)
- **Palmr Default**: UID 1000, GID 1000 (container)
- **Linux Standard**: UID 1000, GID 1000 (most host systems)
- **Result**: Container can't write to host directories
- **Result**: Usually compatible, but some systems may use different values

### Common Error Scenarios

@@ -30,7 +30,7 @@ EACCES: permission denied, open '/app/server/uploads/file.txt'
# Or when checking permissions:
$ ls -la uploads/
drwxr-xr-x 2 user user 4096 Jan 15 10:00 uploads/
# Container tries to write as UID 1001, but directory is owned by UID 1000
# Container tries to write with different UID/GID than directory owner
```

## Quick Fix

## Quick Fix

@@ -45,15 +45,13 @@ services:
  image: kyantech/palmr:latest
  container_name: palmr
  environment:
    - ENABLE_S3=false
    - ENCRYPTION_KEY=your-secure-key-min-32-chars
    - PALMR_UID=1000
    - PALMR_GID=1000
  ports:
    - "5487:5487"
  volumes:
    - ./uploads:/app/server/uploads:rw
    - ./temp-chunks:/app/server/temp-chunks:rw
    - ./temp-uploads:/app/server/temp-uploads:rw
  restart: unless-stopped
```

@@ -63,8 +61,8 @@ If you prefer to keep Palmr's defaults:

```bash
# Create directories with correct ownership
mkdir -p uploads temp-chunks
chown -R 1001:1001 uploads temp-chunks
mkdir -p uploads temp-uploads
chown -R 1001:1001 uploads temp-uploads
```

## Environment Variables

@@ -104,8 +102,6 @@ services:
  image: kyantech/palmr:latest
  container_name: palmr
  environment:
    - ENABLE_S3=false
    - ENCRYPTION_KEY=your-secure-key-min-32-chars
    - PALMR_UID=1000
    - PALMR_GID=1000
  ports:

@@ -123,8 +119,6 @@ services:
  image: kyantech/palmr:latest
  container_name: palmr
  environment:
    - ENABLE_S3=false
    - ENCRYPTION_KEY=your-secure-key-min-32-chars
    - PALMR_UID=1026
    - PALMR_GID=100
  ports:

@@ -142,8 +136,6 @@ services:
  image: kyantech/palmr:latest
  container_name: palmr
  environment:
    - ENABLE_S3=false
    - ENCRYPTION_KEY=your-secure-key-min-32-chars
    - PALMR_UID=1000
    - PALMR_GID=100
  ports:

@@ -166,7 +158,7 @@ services:
id

# 2. Check directory ownership
ls -la uploads/ temp-chunks/
ls -la uploads/ temp-uploads/

# 3. Fix via environment variables (preferred)
# Add to docker-compose.yaml:

@@ -174,7 +166,7 @@ ls -la uploads/ temp-chunks/
# - PALMR_GID=1000

# 4. Or fix via chown (alternative)
chown -R 1001:1001 uploads temp-chunks
chown -R 1001:1001 uploads temp-uploads
```

**Error**: Container starts but files aren't accessible

@@ -225,11 +217,11 @@ cat /etc/passwd | grep -v nobody

```bash
# Check if directories exist and are writable
test -w uploads && echo "uploads writable" || echo "uploads NOT writable"
test -w temp-chunks && echo "temp-chunks writable" || echo "temp-chunks NOT writable"
test -w temp-uploads && echo "temp-uploads writable" || echo "temp-uploads NOT writable"

# Create directories with correct permissions
mkdir -p uploads temp-chunks
sudo chown -R $(id -u):$(id -g) uploads temp-chunks
mkdir -p uploads temp-uploads
sudo chown -R $(id -u):$(id -g) uploads temp-uploads
```

---

@@ -270,7 +262,7 @@ To add UID/GID configuration to running installations:
cp -r ./data ./data-backup
# or
cp -r ./uploads ./uploads-backup
cp -r ./temp-chunks ./temp-chunks-backup
cp -r ./temp-uploads ./temp-uploads-backup
```

3. **Check your UID/GID**

@@ -344,4 +336,4 @@ For most users experiencing permission issues with bind mounts:
```

3. **Restart**: `docker-compose down && docker-compose up -d`

This resolves the mismatch between Palmr's default UID 1001 and the standard Linux UID 1000.
This ensures compatibility between Palmr's UID/GID and your host system's file ownership.

@@ -1,3 +1,3 @@
{
  "pages": ["3.1-beta", "2.0.0-beta"]
  "pages": ["3.2-beta", "2.0.0-beta"]
}

@@ -1,6 +1,6 @@
{
  "name": "palmr-docs",
  "version": "3.1.3-beta",
  "version": "3.2.5-beta",
  "description": "Docs for Palmr",
  "private": true,
  "author": "Daniel Luiz Alves <daniel@kyantech.com.br>",

@@ -13,7 +13,7 @@
  "react",
  "typescript"
],
  "license": "BSD-2-Clause",
  "license": "Apache-2.0",
  "packageManager": "pnpm@10.6.0",
  "scripts": {
    "build": "next build",

apps/docs/pnpm-lock.yaml (generated, 4878 lines changed): diff suppressed because it is too large

@@ -12,7 +12,6 @@ import {
  LayoutIcon,
  LockIcon,
  MousePointer,
  RadioIcon,
  RocketIcon,
  SearchIcon,
  TimerIcon,

@@ -60,13 +59,13 @@ const images = [
  "https://res.cloudinary.com/technical-intelligence/image/upload/v1745546005/Palmr./profile_mizwvg.png",
];

const docsLink = "/docs/3.1-beta";
const docsLink = "/docs/3.2-beta";

function Hero() {
  return (
    <section className="relative z-[2] flex flex-col border-x border-t px-6 pt-12 pb-10 md:px-12 md:pt-16 max-md:text-center">
      <h1 className="mb-8 text-6xl font-bold">
        Palmr. <span className="text-[13px] font-light text-muted-foreground/50 font-mono">v3.1-beta</span>
        Palmr. <span className="text-[13px] font-light text-muted-foreground/50 font-mono">v3.2-beta</span>
      </h1>
      <h1 className="hidden text-4xl font-medium max-w-[600px] md:block mb-4">Modern & efficient file sharing</h1>
      <p className="mb-8 text-fd-muted-foreground md:max-w-[80%] md:text-xl">

@@ -82,23 +81,6 @@ function Hero() {
          <Link href={docsLink}>Documentation</Link>
        </div>
      </PulsatingButton>
      <RippleButton
        onClick={() => {
          const demoId = `${Math.random().toString(36).substr(2, 9)}`;
          const token = `${Math.random().toString(36).substr(2, 12)}`;

          sessionStorage.setItem("demo_token", token);
          sessionStorage.setItem("demo_id", demoId);
          sessionStorage.setItem("demo_expires", (Date.now() + 5 * 60 * 1000).toString());

          window.location.href = `/demo?id=${demoId}&token=${token}`;
        }}
      >
        <div className="flex gap-2 items-center">
          <RadioIcon size={18} />
          Live Demo
        </div>
      </RippleButton>
      <RippleButton>
        <a
          href="https://github.com/kyantech/Palmr"

@@ -312,7 +294,7 @@ function FullWidthFooter() {
<div className="flex items-center gap-1 text-sm max-w-7xl">
  <span>Powered by</span>
  <Link
    href="http://kyantech.com.br"
    href="https://github.com/kyantech"
    rel="noopener noreferrer"
    target="_blank"
    className="flex items-center hover:text-green-700 text-green-500 transition-colors font-light"

@@ -9,6 +9,6 @@ export const { GET } = createFromSource(source, (page) => {
  url: page.url,
  id: page.url,
  structuredData: page.data.structuredData,
  tag: page.url.startsWith("/docs/3.1-beta") ? "v3.1-beta" : "v2.0.0-beta",
  tag: page.url.startsWith("/docs/3.2-beta") ? "v3.2-beta" : "v2.0.0-beta",
};
});

@@ -1,225 +0,0 @@
"use client";

import { useEffect, useState } from "react";
import { useSearchParams } from "next/navigation";
import { Palmtree } from "lucide-react";
import { motion } from "motion/react";

import { BackgroundLights } from "@/components/ui/background-lights";
import { Button } from "@/components/ui/button";

interface DemoStatus {
  status: "waiting" | "ready";
  url: string | null;
}

interface CreateDemoResponse {
  message: string;
  url: string | null;
}

function DemoClientInner() {
  const searchParams = useSearchParams();
  const demoId = searchParams.get("id");
  const token = searchParams.get("token");

  const [status, setStatus] = useState<DemoStatus | null>(null);
  const [isLoading, setIsLoading] = useState(true);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    const validateAccess = () => {
      const storedToken = sessionStorage.getItem("demo_token");
      const storedId = sessionStorage.getItem("demo_id");
      const expiresAt = sessionStorage.getItem("demo_expires");

      if (!demoId || !token || !storedToken || !storedId || !expiresAt) {
        return false;
      }

      if (token !== storedToken || demoId !== storedId || Date.now() > parseInt(expiresAt)) {
        return false;
      }

      return true;
    };

    if (!validateAccess()) {
      setError("Unauthorized access. Please use the Live Demo button to access this page.");
      setIsLoading(false);
      return;
    }

    const createDemo = async () => {
      try {
        const response = await fetch("https://palmr-demo-manager.kyantech.com.br/create-demo", {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
          },
          body: JSON.stringify({
            palmr_demo_instance_id: demoId,
          }),
        });

        if (!response.ok) {
          throw new Error("Failed to create demo");
        }

        const data: CreateDemoResponse = await response.json();
        console.log("Demo creation response:", data);
      } catch (err) {
        console.error("Error creating demo:", err);
        setError("Failed to create demo. Please try again.");
        setIsLoading(false);
      }
    };

    const checkStatus = async () => {
      try {
        const response = await fetch(`https://palmr-demo-manager.kyantech.com.br/status/${demoId}`);

        if (!response.ok) {
          throw new Error("Failed to check demo status");
        }

        const data: DemoStatus = await response.json();
        setStatus(data);

        if (data.status === "ready" && data.url) {
          setIsLoading(false);
        }
      } catch (err) {
        console.error("Error checking status:", err);
        setError("Failed to check demo status. Please try again.");
        setIsLoading(false);
      }
    };

    createDemo();

    const interval = setInterval(checkStatus, 5000); // Check every 5 seconds

    checkStatus();

    return () => {
      clearInterval(interval);
      sessionStorage.removeItem("demo_token");
      sessionStorage.removeItem("demo_id");
      sessionStorage.removeItem("demo_expires");
    };
  }, [demoId, token]);

  const handleGoToDemo = () => {
    if (status?.url) {
      window.open(status.url, "_blank");
    }
    window.location.href = "/";
  };

  if (error) {
    return (
      <div className="fixed inset-0 bg-background">
        <BackgroundLights />
        <div className="relative flex flex-col items-center justify-center h-full">
          <div className="text-center space-y-6 max-w-md">
            <h1 className="text-2xl font-bold text-destructive">Error</h1>
            <p className="text-muted-foreground">{error}</p>
            <Button
              onClick={() => {
                sessionStorage.removeItem("demo_token");
                sessionStorage.removeItem("demo_id");
                sessionStorage.removeItem("demo_expires");
                window.location.href = "/";
              }}
            >
              Go Back
            </Button>
          </div>
        </div>
      </div>
    );
  }

  if (isLoading) {
    return (
      <div className="fixed inset-0 bg-background">
        <BackgroundLights />
        <div className="flex flex-col items-center gap-6 text-center h-full justify-center">
          <div className="space-y-4">
            <h1 className="text-2xl font-bold">Your demo is being generated, please wait...</h1>
            <p className="text-muted-foreground max-w-lg">
              This demo will be available for 30 minutes for testing. After that, all data will be permanently deleted
              and become inaccessible. You can test Palmr. with a 200MB storage limit.
            </p>
          </div>
        </div>
      </div>
    );
  }

  return (
    <div className="fixed inset-0 bg-background">
      <BackgroundLights />
      <div className="relative flex flex-col items-center justify-center h-full">
        <motion.div
          initial={{ opacity: 0, y: 20 }}
          animate={{ opacity: 1, y: 0 }}
          transition={{ duration: 0.5 }}
          className="container mx-auto max-w-7xl px-6 flex-grow"
        >
          <section className="relative flex flex-col items-center justify-center gap-6 m-auto h-full">
            <motion.div
              initial={{ opacity: 0, y: 20 }}
              animate={{ opacity: 1, y: 0 }}
              transition={{ duration: 0.5, delay: 0.2 }}
              className="inline-block max-w-xl text-center justify-center"
            >
              <div className="flex flex-col gap-8">
                <div className="flex flex-col gap-2">
                  <motion.span
                    initial={{ opacity: 0, x: -20 }}
                    animate={{ opacity: 1, x: 0 }}
                    transition={{ delay: 0.4, duration: 0.5 }}
                    className="text-4xl lg:text-3xl font-semibold tracking-tight text-primary"
                  >
                    Your demo is ready!
                  </motion.span>
                  <motion.span
                    initial={{ opacity: 0, x: 20 }}
                    animate={{ opacity: 1, x: 0 }}
                    transition={{ delay: 0.6, duration: 0.5 }}
                    className="text-3xl leading-9 font-semibold tracking-tight"
                  >
                    Click the button below to test
                  </motion.span>
                </div>
              </div>
            </motion.div>
            <motion.div
              initial={{ opacity: 0, y: 20 }}
              animate={{ opacity: 1, y: 0 }}
              transition={{ duration: 0.5, delay: 0.8 }}
              className="flex flex-col items-center gap-6"
            >
              <motion.div
                initial={{ opacity: 0, scale: 0.9 }}
                animate={{ opacity: 1, scale: 1 }}
                transition={{ delay: 1.2, duration: 0.5 }}
              >
                <Button onClick={handleGoToDemo} className="flex items-center gap-2 px-8 py-4 text-lg">
                  <Palmtree className="h-5 w-5" />
                  Go to Palmr. Demo
                </Button>
              </motion.div>
            </motion.div>
          </section>
        </motion.div>
      </div>
    </div>
  );
}

export default function DemoClient() {
  return <DemoClientInner />;
}

@@ -1,13 +0,0 @@
"use client";

import { Suspense } from "react";

import DemoClient from "./components/demo-client";

export default function DemoPage() {
  return (
    <Suspense>
      <DemoClient />
    </Suspense>
  );
}

@@ -11,7 +11,7 @@ import { Sponsor } from "../components/sponsor";
export default async function Page(props: { params: Promise<{ slug?: string[] }> }) {
  const params = await props.params;
  const page = source.getPage(params.slug);
  if (!page) redirect("/docs/3.1-beta");
  if (!page) redirect("/docs/3.2-beta");

  const MDXContent = page.data.body;

@@ -49,7 +49,7 @@ export async function generateStaticParams() {
export async function generateMetadata(props: { params: Promise<{ slug?: string[] }> }) {
  const params = await props.params;
  const page = source.getPage(params.slug);
  if (!page) redirect("/docs/3.1-beta");
  if (!page) redirect("/docs/3.2-beta");

  return {
    title: page.data.title + " | Palmr. Docs",

@@ -6,7 +6,7 @@ export function Footer() {
<div className="flex items-center gap-1 text-sm ">
  <span>Powered by</span>
  <Link
    href="http://kyantech.com.br"
    href="https://github.com/kyantech"
    rel="noopener noreferrer"
    target="_blank"
    className="flex items-center hover:text-green-700 text-green-500 transition-colors font-light"

@@ -28,7 +28,7 @@ export default function Layout({ children }: { children: ReactNode }) {
<RootProvider
  search={{
    options: {
      defaultTag: "3.1-beta",
      defaultTag: "3.2-beta",
      tags: [
        {
          name: "v2.0.0 Beta",

@@ -36,7 +36,7 @@ export default function Layout({ children }: { children: ReactNode }) {
        },
        {
          name: "v3.0 Beta ✨",
          value: "3.1-beta",
          value: "3.2-beta",
          props: {
            style: {
              border: "1px solid rgba(0,165,80,0.2)",

@@ -6,61 +6,61 @@ const providers = [
{
  name: "Google",
  description: "Configure authentication using Google OAuth2 services",
  href: "/docs/3.1-beta/oidc-authentication/google",
  href: "/docs/3.2-beta/oidc-authentication/google",
  icon: <Chrome className="w-4 h-4" />,
},
{
  name: "Discord",
  description: "Set up Discord OAuth2 for community-based authentication",
  href: "/docs/3.1-beta/oidc-authentication/discord",
  href: "/docs/3.2-beta/oidc-authentication/discord",
  icon: <MessageSquare className="w-4 h-4" />,
},
{
  name: "GitHub",
  description: "Enable GitHub OAuth for developer-friendly sign-in",
  href: "/docs/3.1-beta/oidc-authentication/github",
  href: "/docs/3.2-beta/oidc-authentication/github",
  icon: <Github className="w-4 h-4" />,
},
{
  name: "Zitadel",
  description: "Enterprise-grade identity and access management",
  href: "/docs/3.1-beta/oidc-authentication/zitadel",
  href: "/docs/3.2-beta/oidc-authentication/zitadel",
  icon: <Shield className="w-4 h-4" />,
},
{
  name: "Auth0",
  description: "Flexible identity platform with extensive customization",
  href: "/docs/3.1-beta/oidc-authentication/auth0",
  href: "/docs/3.2-beta/oidc-authentication/auth0",
  icon: <Lock className="w-4 h-4" />,
},
{
  name: "Authentik",
  description: "Open-source identity provider with modern features",
  href: "/docs/3.1-beta/oidc-authentication/authentik",
  href: "/docs/3.2-beta/oidc-authentication/authentik",
  icon: <Key className="w-4 h-4" />,
},
{
  name: "Frontegg",
  description: "User management platform for B2B applications",
  href: "/docs/3.1-beta/oidc-authentication/frontegg",
  href: "/docs/3.2-beta/oidc-authentication/frontegg",
  icon: <Egg className="w-4 h-4" />,
},
{
  name: "Kinde Auth",
  description: "Developer-first authentication and user management",
  href: "/docs/3.1-beta/oidc-authentication/kinde-auth",
  href: "/docs/3.2-beta/oidc-authentication/kinde-auth",
  icon: <Users className="w-4 h-" />,
},
{
  name: "Pocket ID",
  description: "Open-source identity provider with OIDC support",
  href: "/docs/3.1-beta/oidc-authentication/pocket-id",
  href: "/docs/3.2-beta/oidc-authentication/pocket-id",
  icon: <Key className="w-4 h-4" />,
},
{
  name: "Other",
  description: "Configure any other OIDC-compliant identity provider",
  href: "/docs/3.1-beta/oidc-authentication/other",
  href: "/docs/3.2-beta/oidc-authentication/other",
  icon: <Settings className="w-4 h-4" />,
},
];

@@ -5,14 +5,15 @@ import { cn } from "@/lib/utils";

interface CardProps {
  title: string;
  description: string;
  description?: string;
  href?: string;
  icon?: ReactNode;
  className?: string;
  onClick?: () => void;
  children?: ReactNode;
}

export const Card = ({ title, description, href, icon, className, onClick }: CardProps) => {
export const Card = ({ title, description, href, icon, className, onClick, children }: CardProps) => {
  const cardContent = (
    <div
      className={cn(

@@ -37,9 +38,16 @@ export const Card = ({ title, description, href, icon, className, onClick }: Car
<h3 className="font-medium text-sm text-foreground mb-1 group-hover:text-primary transition-colors duration-200 mt-3 text-decoration-none">
  {title}
</h3>
<p className="text-xs text-muted-foreground/80 leading-relaxed line-clamp-2 group-hover:text-muted-foreground transition-colors duration-200">
  {description}
</p>
{description && (
  <p className="text-xs text-muted-foreground/80 leading-relaxed line-clamp-2 group-hover:text-muted-foreground transition-colors duration-200">
    {description}
  </p>
)}
{children && (
  <div className="text-xs text-muted-foreground/80 leading-relaxed group-hover:text-muted-foreground transition-colors duration-200 mt-2">
    {children}
  </div>
)}
</div>
<div className="flex-shrink-0 ml-2">
  <div className="w-5 h-5 rounded-full bg-muted/40 flex items-center justify-center opacity-0 group-hover:opacity-100 group-hover:bg-primary/10 transition-all duration-200">

@@ -1,2 +1,2 @@
export const LATEST_VERSION_PATH = "/docs/3.1-beta";
export const LATEST_VERSION = "v3.1-beta";
export const LATEST_VERSION_PATH = "/docs/3.2-beta";
export const LATEST_VERSION = "v3.2-beta";

@@ -1,6 +1,7 @@
# FOR FILESYSTEM STORAGE ENV VARS
ENABLE_S3=false
ENCRYPTION_KEY=change-this-key-in-production-min-32-chars
DISABLE_FILESYSTEM_ENCRYPTION=true
# ENCRYPTION_KEY=change-this-key-in-production-min-32-chars # Required only if encryption is enabled (DISABLE_FILESYSTEM_ENCRYPTION=false)
DATABASE_URL="file:./palmr.db"

# FOR USE WITH S3 COMPATIBLE STORAGE

@@ -13,3 +14,5 @@ DATABASE_URL="file:./palmr.db"
# S3_REGION=
# S3_BUCKET_NAME=
# S3_FORCE_PATH_STYLE=
# S3_REJECT_UNAUTHORIZED=true # Set to false to disable strict SSL certificate validation for self-signed certificates (optional, defaults to true)
# PRESIGNED_URL_EXPIRATION=3600 # Duration in seconds for presigned URL expiration (optional, defaults to 3600 seconds / 1 hour)

apps/server/.gitignore (vendored, 1 line changed)
@@ -4,3 +4,4 @@ dist/*
uploads/*
temp-uploads/*
prisma/*.db
tsconfig.tsbuildinfo

@@ -1,6 +1,6 @@
{
  "name": "palmr-api",
  "version": "3.1.3-beta",
  "version": "3.2.5-beta",
  "description": "API for Palmr",
  "private": true,
  "author": "Daniel Luiz Alves <daniel@kyantech.com.br>",

@@ -12,7 +12,7 @@
  "nodejs",
  "typescript"
],
  "license": "BSD-2-Clause",
  "license": "Apache-2.0",
  "packageManager": "pnpm@10.6.0",
  "main": "index.js",
  "scripts": {

@@ -25,7 +25,9 @@
  "format:check": "prettier . --check",
  "type-check": "npx tsc --noEmit",
  "validate": "pnpm lint && pnpm type-check",
  "db:seed": "ts-node prisma/seed.js"
  "db:seed": "ts-node prisma/seed.js",
  "cleanup:orphan-files": "tsx src/scripts/cleanup-orphan-files.ts",
  "cleanup:orphan-files:confirm": "tsx src/scripts/cleanup-orphan-files.ts --confirm"
},
"prisma": {
  "seed": "node prisma/seed.js"

@@ -77,4 +79,4 @@
  "tsx": "^4.19.2",
  "typescript": "^5.7.3"
}
}
}

apps/server/pnpm-lock.yaml (generated, 4498 lines changed): diff suppressed because it is too large

@@ -1,287 +1,318 @@
 generator client {
 provider = "prisma-client-js"
 }

 datasource db {
 provider = "sqlite"
 url = env("DATABASE_URL")
 }

 model User {
 id String @id @default(cuid())
 firstName String
 lastName String
 username String @unique
 email String @unique
 password String?
 image String?
 isAdmin Boolean @default(false)
 isActive Boolean @default(true)
 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 twoFactorEnabled Boolean @default(false)
 twoFactorSecret String?
 twoFactorBackupCodes String?
 twoFactorVerified Boolean @default(false)

 files File[]
+folders Folder[]
 shares Share[]
 reverseShares ReverseShare[]

 loginAttempts LoginAttempt?

 passwordResets PasswordReset[]
 authProviders UserAuthProvider[]
 trustedDevices TrustedDevice[]

 @@map("users")
 }

 model File {
 id String @id @default(cuid())
 name String
 description String?
 extension String
 size BigInt
 objectName String

 userId String
 user User @relation(fields: [userId], references: [id], onDelete: Cascade)
+
+folderId String?
+folder Folder? @relation(fields: [folderId], references: [id], onDelete: Cascade)

 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 shares Share[] @relation("ShareFiles")

+@@index([folderId])
 @@map("files")
 }

 model Share {
 id String @id @default(cuid())
 name String?
 views Int @default(0)
 expiration DateTime?
 description String?
 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 creatorId String?
 creator User? @relation(fields: [creatorId], references: [id], onDelete: SetNull)

 securityId String @unique
 security ShareSecurity @relation(fields: [securityId], references: [id])

 files File[] @relation("ShareFiles")
+folders Folder[] @relation("ShareFolders")
 recipients ShareRecipient[]

 alias ShareAlias?

 @@map("shares")
 }

 model ShareSecurity {
 id String @id @default(cuid())
 password String?
 maxViews Int?
 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 share Share?

 @@map("share_security")
 }

 model ShareRecipient {
 id String @id @default(cuid())
 email String

 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 shareId String
 share Share @relation(fields: [shareId], references: [id], onDelete: Cascade)

 @@map("share_recipients")
 }

 model AppConfig {
 id String @id @default(cuid())
 key String @unique
 value String
 type String
 group String
 isSystem Boolean @default(true)
 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 @@map("app_configs")
 }

 model LoginAttempt {
 id String @id @default(cuid())
 userId String @unique
 user User @relation(fields: [userId], references: [id], onDelete: Cascade)
 attempts Int @default(1)
 lastAttempt DateTime @default(now())

 @@map("login_attempts")
 }

 model PasswordReset {
 id String @id @default(cuid())
 userId String
 user User @relation(fields: [userId], references: [id], onDelete: Cascade)
 token String @unique
 expiresAt DateTime
 used Boolean @default(false)
 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 @@map("password_resets")
 }

 model ShareAlias {
 id String @id @default(cuid())
 alias String @unique
 shareId String @unique
 share Share @relation(fields: [shareId], references: [id], onDelete: Cascade)
 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 @@map("share_aliases")
 }

 model AuthProvider {
 id String @id @default(cuid())
 name String @unique
 displayName String
 type String
 icon String?
 enabled Boolean @default(false)

 issuerUrl String?
 clientId String?
 clientSecret String?
 redirectUri String?
 scope String? @default("openid profile email")

 authorizationEndpoint String?
 tokenEndpoint String?
 userInfoEndpoint String?

 metadata String?

 autoRegister Boolean @default(true)
 adminEmailDomains String?

 sortOrder Int @default(0)

 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 userAuthProviders UserAuthProvider[]

 @@map("auth_providers")
 }

 model UserAuthProvider {
 id String @id @default(cuid())
 userId String
 user User @relation(fields: [userId], references: [id], onDelete: Cascade)

 providerId String
 authProvider AuthProvider @relation(fields: [providerId], references: [id], onDelete: Cascade)

 provider String?

 externalId String
 metadata String?
 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 @@unique([userId, providerId])
 @@unique([providerId, externalId])
 @@map("user_auth_providers")
 }

 model ReverseShare {
 id String @id @default(cuid())
 name String?
 description String?
 expiration DateTime?
 maxFiles Int?
 maxFileSize BigInt?
 allowedFileTypes String?
 password String?
 pageLayout PageLayout @default(DEFAULT)
 isActive Boolean @default(true)
 nameFieldRequired FieldRequirement @default(OPTIONAL)
 emailFieldRequired FieldRequirement @default(OPTIONAL)
 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 creatorId String
 creator User @relation(fields: [creatorId], references: [id], onDelete: Cascade)

 files ReverseShareFile[]
 alias ReverseShareAlias?

 @@map("reverse_shares")
 }

 model ReverseShareFile {
 id String @id @default(cuid())
 name String
 description String?
 extension String
 size BigInt
 objectName String
 uploaderEmail String?
 uploaderName String?
 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 reverseShareId String
 reverseShare ReverseShare @relation(fields: [reverseShareId], references: [id], onDelete: Cascade)

 @@map("reverse_share_files")
 }

 model ReverseShareAlias {
 id String @id @default(cuid())
 alias String @unique
 reverseShareId String @unique
 reverseShare ReverseShare @relation(fields: [reverseShareId], references: [id], onDelete: Cascade)
 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 @@map("reverse_share_aliases")
 }

 enum FieldRequirement {
 HIDDEN
 OPTIONAL
 REQUIRED
 }

 enum PageLayout {
 DEFAULT
 WETRANSFER
 }

 model TrustedDevice {
 id String @id @default(cuid())
 userId String
 user User @relation(fields: [userId], references: [id], onDelete: Cascade)
 deviceHash String @unique
 deviceName String?
 userAgent String?
 ipAddress String?
 lastUsedAt DateTime @default(now())
 expiresAt DateTime
 createdAt DateTime @default(now())
 updatedAt DateTime @updatedAt

 @@map("trusted_devices")
 }
+
+model Folder {
+id String @id @default(cuid())
+name String
+description String?
+objectName String
+
+parentId String?
+parent Folder? @relation("FolderHierarchy", fields: [parentId], references: [id], onDelete: Cascade)
+children Folder[] @relation("FolderHierarchy")
+
+userId String
+user User @relation(fields: [userId], references: [id], onDelete: Cascade)
+
+files File[]
+
+shares Share[] @relation("ShareFolders")
+
+createdAt DateTime @default(now())
+updatedAt DateTime @updatedAt
+
+@@index([userId])
+@@index([parentId])
+@@map("folders")
+}
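The new `Folder` model is self-referential (`FolderHierarchy`), and files attach to it through the optional `folderId`. A minimal sketch of querying the new relations, assuming the generated Prisma client; `listFolderContents` is a hypothetical helper, not code from this commit:

```ts
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Hypothetical helper: list a user's files and subfolders at one level.
async function listFolderContents(userId: string, folderId: string | null) {
  // folderId: null matches root-level files, since the relation is optional
  const files = await prisma.file.findMany({
    where: { userId, folderId },
    orderBy: { createdAt: "desc" },
  });
  const subfolders = await prisma.folder.findMany({
    where: { userId, parentId: folderId },
  });
  return { files, subfolders };
}
```

Note the `onDelete: Cascade` on both `parent` and `folder`: deleting a folder removes its subtree and the files inside it in one operation.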
@@ -17,6 +17,12 @@ const defaultConfigs = [
 type: "boolean",
 group: "general",
 },
+{
+key: "hideVersion",
+value: "false",
+type: "boolean",
+group: "general",
+},
 {
 key: "appDescription",
 value: "Secure and simple file sharing - Your personal cloud",
@@ -147,6 +153,12 @@ const defaultConfigs = [
 type: "boolean",
 group: "auth-providers",
 },
+{
+key: "passwordAuthEnabled",
+value: "true",
+type: "boolean",
+group: "security",
+},
 {
 key: "serverUrl",
 value: "http://localhost:3333",
@@ -6,7 +6,7 @@
 echo "🔐 Palmr Password Reset Tool"
 echo "============================="

-# Check if we're in the right directory
+# Check if we're in the right directory and set DATABASE_URL
 if [ ! -f "package.json" ]; then
 echo "❌ Error: This script must be run from the server directory (/app/server)"
 echo " Current directory: $(pwd)"
@@ -14,18 +14,26 @@ if [ ! -f "package.json" ]; then
 exit 1
 fi

+# Set DATABASE_URL if not already set
+if [ -z "$DATABASE_URL" ]; then
+export DATABASE_URL="file:/app/server/prisma/palmr.db"
+fi
+
+# Ensure database directory exists
+mkdir -p /app/server/prisma
+
 # Function to check if tsx is available
 check_tsx() {
 # Check if tsx binary exists in node_modules
 if [ -f "node_modules/.bin/tsx" ]; then
 return 0
 fi

 # Fallback: try npx
 if npx tsx --version >/dev/null 2>&1; then
 return 0
 fi

 return 1
 }

@@ -39,7 +47,7 @@ install_tsx_only() {
 else
 return 1
 fi

 return $?
 }

@@ -62,7 +70,7 @@ ensure_prisma() {
 if [ -d "node_modules/@prisma/client" ] && [ -f "node_modules/@prisma/client/index.js" ]; then
 return 0
 fi

 echo "📦 Generating Prisma client..."
 if npx prisma generate --silent >/dev/null 2>&1; then
 echo "✅ Prisma client ready"
@@ -81,14 +89,14 @@ if check_tsx; then
 echo "✅ tsx is ready"
 else
 echo "📦 tsx not found, installing..."

 # Try quick tsx-only install first
 if install_tsx_only && check_tsx; then
 echo "✅ tsx installed successfully"
 else
 echo "⚠️ Quick install failed, installing all dependencies..."
 install_all_deps

 # Final check
 if ! check_tsx; then
 echo "❌ Error: tsx is still not available after full installation"
@@ -119,4 +127,4 @@ if [ -f "node_modules/.bin/tsx" ]; then
 node_modules/.bin/tsx src/scripts/reset-password.ts "$@"
 else
 npx tsx src/scripts/reset-password.ts "$@"
 fi
 fi
@@ -1,4 +1,5 @@
 import crypto from "node:crypto";
+import * as http from "node:http";
 import fastifyCookie from "@fastify/cookie";
 import { fastifyCors } from "@fastify/cors";
 import fastifyJwt from "@fastify/jwt";
@@ -31,6 +32,31 @@ export async function buildApp() {
 keepAliveTimeout: envTimeoutOverrides.keepAliveTimeout,
 requestTimeout: envTimeoutOverrides.requestTimeout,
 trustProxy: true,
+maxParamLength: 500,
+onProtoPoisoning: "ignore",
+onConstructorPoisoning: "ignore",
+ignoreTrailingSlash: true,
+serverFactory: (handler: (req: any, res: any) => void) => {
+const server = http.createServer((req: http.IncomingMessage, res: http.ServerResponse) => {
+res.setTimeout(0);
+req.setTimeout(0);
+
+req.on("close", () => {
+if (typeof global !== "undefined" && global.gc) {
+setImmediate(() => global.gc!());
+}
+});
+
+handler(req, res);
+});
+
+server.maxHeadersCount = 0;
+server.timeout = 0;
+server.keepAliveTimeout = envTimeoutOverrides.keepAliveTimeout;
+server.headersTimeout = envTimeoutOverrides.keepAliveTimeout + 1000;
+
+return server;
+},
 }).withTypeProvider<ZodTypeProvider>();

 app.setValidatorCompiler(validatorCompiler);
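One caveat on the `serverFactory` hunk above: `global.gc` is only defined when Node is launched with the `--expose-gc` flag, so under a default start command the `req.on("close")` hook is a harmless no-op. A hedged illustration (the entrypoint path is an assumption):

```ts
// Only meaningful when Node is started with --expose-gc (assumed invocation):
//   node --expose-gc dist/server.js
if (typeof global !== "undefined" && typeof (global as any).gc === "function") {
  (global as any).gc(); // manual GC hint, mirroring the guard in the hunk above
}
```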
@@ -1,9 +1,57 @@
+import * as fs from "fs";
+import process from "node:process";
 import { S3Client } from "@aws-sdk/client-s3";

 import { env } from "../env";
 import { StorageConfig } from "../types/storage";

-export const storageConfig: StorageConfig = {
+/**
+ * Load internal storage credentials if they exist
+ * This provides S3-compatible storage automatically when ENABLE_S3=false
+ */
+function loadInternalStorageCredentials(): Partial<StorageConfig> | null {
+const credentialsPath = "/app/server/.minio-credentials";
+
+try {
+if (fs.existsSync(credentialsPath)) {
+const content = fs.readFileSync(credentialsPath, "utf-8");
+const credentials: any = {};
+
+content.split("\n").forEach((line) => {
+const [key, value] = line.split("=");
+if (key && value) {
+credentials[key.trim()] = value.trim();
+}
+});
+
+console.log("[STORAGE] Using internal storage system");
+
+return {
+endpoint: credentials.S3_ENDPOINT || "127.0.0.1",
+port: parseInt(credentials.S3_PORT || "9379", 10),
+useSSL: credentials.S3_USE_SSL === "true",
+accessKey: credentials.S3_ACCESS_KEY,
+secretKey: credentials.S3_SECRET_KEY,
+region: credentials.S3_REGION || "default",
+bucketName: credentials.S3_BUCKET_NAME || "palmr-files",
+forcePathStyle: true,
+};
+}
+} catch (error) {
+console.warn("[STORAGE] Could not load internal storage credentials:", error);
+}
+
+return null;
+}
+
+/**
+ * Storage configuration:
+ * - Default (ENABLE_S3=false or not set): Internal storage (auto-configured, zero config)
+ * - ENABLE_S3=true: External S3 (AWS, S3-compatible, etc) using env vars
+ */
+const internalStorageConfig = env.ENABLE_S3 === "true" ? null : loadInternalStorageCredentials();
+
+export const storageConfig: StorageConfig = (internalStorageConfig as StorageConfig) || {
 endpoint: env.S3_ENDPOINT || "",
 port: env.S3_PORT ? Number(env.S3_PORT) : undefined,
 useSSL: env.S3_USE_SSL === "true",
@@ -14,21 +62,82 @@ export const storageConfig: StorageConfig = {
 forcePathStyle: env.S3_FORCE_PATH_STYLE === "true",
 };

-export const s3Client =
-env.ENABLE_S3 === "true"
-? new S3Client({
-endpoint: storageConfig.useSSL
-? `https://${storageConfig.endpoint}${storageConfig.port ? `:${storageConfig.port}` : ""}`
-: `http://${storageConfig.endpoint}${storageConfig.port ? `:${storageConfig.port}` : ""}`,
-region: storageConfig.region,
-credentials: {
-accessKeyId: storageConfig.accessKey,
-secretAccessKey: storageConfig.secretKey,
-},
-forcePathStyle: storageConfig.forcePathStyle,
-})
-: null;
+if (storageConfig.useSSL && env.S3_REJECT_UNAUTHORIZED === "false") {
+const originalRejectUnauthorized = process.env.NODE_TLS_REJECT_UNAUTHORIZED;
+if (!originalRejectUnauthorized) {
+process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
+(global as any).PALMR_ORIGINAL_TLS_SETTING = originalRejectUnauthorized;
+}
+}
+
+/**
+ * Storage is ALWAYS S3-compatible:
+ * - ENABLE_S3=false → Internal storage (automatic)
+ * - ENABLE_S3=true → External S3 (AWS, S3-compatible, etc)
+ */
+const hasValidConfig = storageConfig.endpoint && storageConfig.accessKey && storageConfig.secretKey;
+
+export const s3Client = hasValidConfig
+? new S3Client({
+endpoint: storageConfig.useSSL
+? `https://${storageConfig.endpoint}${storageConfig.port ? `:${storageConfig.port}` : ""}`
+: `http://${storageConfig.endpoint}${storageConfig.port ? `:${storageConfig.port}` : ""}`,
+region: storageConfig.region,
+credentials: {
+accessKeyId: storageConfig.accessKey,
+secretAccessKey: storageConfig.secretKey,
+},
+forcePathStyle: storageConfig.forcePathStyle,
+})
+: null;

 export const bucketName = storageConfig.bucketName;

-export const isS3Enabled = env.ENABLE_S3 === "true";
+/**
+ * Storage is always S3-compatible
+ * ENABLE_S3=true means EXTERNAL S3, otherwise uses internal storage
+ */
+export const isS3Enabled = s3Client !== null;
+export const isExternalS3 = env.ENABLE_S3 === "true";
+export const isInternalStorage = s3Client !== null && env.ENABLE_S3 !== "true";
+
+/**
+ * Creates a public S3 client for presigned URL generation.
+ * - Internal storage (ENABLE_S3=false): Uses STORAGE_URL (e.g., https://syrg.palmr.com)
+ * - External S3 (ENABLE_S3=true): Uses the original S3 endpoint configuration
+ *
+ * @returns S3Client configured with public endpoint, or null if S3 is disabled
+ */
+export function createPublicS3Client(): S3Client | null {
+if (!s3Client) {
+return null;
+}
+
+let publicEndpoint: string;
+
+if (isInternalStorage) {
+// Internal storage: use STORAGE_URL
+if (!env.STORAGE_URL) {
+throw new Error(
+"[STORAGE] STORAGE_URL environment variable is required when using internal storage (ENABLE_S3=false). " +
+"Set STORAGE_URL to your public storage URL with protocol (e.g., https://syrg.palmr.com or http://192.168.1.100:9379)"
+);
+}
+publicEndpoint = env.STORAGE_URL;
+} else {
+// External S3: use the original endpoint configuration
+publicEndpoint = storageConfig.useSSL
+? `https://${storageConfig.endpoint}${storageConfig.port ? `:${storageConfig.port}` : ""}`
+: `http://${storageConfig.endpoint}${storageConfig.port ? `:${storageConfig.port}` : ""}`;
+}
+
+return new S3Client({
+endpoint: publicEndpoint,
+region: storageConfig.region,
+credentials: {
+accessKeyId: storageConfig.accessKey,
+secretAccessKey: storageConfig.secretKey,
+},
+forcePathStyle: storageConfig.forcePathStyle,
+});
+}
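For orientation, a sketch of how the public client could be consumed for downloads. `getSignedUrl` is the real helper from `@aws-sdk/s3-request-presigner`; the import path and the `presignDownload` name are assumptions for illustration only:

```ts
import { GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

import { bucketName, createPublicS3Client } from "./storage.config"; // assumed module path

// Hypothetical helper: sign a GET URL against the public endpoint so that
// browsers outside the container network can reach the object directly.
async function presignDownload(objectName: string): Promise<string | null> {
  const client = createPublicS3Client();
  if (!client) return null; // storage not configured

  return getSignedUrl(client, new GetObjectCommand({ Bucket: bucketName, Key: objectName }), {
    expiresIn: 3600, // seconds; the server reads this from PRESIGNED_URL_EXPIRATION
  });
}
```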
@@ -18,6 +18,7 @@ export function registerSwagger(app: any) {
 { name: "Auth Providers", description: "External authentication providers management" },
 { name: "User", description: "User management endpoints" },
 { name: "File", description: "File management endpoints" },
+{ name: "Folder", description: "Folder management endpoints" },
 { name: "Share", description: "File sharing endpoints" },
 { name: "Storage", description: "Storage management endpoints" },
 { name: "App", description: "Application configuration endpoints" },
@@ -1,9 +1,8 @@
 import { z } from "zod";

 const envSchema = z.object({
+// Storage configuration
 ENABLE_S3: z.union([z.literal("true"), z.literal("false")]).default("false"),
-ENCRYPTION_KEY: z.string().optional().default("palmr-default-encryption-key-2025"),
-DISABLE_FILESYSTEM_ENCRYPTION: z.union([z.literal("true"), z.literal("false")]).default("false"),
 S3_ENDPOINT: z.string().optional(),
 S3_PORT: z.string().optional(),
 S3_USE_SSL: z.string().optional(),
@@ -12,9 +11,18 @@ const envSchema = z.object({
 S3_REGION: z.string().optional(),
 S3_BUCKET_NAME: z.string().optional(),
 S3_FORCE_PATH_STYLE: z.union([z.literal("true"), z.literal("false")]).default("false"),
+S3_REJECT_UNAUTHORIZED: z.union([z.literal("true"), z.literal("false")]).default("true"),
+
+// Legacy encryption vars (kept for backward compatibility but not used with S3/Garage)
+ENCRYPTION_KEY: z.string().optional(),
+DISABLE_FILESYSTEM_ENCRYPTION: z.union([z.literal("true"), z.literal("false")]).default("true"),
+
+// Application configuration
 PRESIGNED_URL_EXPIRATION: z.string().optional().default("3600"),
 SECURE_SITE: z.union([z.literal("true"), z.literal("false")]).default("false"),
+STORAGE_URL: z.string().optional(), // Storage URL for internal storage presigned URLs (required when ENABLE_S3=false, e.g., https://syrg.palmr.com or http://192.168.1.100:9379)
 DATABASE_URL: z.string().optional().default("file:/app/server/prisma/palmr.db"),
 DEMO_MODE: z.union([z.literal("true"), z.literal("false")]).default("false"),
 CUSTOM_PATH: z.string().optional(),
 });

 export const env = envSchema.parse(process.env);
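Because the flags are `z.literal` unions rather than plain strings, `envSchema.parse(process.env)` fails at boot on a malformed value instead of misbehaving later. A standalone illustration of that behavior (hypothetical values, not from the diff):

```ts
import { z } from "zod";

const flag = z.union([z.literal("true"), z.literal("false")]);
const schema = z.object({
  ENABLE_S3: flag.default("false"),
  S3_REJECT_UNAUTHORIZED: flag.default("true"),
});

// Defaults applied: { ENABLE_S3: "false", S3_REJECT_UNAUTHORIZED: "true" }
console.log(schema.parse({}));

// Throws ZodError at startup: "yes" is neither the literal "true" nor "false"
schema.parse({ ENABLE_S3: "yes" });
```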
@@ -9,7 +9,7 @@ export class AppController {
 private logoService = new LogoService();
 private emailService = new EmailService();

-async getAppInfo(request: FastifyRequest, reply: FastifyReply) {
+async getAppInfo(_request: FastifyRequest, reply: FastifyReply) {
 try {
 const appInfo = await this.appService.getAppInfo();
 return reply.send(appInfo);
@@ -18,7 +18,16 @@ export class AppController {
 }
 }

-async getAllConfigs(request: FastifyRequest, reply: FastifyReply) {
+async getSystemInfo(_request: FastifyRequest, reply: FastifyReply) {
+try {
+const systemInfo = await this.appService.getSystemInfo();
+return reply.send(systemInfo);
+} catch (error: any) {
+return reply.status(400).send({ error: error.message });
+}
+}
+
+async getAllConfigs(_request: FastifyRequest, reply: FastifyReply) {
 try {
 const configs = await this.appService.getAllConfigs();
 return reply.send({ configs });
@@ -27,6 +36,15 @@ export class AppController {
 }
 }

+async getPublicConfigs(_request: FastifyRequest, reply: FastifyReply) {
+try {
+const configs = await this.appService.getPublicConfigs();
+return reply.send({ configs });
+} catch (error: any) {
+return reply.status(400).send({ error: error.message });
+}
+}
+
 async updateConfig(request: FastifyRequest, reply: FastifyReply) {
 try {
 const { key } = request.params as { key: string };
@@ -81,9 +99,8 @@ export class AppController {
 return reply.status(400).send({ error: "Only images are allowed" });
 }

-// Logo files should be small (max 5MB), so we can safely use streaming to buffer
 const chunks: Buffer[] = [];
-const maxLogoSize = 5 * 1024 * 1024; // 5MB
+const maxLogoSize = 5 * 1024 * 1024;
 let totalSize = 0;

 for await (const chunk of file.file) {
@@ -105,7 +122,7 @@ export class AppController {
 }
 }

-async removeLogo(request: FastifyRequest, reply: FastifyReply) {
+async removeLogo(_request: FastifyRequest, reply: FastifyReply) {
 try {
 await this.logoService.deleteLogo();
 return reply.send({ message: "Logo removed successfully" });
@@ -53,6 +53,26 @@ export async function appRoutes(app: FastifyInstance) {
 appController.getAppInfo.bind(appController)
 );

+app.get(
+"/app/system-info",
+{
+schema: {
+tags: ["App"],
+operationId: "getSystemInfo",
+summary: "Get system information",
+description: "Get system information including storage provider",
+response: {
+200: z.object({
+storageProvider: z.enum(["s3", "filesystem"]).describe("The active storage provider"),
+s3Enabled: z.boolean().describe("Whether S3 storage is enabled"),
+}),
+400: z.object({ error: z.string().describe("Error message") }),
+},
+},
+},
+appController.getSystemInfo.bind(appController)
+);
+
 app.patch(
 "/app/configs/:key",
 {
@@ -82,15 +102,34 @@ export async function appRoutes(app: FastifyInstance) {
 appController.updateConfig.bind(appController)
 );

+app.get(
+"/app/configs/public",
+{
+schema: {
+tags: ["App"],
+operationId: "getPublicConfigs",
+summary: "List public configurations",
+description: "List public configurations (excludes sensitive data like SMTP credentials)",
+response: {
+200: z.object({
+configs: z.array(ConfigResponseSchema),
+}),
+400: z.object({ error: z.string().describe("Error message") }),
+},
+},
+},
+appController.getPublicConfigs.bind(appController)
+);
+
 app.get(
 "/app/configs",
 {
-// preValidation: adminPreValidation,
+preValidation: adminPreValidation,
 schema: {
 tags: ["App"],
 operationId: "getAllConfigs",
 summary: "List all configurations",
-description: "List all configurations (admin only)",
+description: "List all configurations including sensitive data (admin only)",
 response: {
 200: z.object({
 configs: z.array(ConfigResponseSchema),
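A hedged sketch of consuming the two new unauthenticated endpoints from a client; the base URL is an assumption for illustration:

```ts
const API = "http://localhost:3333"; // assumed server address

async function probeServer() {
  // Decide upload UX based on the active storage provider
  const sys = await fetch(`${API}/app/system-info`).then((r) => r.json());
  console.log(sys.storageProvider, sys.s3Enabled);

  // Public configs exclude SMTP credentials and other sensitive keys,
  // so this is safe to call before login
  const { configs } = await fetch(`${API}/app/configs/public`).then((r) => r.json());
  console.log(configs.length, "public configs");
}
```

Note the contrast with `/app/configs`, which now enforces `adminPreValidation` instead of leaving it commented out.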
@@ -20,6 +20,13 @@ export class AppService {
 };
 }

+async getSystemInfo() {
+return {
+storageProvider: "s3",
+s3Enabled: true,
+};
+}
+
 async getAllConfigs() {
 return prisma.appConfig.findMany({
 where: {
@@ -33,11 +40,46 @@ export class AppService {
 });
 }

+async getPublicConfigs() {
+const sensitiveKeys = [
+"smtpHost",
+"smtpPort",
+"smtpUser",
+"smtpPass",
+"smtpSecure",
+"smtpNoAuth",
+"smtpTrustSelfSigned",
+"jwtSecret",
+];
+
+return prisma.appConfig.findMany({
+where: {
+key: {
+notIn: sensitiveKeys,
+},
+},
+orderBy: {
+group: "asc",
+},
+});
+}
+
 async updateConfig(key: string, value: string) {
 if (key === "jwtSecret") {
 throw new Error("JWT Secret cannot be updated through this endpoint");
 }

+if (key === "passwordAuthEnabled") {
+if (value === "false") {
+const canDisable = await this.configService.validatePasswordAuthDisable();
+if (!canDisable) {
+throw new Error(
+"Password authentication cannot be disabled. At least one authentication provider must be active."
+);
+}
+}
+}
+
 const config = await prisma.appConfig.findUnique({
 where: { key },
 });
@@ -56,6 +98,15 @@ export class AppService {
 if (updates.some((update) => update.key === "jwtSecret")) {
 throw new Error("JWT Secret cannot be updated through this endpoint");
 }
+const passwordAuthUpdate = updates.find((update) => update.key === "passwordAuthEnabled");
+if (passwordAuthUpdate && passwordAuthUpdate.value === "false") {
+const canDisable = await this.configService.validatePasswordAuthDisable();
+if (!canDisable) {
+throw new Error(
+"Password authentication cannot be disabled. At least one authentication provider must be active."
+);
+}
+}

 const keys = updates.map((update) => update.key);
 const existingConfigs = await prisma.appConfig.findMany({
@@ -1,5 +1,6 @@
 import { FastifyReply, FastifyRequest } from "fastify";

+import { ConfigService } from "../config/service";
 import { UpdateAuthProviderSchema } from "./dto";
 import { AuthProvidersService } from "./service";
 import {
@@ -39,9 +40,11 @@ const ERROR_MESSAGES = {

 export class AuthProvidersController {
 private authProvidersService: AuthProvidersService;
+private configService: ConfigService;

 constructor() {
 this.authProvidersService = new AuthProvidersService();
+this.configService = new ConfigService();
 }

 private buildRequestContext(request: FastifyRequest): RequestContext {
@@ -223,13 +226,24 @@ export class AuthProvidersController {

 try {
 const { id } = request.params;
-const data = request.body;
+const data = request.body as any;

 const existingProvider = await this.authProvidersService.getProviderById(id);
 if (!existingProvider) {
 return this.sendErrorResponse(reply, 404, ERROR_MESSAGES.PROVIDER_NOT_FOUND);
 }

+if (data.enabled === false && existingProvider.enabled === true) {
+const canDisable = await this.configService.validateAllProvidersDisable();
+if (!canDisable) {
+return this.sendErrorResponse(
+reply,
+400,
+"Cannot disable the last authentication provider when password authentication is disabled"
+);
+}
+}
+
 const isOfficial = this.authProvidersService.isOfficialProvider(existingProvider.name);

 if (isOfficial) {
@@ -300,6 +314,17 @@ export class AuthProvidersController {
 return this.sendErrorResponse(reply, 400, ERROR_MESSAGES.OFFICIAL_CANNOT_DELETE);
 }

+if (provider.enabled) {
+const canDisable = await this.configService.validateAllProvidersDisable();
+if (!canDisable) {
+return this.sendErrorResponse(
+reply,
+400,
+"Cannot delete the last authentication provider when password authentication is disabled"
+);
+}
+}
+
 await this.authProvidersService.deleteProvider(id);
 return this.sendSuccessResponse(reply, undefined, "Provider deleted successfully");
 } catch (error) {
@@ -617,6 +617,11 @@ export class AuthProvidersService {
 return await this.linkProviderToExistingUser(existingUser, provider.id, String(externalId), userInfo);
 }

+// Check if auto-registration is disabled
+if (provider.autoRegister === false) {
+throw new Error(`User registration via ${provider.displayName || provider.name} is disabled`);
+}
+
 return await this.createNewUserWithProvider(userInfo, provider.id, String(externalId));
 }
@@ -1,6 +1,7 @@
 import { FastifyReply, FastifyRequest } from "fastify";

 import { env } from "../../env";
+import { ConfigService } from "../config/service";
 import {
 CompleteTwoFactorLoginSchema,
 createResetPasswordSchema,
@@ -11,6 +12,7 @@ import { AuthService } from "./service";

 export class AuthController {
 private authService = new AuthService();
+private configService = new ConfigService();

 private getClientInfo(request: FastifyRequest) {
 const realIP = request.headers["x-real-ip"] as string;
@@ -111,14 +113,21 @@ export class AuthController {

 async getCurrentUser(request: FastifyRequest, reply: FastifyReply) {
 try {
-const userId = (request as any).user?.userId;
+let userId: string | null = null;
+try {
+await request.jwtVerify();
+userId = (request as any).user?.userId;
+} catch (err) {
+return reply.send({ user: null });
+}

 if (!userId) {
-return reply.status(401).send({ error: "Unauthorized: a valid token is required to access this resource." });
+return reply.send({ user: null });
 }

 const user = await this.authService.getUserById(userId);
 if (!user) {
-return reply.status(404).send({ error: "User not found" });
+return reply.send({ user: null });
 }

 return reply.send({ user });
@@ -169,4 +178,15 @@ export class AuthController {
 return reply.status(400).send({ error: error.message });
 }
 }
+
+async getAuthConfig(request: FastifyRequest, reply: FastifyReply) {
+try {
+const passwordAuthEnabled = await this.configService.getValue("passwordAuthEnabled");
+return reply.send({
+passwordAuthEnabled: passwordAuthEnabled === "true",
+});
+} catch (error: any) {
+return reply.status(400).send({ error: error.message });
+}
+}
 }
@@ -153,33 +153,29 @@ export async function authRoutes(app: FastifyInstance) {
 tags: ["Authentication"],
 operationId: "getCurrentUser",
 summary: "Get Current User",
-description: "Returns the current authenticated user's information",
+description: "Returns the current authenticated user's information or null if not authenticated",
 response: {
-200: z.object({
-user: z.object({
-id: z.string().describe("User ID"),
-firstName: z.string().describe("User first name"),
-lastName: z.string().describe("User last name"),
-username: z.string().describe("User username"),
-email: z.string().email().describe("User email"),
-image: z.string().nullable().describe("User profile image URL"),
-isAdmin: z.boolean().describe("User is admin"),
-isActive: z.boolean().describe("User is active"),
-createdAt: z.date().describe("User creation date"),
-updatedAt: z.date().describe("User last update date"),
-}),
-}),
-401: z.object({ error: z.string().describe("Error message") }),
+200: z.union([
+z.object({
+user: z.object({
+id: z.string().describe("User ID"),
+firstName: z.string().describe("User first name"),
+lastName: z.string().describe("User last name"),
+username: z.string().describe("User username"),
+email: z.string().email().describe("User email"),
+image: z.string().nullable().describe("User profile image URL"),
+isAdmin: z.boolean().describe("User is admin"),
+isActive: z.boolean().describe("User is active"),
+createdAt: z.date().describe("User creation date"),
+updatedAt: z.date().describe("User last update date"),
+}),
+}),
+z.object({
+user: z.null().describe("No user when not authenticated"),
+}),
+]),
 },
 },
-preValidation: async (request: FastifyRequest, reply: FastifyReply) => {
-try {
-await request.jwtVerify();
-} catch (err) {
-console.error(err);
-reply.status(401).send({ error: "Unauthorized: a valid token is required to access this resource." });
-}
-},
 },
 authController.getCurrentUser.bind(authController)
 );
@@ -280,4 +276,23 @@ export async function authRoutes(app: FastifyInstance) {
 },
 authController.removeAllTrustedDevices.bind(authController)
 );

+app.get(
+"/auth/config",
+{
+schema: {
+tags: ["Authentication"],
+operationId: "getAuthConfig",
+summary: "Get Authentication Configuration",
+description: "Get authentication configuration settings",
+response: {
+200: z.object({
+passwordAuthEnabled: z.boolean().describe("Whether password authentication is enabled"),
+}),
+400: z.object({ error: z.string().describe("Error message") }),
+},
+},
+},
+authController.getAuthConfig.bind(authController)
+);
 }
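A small illustrative client-side check against the new endpoint (the base URL is assumed): the login page can hide the username/password form entirely when password auth is turned off server-side.

```ts
// Hypothetical helper, not part of the diff.
async function shouldShowPasswordForm(): Promise<boolean> {
  const res = await fetch("http://localhost:3333/auth/config"); // assumed address
  const { passwordAuthEnabled } = (await res.json()) as { passwordAuthEnabled: boolean };
  return passwordAuthEnabled;
}
```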
@@ -18,6 +18,11 @@ export class AuthService {
 private trustedDeviceService = new TrustedDeviceService();

 async login(data: LoginInput, userAgent?: string, ipAddress?: string) {
+const passwordAuthEnabled = await this.configService.getValue("passwordAuthEnabled");
+if (passwordAuthEnabled === "false") {
+throw new Error("Password authentication is disabled. Please use an external authentication provider.");
+}
+
 const user = await this.userRepository.findUserByEmailOrUsername(data.emailOrUsername);
 if (!user) {
 throw new Error("Invalid credentials");
@@ -146,6 +151,11 @@ export class AuthService {
 }

 async requestPasswordReset(email: string, origin: string) {
+const passwordAuthEnabled = await this.configService.getValue("passwordAuthEnabled");
+if (passwordAuthEnabled === "false") {
+throw new Error("Password authentication is disabled. Password reset is not available.");
+}
+
 const user = await this.userRepository.findUserByEmail(email);
 if (!user) {
 return;
@@ -171,6 +181,11 @@ export class AuthService {
 }

 async resetPassword(token: string, newPassword: string) {
+const passwordAuthEnabled = await this.configService.getValue("passwordAuthEnabled");
+if (passwordAuthEnabled === "false") {
+throw new Error("Password authentication is disabled. Password reset is not available.");
+}
+
 const resetRequest = await prisma.passwordReset.findFirst({
 where: {
 token,
@@ -13,6 +13,26 @@ export class ConfigService {
 return config.value;
 }

+async setValue(key: string, value: string): Promise<void> {
+await prisma.appConfig.update({
+where: { key },
+data: { value },
+});
+}
+
+async validatePasswordAuthDisable(): Promise<boolean> {
+const enabledProviders = await prisma.authProvider.findMany({
+where: { enabled: true },
+});
+
+return enabledProviders.length > 0;
+}
+
+async validateAllProvidersDisable(): Promise<boolean> {
+const passwordAuthEnabled = await this.getValue("passwordAuthEnabled");
+return passwordAuthEnabled === "true";
+}
+
 async getGroupConfigs(group: string) {
 const configs = await prisma.appConfig.findMany({
 where: { group },
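Taken together, `validatePasswordAuthDisable` and `validateAllProvidersDisable` enforce one invariant: at least one login path must always remain enabled. A hypothetical wiring sketch of that invariant (names from the diff, the helper itself is illustrative):

```ts
const configService = new ConfigService();

// Illustrative only: which direction of "turning something off" is being checked.
async function canDisable(target: "password" | "provider"): Promise<boolean> {
  if (target === "password") {
    // Password auth may go only if some external provider is still enabled
    return configService.validatePasswordAuthDisable();
  }
  // The last enabled provider may go only if password auth is still on
  return configService.validateAllProvidersDisable();
}
```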
@@ -167,7 +167,7 @@ export class EmailService {
 });
 }

-async sendShareNotification(to: string, shareLink: string, shareName?: string) {
+async sendShareNotification(to: string, shareLink: string, shareName?: string, senderName?: string) {
 const transporter = await this.createTransporter();
 if (!transporter) {
 throw new Error("SMTP is not enabled");
@@ -178,19 +178,151 @@ export class EmailService {
 const appName = await this.configService.getValue("appName");

 const shareTitle = shareName || "Files";
+const sender = senderName || "Someone";

 await transporter.sendMail({
 from: `"${fromName}" <${fromEmail}>`,
 to,
 subject: `${appName} - ${shareTitle} shared with you`,
 html: `
-<h1>${appName} - Shared Files</h1>
-<p>Someone has shared "${shareTitle}" with you.</p>
-<p>Click the link below to access the shared files:</p>
-<a href="${shareLink}">
-Access Shared Files
-</a>
-<p>Note: This share may have an expiration date or view limit.</p>
+<!DOCTYPE html>
+<html lang="en">
+<head>
+<meta charset="UTF-8">
+<meta name="viewport" content="width=device-width, initial-scale=1.0">
+<title>${appName} - Shared Files</title>
+</head>
+<body style="margin: 0; padding: 0; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif; background-color: #f5f5f5; color: #333333;">
+<div style="max-width: 600px; margin: 0 auto; background-color: #ffffff; box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1); overflow: hidden; margin-top: 40px; margin-bottom: 40px;">
+<!-- Header -->
+<div style="background-color: #22B14C; padding: 30px 20px; text-align: center;">
+<h1 style="margin: 0; color: #ffffff; font-size: 28px; font-weight: 600; letter-spacing: -0.5px;">${appName}</h1>
+<p style="margin: 2px 0 0 0; color: #ffffff; font-size: 16px; opacity: 0.9;">Shared Files</p>
+</div>
+
+<!-- Content -->
+<div style="padding: 40px 30px;">
+<div style="text-align: center; margin-bottom: 32px;">
+<h2 style="margin: 0 0 12px 0; color: #1f2937; font-size: 24px; font-weight: 600;">Files Shared With You</h2>
+<p style="margin: 0; color: #6b7280; font-size: 16px; line-height: 1.6;">
+<strong style="color: #374151;">${sender}</strong> has shared <strong style="color: #374151;">"${shareTitle}"</strong> with you.
+</p>
+</div>
+
+<!-- CTA Button -->
+<div style="text-align: center; margin: 32px 0;">
+<a href="${shareLink}" style="display: inline-block; background-color: #22B14C; color: #ffffff; text-decoration: none; padding: 12px 24px; font-weight: 600; font-size: 16px; border: 2px solid #22B14C; border-radius: 8px; transition: all 0.3s ease;">
+Access Shared Files
+</a>
+</div>
+
+<!-- Info Box -->
+<div style="background-color: #f9fafb; border-left: 4px solid #22B14C; padding: 16px 20px; margin-top: 32px;">
+<p style="margin: 0; color: #4b5563; font-size: 14px; line-height: 1.5;">
+<strong>Important:</strong> This share may have an expiration date or view limit. Access it as soon as possible to ensure availability.
+</p>
+</div>
+</div>
+
+<!-- Footer -->
+<div style="background-color: #f9fafb; padding: 24px 30px; text-align: center; border-top: 1px solid #e5e7eb;">
+<p style="margin: 0; color: #6b7280; font-size: 14px;">
+This email was sent by <strong>${appName}</strong>
+</p>
+<p style="margin: 8px 0 0 0; color: #9ca3af; font-size: 12px;">
+If you didn't expect this email, you can safely ignore it.
+</p>
+<p style="margin: 4px 0 0 0; color: #9ca3af; font-size: 10px;">
+Powered by <a href="https://kyantech.com.br" style="color: #9ca3af; text-decoration: none;">Kyantech Solutions</a>
+</p>
+</div>
+</div>
+</body>
+</html>
 `,
 });
 }
+
+async sendReverseShareBatchFileNotification(
+recipientEmail: string,
+reverseShareName: string,
+fileCount: number,
+fileList: string,
+uploaderName: string
+) {
+const transporter = await this.createTransporter();
+if (!transporter) {
+throw new Error("SMTP is not enabled");
+}
+
+const fromName = await this.configService.getValue("smtpFromName");
+const fromEmail = await this.configService.getValue("smtpFromEmail");
+const appName = await this.configService.getValue("appName");
+
+await transporter.sendMail({
+from: `"${fromName}" <${fromEmail}>`,
+to: recipientEmail,
+subject: `${appName} - ${fileCount} file${fileCount > 1 ? "s" : ""} uploaded to "${reverseShareName}"`,
+html: `
+<!DOCTYPE html>
+<html lang="en">
+<head>
+<meta charset="UTF-8">
+<meta name="viewport" content="width=device-width, initial-scale=1.0">
+<title>${appName} - File Upload Notification</title>
+</head>
+<body style="margin: 0; padding: 0; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif; background-color: #f5f5f5; color: #333333;">
+<div style="max-width: 600px; margin: 0 auto; background-color: #ffffff; box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1); overflow: hidden; margin-top: 40px; margin-bottom: 40px;">
+<!-- Header -->
+<div style="background-color: #22B14C; padding: 30px 20px; text-align: center;">
+<h1 style="margin: 0; color: #ffffff; font-size: 28px; font-weight: 600; letter-spacing: -0.5px;">${appName}</h1>
+<p style="margin: 2px 0 0 0; color: #ffffff; font-size: 16px; opacity: 0.9;">File Upload Notification</p>
+</div>
+
+<!-- Content -->
+<div style="padding: 40px 30px;">
+<div style="text-align: center; margin-bottom: 32px;">
+<h2 style="margin: 0 0 12px 0; color: #1f2937; font-size: 24px; font-weight: 600;">New File Uploaded</h2>
+<p style="margin: 0; color: #6b7280; font-size: 16px; line-height: 1.6;">
+<strong style="color: #374151;">${uploaderName}</strong> has uploaded <strong style="color: #374151;">${fileCount} file${fileCount > 1 ? "s" : ""}</strong> to your reverse share <strong style="color: #374151;">"${reverseShareName}"</strong>.
+</p>
+</div>
+
+<!-- File List -->
+<div style="background-color: #f9fafb; border-radius: 8px; padding: 16px; margin: 32px 0; border-left: 4px solid #22B14C;">
+<p style="margin: 0 0 8px 0; color: #374151; font-size: 14px;"><strong>Files (${fileCount}):</strong></p>
+<ul style="margin: 0; padding-left: 20px; color: #6b7280; font-size: 14px; line-height: 1.5;">
+${fileList
+.split(", ")
+.map((file) => `<li style="margin: 4px 0;">${file}</li>`)
+.join("")}
+</ul>
+</div>
+
+<!-- Info Text -->
+<div style="text-align: center; margin-top: 32px;">
+<p style="margin: 0; color: #9ca3af; font-size: 12px;">
+You can now access and manage these files through your dashboard.
+</p>
+</div>
+
+</div>
+
+<!-- Footer -->
+<div style="background-color: #f9fafb; padding: 24px 30px; text-align: center; border-top: 1px solid #e5e7eb;">
+<p style="margin: 0; color: #6b7280; font-size: 14px;">
+This email was sent by <strong>${appName}</strong>
+</p>
+<p style="margin: 8px 0 0 0; color: #9ca3af; font-size: 12px;">
+If you didn't expect this email, you can safely ignore it.
+</p>
+<p style="margin: 4px 0 0 0; color: #9ca3af; font-size: 10px;">
+Powered by <a href="https://kyantech.com.br" style="color: #9ca3af; text-decoration: none;">Kyantech Solutions</a>
+</p>
+</div>
+</div>
+</body>
+</html>
+`,
+});
+}
@@ -1,40 +1,57 @@
|
||||
import bcrypt from "bcryptjs";
|
||||
import { FastifyReply, FastifyRequest } from "fastify";
|
||||
|
||||
import { env } from "../../env";
|
||||
import { prisma } from "../../shared/prisma";
|
||||
import {
|
||||
generateUniqueFileName,
|
||||
generateUniqueFileNameForRename,
|
||||
parseFileName,
|
||||
} from "../../utils/file-name-generator";
|
||||
import { getContentType } from "../../utils/mime-types";
|
||||
import { ConfigService } from "../config/service";
|
||||
import { CheckFileInput, CheckFileSchema, RegisterFileInput, RegisterFileSchema, UpdateFileSchema } from "./dto";
|
||||
import {
|
||||
CheckFileInput,
|
||||
CheckFileSchema,
|
||||
ListFilesInput,
|
||||
ListFilesSchema,
|
||||
MoveFileInput,
|
||||
MoveFileSchema,
|
||||
RegisterFileInput,
|
||||
RegisterFileSchema,
|
||||
UpdateFileInput,
|
||||
UpdateFileSchema,
|
||||
} from "./dto";
|
||||
import { FileService } from "./service";
|
||||
|
||||
export class FileController {
|
||||
private fileService = new FileService();
|
||||
private configService = new ConfigService();
|
||||
|
||||
async getPresignedUrl(request: FastifyRequest, reply: FastifyReply) {
|
||||
try {
|
||||
const { filename, extension } = request.query as {
|
||||
filename?: string;
|
||||
extension?: string;
|
||||
};
|
||||
if (!filename || !extension) {
|
||||
return reply.status(400).send({
|
||||
error: "The 'filename' and 'extension' parameters are required.",
|
||||
});
|
||||
}
|
||||
async getPresignedUrl(request: FastifyRequest, reply: FastifyReply): Promise<void> {
|
||||
const { filename, extension } = request.query as { filename: string; extension: string };
|
||||
|
||||
if (!filename || !extension) {
|
||||
return reply.status(400).send({ error: "filename and extension are required" });
|
||||
}
|
||||
|
||||
try {
|
||||
// JWT already verified by preValidation in routes.ts
|
||||
const userId = (request as any).user?.userId;
|
||||
if (!userId) {
|
||||
return reply.status(401).send({ error: "Unauthorized: a valid token is required to access this resource." });
|
||||
return reply.status(401).send({ error: "Unauthorized" });
|
||||
}
|
||||
|
||||
const objectName = `${userId}/${Date.now()}-${filename}.${extension}`;
|
||||
const expires = 3600;
|
||||
// Generate unique object name
|
||||
const objectName = `${userId}/${Date.now()}-${Math.random().toString(36).substring(7)}-${filename}.${extension}`;
|
||||
const expires = parseInt(env.PRESIGNED_URL_EXPIRATION);
|
||||
|
||||
const url = await this.fileService.getPresignedPutUrl(objectName, expires);
|
||||
return reply.send({ url, objectName });
|
||||
|
||||
return reply.status(200).send({ url, objectName });
|
||||
} catch (error) {
|
||||
console.error("Error in getPresignedUrl:", error);
|
||||
return reply.status(500).send({ error: "Internal server error." });
|
||||
return reply.status(500).send({ error: "Internal server error" });
|
||||
}
|
||||
}
|
||||
|
||||
@@ -56,17 +73,7 @@ export class FileController {
});
}

// Check if DEMO_MODE is enabled
const isDemoMode = env.DEMO_MODE === "true";

let maxTotalStorage: bigint;
if (isDemoMode) {
// In demo mode, limit all users to 200MB
maxTotalStorage = BigInt(200 * 1024 * 1024); // 200MB in bytes
} else {
// Normal behavior - use maxTotalStoragePerUser configuration
maxTotalStorage = BigInt(await this.configService.getValue("maxTotalStoragePerUser"));
}
const maxTotalStorage = BigInt(await this.configService.getValue("maxTotalStoragePerUser"));

const userFiles = await prisma.file.findMany({
where: { userId },
@@ -82,14 +89,28 @@ export class FileController {
});
}

if (input.folderId) {
const folder = await prisma.folder.findFirst({
where: { id: input.folderId, userId },
});
if (!folder) {
return reply.status(400).send({ error: "Folder not found or access denied." });
}
}

// Parse the filename and generate a unique name if there's a duplicate
const { baseName, extension } = parseFileName(input.name);
const uniqueName = await generateUniqueFileName(baseName, extension, userId, input.folderId);

const fileRecord = await prisma.file.create({
data: {
name: input.name,
name: uniqueName,
description: input.description,
extension: input.extension,
size: BigInt(input.size),
objectName: input.objectName,
userId,
folderId: input.folderId,
},
});

@@ -101,6 +122,7 @@ export class FileController {
size: fileRecord.size.toString(),
objectName: fileRecord.objectName,
userId: fileRecord.userId,
folderId: fileRecord.folderId,
createdAt: fileRecord.createdAt,
updatedAt: fileRecord.updatedAt,
};
@@ -138,17 +160,7 @@ export class FileController {
});
}

// Check if DEMO_MODE is enabled
const isDemoMode = env.DEMO_MODE === "true";

let maxTotalStorage: bigint;
if (isDemoMode) {
// In demo mode, limit all users to 200MB
maxTotalStorage = BigInt(200 * 1024 * 1024); // 200MB in bytes
} else {
// Normal behavior - use maxTotalStoragePerUser configuration
maxTotalStorage = BigInt(await this.configService.getValue("maxTotalStoragePerUser"));
}
const maxTotalStorage = BigInt(await this.configService.getValue("maxTotalStoragePerUser"));

const userFiles = await prisma.file.findMany({
where: { userId },
@@ -166,9 +178,20 @@ export class FileController {
});
}

return reply.status(201).send({
// Check for duplicate filename and provide the suggested unique name
const { baseName, extension } = parseFileName(input.name);
const uniqueName = await generateUniqueFileName(baseName, extension, userId, input.folderId);

// Include suggestedName in response if the name was changed
const response: any = {
message: "File checks succeeded.",
});
};

if (uniqueName !== input.name) {
response.suggestedName = uniqueName;
}

return reply.status(201).send(response);
} catch (error: any) {
console.error("Error in checkFile:", error);
return reply.status(400).send({ error: error.message });
@@ -177,10 +200,10 @@ export class FileController {

async getDownloadUrl(request: FastifyRequest, reply: FastifyReply) {
try {
const { objectName: encodedObjectName } = request.params as {
const { objectName, password } = request.query as {
objectName: string;
password?: string;
};
const objectName = decodeURIComponent(encodedObjectName);

if (!objectName) {
return reply.status(400).send({ error: "The 'objectName' parameter is required." });
@@ -191,8 +214,53 @@ export class FileController {
if (!fileRecord) {
return reply.status(404).send({ error: "File not found." });
}

let hasAccess = false;

const shares = await prisma.share.findMany({
where: {
files: {
some: {
id: fileRecord.id,
},
},
},
include: {
security: true,
},
});

for (const share of shares) {
if (!share.security.password) {
hasAccess = true;
break;
} else if (password) {
const isPasswordValid = await bcrypt.compare(password, share.security.password);
if (isPasswordValid) {
hasAccess = true;
break;
}
}
}

if (!hasAccess) {
try {
await request.jwtVerify();
const userId = (request as any).user?.userId;
if (userId && fileRecord.userId === userId) {
hasAccess = true;
}
} catch (err) {}
}

if (!hasAccess) {
return reply.status(401).send({ error: "Unauthorized access to file." });
}

const fileName = fileRecord.name;
const expires = 3600;
const expires = parseInt(env.PRESIGNED_URL_EXPIRATION);

// Always use presigned URLs (works for both internal and external storage)
const url = await this.fileService.getPresignedGetUrl(objectName, expires, fileName);
return reply.send({ url, expiresIn: expires });
} catch (error) {
@@ -201,6 +269,114 @@ export class FileController {
}
}

async downloadFile(request: FastifyRequest, reply: FastifyReply) {
try {
const { objectName, password } = request.query as {
objectName: string;
password?: string;
};

if (!objectName) {
return reply.status(400).send({ error: "The 'objectName' parameter is required." });
}

const fileRecord = await prisma.file.findFirst({ where: { objectName } });

if (!fileRecord) {
if (objectName.startsWith("reverse-shares/")) {
const reverseShareFile = await prisma.reverseShareFile.findFirst({
where: { objectName },
include: {
reverseShare: true,
},
});

if (!reverseShareFile) {
return reply.status(404).send({ error: "File not found." });
}

try {
await request.jwtVerify();
const userId = (request as any).user?.userId;

if (!userId || reverseShareFile.reverseShare.creatorId !== userId) {
return reply.status(401).send({ error: "Unauthorized access to file." });
}
} catch (err) {
return reply.status(401).send({ error: "Unauthorized access to file." });
}

// Stream from S3/storage system
const stream = await this.fileService.getObjectStream(objectName);
const contentType = getContentType(reverseShareFile.name);
const fileName = reverseShareFile.name;

reply.header("Content-Type", contentType);
reply.header("Content-Disposition", `inline; filename="${encodeURIComponent(fileName)}"`);

return reply.send(stream);
}

return reply.status(404).send({ error: "File not found." });
}

let hasAccess = false;

const shares = await prisma.share.findMany({
where: {
files: {
some: {
id: fileRecord.id,
},
},
},
include: {
security: true,
},
});

for (const share of shares) {
if (!share.security.password) {
hasAccess = true;
break;
} else if (password) {
const isPasswordValid = await bcrypt.compare(password, share.security.password);
if (isPasswordValid) {
hasAccess = true;
break;
}
}
}

if (!hasAccess) {
try {
await request.jwtVerify();
const userId = (request as any).user?.userId;
if (userId && fileRecord.userId === userId) {
hasAccess = true;
}
} catch (err) {}
}

if (!hasAccess) {
return reply.status(401).send({ error: "Unauthorized access to file." });
}

// Stream from S3/MinIO
const stream = await this.fileService.getObjectStream(objectName);
const contentType = getContentType(fileRecord.name);
const fileName = fileRecord.name;

reply.header("Content-Type", contentType);
reply.header("Content-Disposition", `inline; filename="${encodeURIComponent(fileName)}"`);

return reply.send(stream);
} catch (error) {
console.error("Error in downloadFile:", error);
return reply.status(500).send({ error: "Internal server error." });
}
}

async listFiles(request: FastifyRequest, reply: FastifyReply) {
try {
await request.jwtVerify();
@@ -209,18 +385,43 @@ export class FileController {
return reply.status(401).send({ error: "Unauthorized: a valid token is required to access this resource." });
}

const files = await prisma.file.findMany({
where: { userId },
});
const input: ListFilesInput = ListFilesSchema.parse(request.query);
const { folderId, recursive: recursiveStr } = input;
const recursive = recursiveStr === "false" ? false : true;

const filesResponse = files.map((file) => ({
let files: any[];

let targetFolderId: string | null;
if (folderId === "null" || folderId === "" || !folderId) {
targetFolderId = null; // Root folder
} else {
targetFolderId = folderId;
}

if (recursive) {
if (targetFolderId === null) {
files = await this.getAllUserFilesRecursively(userId);
} else {
const { FolderService } = await import("../folder/service.js");
const folderService = new FolderService();
files = await folderService.getAllFilesInFolder(targetFolderId, userId);
}
} else {
files = await prisma.file.findMany({
where: { userId, folderId: targetFolderId },
});
}

const filesResponse = files.map((file: any) => ({
id: file.id,
name: file.name,
description: file.description,
extension: file.extension,
size: file.size.toString(),
size: typeof file.size === "bigint" ? file.size.toString() : file.size,
objectName: file.objectName,
userId: file.userId,
folderId: file.folderId,
relativePath: file.relativePath || null,
createdAt: file.createdAt,
updatedAt: file.updatedAt,
}));
@@ -285,6 +486,13 @@ export class FileController {
return reply.status(403).send({ error: "Access denied." });
}

// If renaming the file, check for duplicates and auto-rename if necessary
if (updateData.name && updateData.name !== fileRecord.name) {
const { baseName, extension } = parseFileName(updateData.name);
const uniqueName = await generateUniqueFileNameForRename(baseName, extension, userId, fileRecord.folderId, id);
updateData.name = uniqueName;
}

const updatedFile = await prisma.file.update({
where: { id },
data: updateData,
@@ -298,6 +506,7 @@ export class FileController {
size: updatedFile.size.toString(),
objectName: updatedFile.objectName,
userId: updatedFile.userId,
folderId: updatedFile.folderId,
createdAt: updatedFile.createdAt,
updatedAt: updatedFile.updatedAt,
};
@@ -311,4 +520,248 @@ export class FileController {
return reply.status(400).send({ error: error.message });
}
}

async moveFile(request: FastifyRequest, reply: FastifyReply) {
try {
await request.jwtVerify();
const userId = (request as any).user?.userId;

if (!userId) {
return reply.status(401).send({ error: "Unauthorized: a valid token is required to access this resource." });
}

const { id } = request.params as { id: string };
const input: MoveFileInput = MoveFileSchema.parse(request.body);

const existingFile = await prisma.file.findFirst({
where: { id, userId },
});

if (!existingFile) {
return reply.status(404).send({ error: "File not found." });
}

if (input.folderId) {
const targetFolder = await prisma.folder.findFirst({
where: { id: input.folderId, userId },
});
if (!targetFolder) {
return reply.status(400).send({ error: "Target folder not found." });
}
}

const updatedFile = await prisma.file.update({
where: { id },
data: { folderId: input.folderId },
});

const fileResponse = {
id: updatedFile.id,
name: updatedFile.name,
description: updatedFile.description,
extension: updatedFile.extension,
size: updatedFile.size.toString(),
objectName: updatedFile.objectName,
userId: updatedFile.userId,
folderId: updatedFile.folderId,
createdAt: updatedFile.createdAt,
updatedAt: updatedFile.updatedAt,
};

return reply.send({
file: fileResponse,
message: "File moved successfully.",
});
} catch (error: any) {
console.error("Error moving file:", error);
return reply.status(400).send({ error: error.message });
}
}

async embedFile(request: FastifyRequest, reply: FastifyReply) {
try {
const { id } = request.params as { id: string };

if (!id) {
return reply.status(400).send({ error: "File ID is required." });
}

const fileRecord = await prisma.file.findUnique({ where: { id } });

if (!fileRecord) {
return reply.status(404).send({ error: "File not found." });
}

const extension = fileRecord.extension.toLowerCase();
const imageExts = ["jpg", "jpeg", "png", "gif", "webp", "svg", "bmp", "ico", "avif"];
const videoExts = ["mp4", "webm", "ogg", "mov", "avi", "mkv", "flv", "wmv"];
const audioExts = ["mp3", "wav", "ogg", "m4a", "flac", "aac", "wma"];

const isMedia = imageExts.includes(extension) || videoExts.includes(extension) || audioExts.includes(extension);

if (!isMedia) {
return reply.status(403).send({
error: "Embed is only allowed for images, videos, and audio files.",
});
}

// Stream from S3/MinIO
const stream = await this.fileService.getObjectStream(fileRecord.objectName);
const contentType = getContentType(fileRecord.name);
const fileName = fileRecord.name;

reply.header("Content-Type", contentType);
reply.header("Content-Disposition", `inline; filename="${encodeURIComponent(fileName)}"`);
reply.header("Cache-Control", "public, max-age=31536000"); // Cache por 1 ano
|
||||

return reply.send(stream);
} catch (error) {
console.error("Error in embedFile:", error);
return reply.status(500).send({ error: "Internal server error." });
}
}

private async getAllUserFilesRecursively(userId: string): Promise<any[]> {
const rootFiles = await prisma.file.findMany({
where: { userId, folderId: null },
});

const rootFolders = await prisma.folder.findMany({
where: { userId, parentId: null },
select: { id: true },
});

let allFiles = [...rootFiles];

if (rootFolders.length > 0) {
const { FolderService } = await import("../folder/service.js");
const folderService = new FolderService();

for (const folder of rootFolders) {
const folderFiles = await folderService.getAllFilesInFolder(folder.id, userId);
allFiles = [...allFiles, ...folderFiles];
}
}

return allFiles;
}

// Multipart upload endpoints
async createMultipartUpload(request: FastifyRequest, reply: FastifyReply): Promise<void> {
try {
const userId = (request as any).user?.userId;
if (!userId) {
return reply.status(401).send({ error: "Unauthorized" });
}

const { filename, extension } = request.body as { filename: string; extension: string };

if (!filename || !extension) {
return reply.status(400).send({ error: "filename and extension are required" });
}

// Generate unique object name (same pattern as simple upload)
const objectName = `${userId}/${Date.now()}-${Math.random().toString(36).substring(7)}-${filename}.${extension}`;

const uploadId = await this.fileService.createMultipartUpload(objectName);

return reply.status(200).send({
uploadId,
objectName,
message: "Multipart upload initialized",
});
} catch (error) {
console.error("[Multipart] Error creating multipart upload:", error);
return reply.status(500).send({ error: "Failed to create multipart upload" });
}
}

async getMultipartPartUrl(request: FastifyRequest, reply: FastifyReply): Promise<void> {
try {
const userId = (request as any).user?.userId;
if (!userId) {
return reply.status(401).send({ error: "Unauthorized" });
}

const { uploadId, objectName, partNumber } = request.query as {
uploadId: string;
objectName: string;
partNumber: string;
};

if (!uploadId || !objectName || !partNumber) {
return reply.status(400).send({ error: "uploadId, objectName, and partNumber are required" });
}

const partNum = parseInt(partNumber);
if (isNaN(partNum) || partNum < 1 || partNum > 10000) {
return reply.status(400).send({ error: "partNumber must be between 1 and 10000" });
}

const expires = parseInt(env.PRESIGNED_URL_EXPIRATION);

const url = await this.fileService.getPresignedPartUrl(objectName, uploadId, partNum, expires);

return reply.status(200).send({ url });
} catch (error) {
console.error("[Multipart] Error getting part URL:", error);
return reply.status(500).send({ error: "Failed to get presigned URL for part" });
}
}

async completeMultipartUpload(request: FastifyRequest, reply: FastifyReply): Promise<void> {
try {
const userId = (request as any).user?.userId;
if (!userId) {
return reply.status(401).send({ error: "Unauthorized" });
}

const { uploadId, objectName, parts } = request.body as {
uploadId: string;
objectName: string;
parts: Array<{ PartNumber: number; ETag: string }>;
};

if (!uploadId || !objectName || !parts || !Array.isArray(parts)) {
return reply.status(400).send({ error: "uploadId, objectName, and parts are required" });
}

await this.fileService.completeMultipartUpload(objectName, uploadId, parts);

return reply.status(200).send({
message: "Multipart upload completed successfully",
objectName,
});
} catch (error) {
console.error("[Multipart] Error completing multipart upload:", error);
return reply.status(500).send({ error: "Failed to complete multipart upload" });
}
}

async abortMultipartUpload(request: FastifyRequest, reply: FastifyReply): Promise<void> {
try {
const userId = (request as any).user?.userId;
if (!userId) {
return reply.status(401).send({ error: "Unauthorized" });
}

const { uploadId, objectName } = request.body as {
uploadId: string;
objectName: string;
};

if (!uploadId || !objectName) {
return reply.status(400).send({ error: "uploadId and objectName are required" });
}

await this.fileService.abortMultipartUpload(objectName, uploadId);

return reply.status(200).send({
message: "Multipart upload aborted successfully",
});
} catch (error) {
console.error("[Multipart] Error aborting multipart upload:", error);
return reply.status(500).send({ error: "Failed to abort multipart upload" });
}
}
}

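Taken together, these controller methods give clients a three-step multipart flow: initialize, upload each part to a presigned URL, then complete (or abort on failure). A minimal client-side sketch of that flow, assuming `fetch` in a browser with an authenticated session and the `/files/multipart/*` routes registered further below; the part size and helper name are illustrative choices, not values fixed by the API:

```
// Illustrative client flow for the multipart endpoints above
// (hypothetical helper; endpoint paths match routes.ts).
const PART_SIZE = 50 * 1024 * 1024; // assumed 50MB parts

async function multipartUpload(file: File, filename: string, extension: string) {
  // 1. Initialize the upload and get an uploadId + objectName
  const { uploadId, objectName } = await (
    await fetch("/files/multipart/create", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ filename, extension }),
    })
  ).json();

  const parts: Array<{ PartNumber: number; ETag: string }> = [];
  try {
    // 2. Upload each part to its presigned URL and collect the ETags
    for (let i = 0; i * PART_SIZE < file.size; i++) {
      const partNumber = i + 1;
      const query = new URLSearchParams({ uploadId, objectName, partNumber: String(partNumber) });
      const { url } = await (await fetch(`/files/multipart/part-url?${query}`)).json();

      const chunk = file.slice(i * PART_SIZE, (i + 1) * PART_SIZE);
      const res = await fetch(url, { method: "PUT", body: chunk });
      // S3-style storage returns the part's ETag as a response header;
      // the bucket's CORS config must expose it for this to work in a browser.
      parts.push({ PartNumber: partNumber, ETag: res.headers.get("ETag") ?? "" });
    }

    // 3. Combine the parts into the final object
    await fetch("/files/multipart/complete", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ uploadId, objectName, parts }),
    });
    return objectName;
  } catch (err) {
    // On any failure, abort so the storage backend discards uploaded parts
    await fetch("/files/multipart/abort", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ uploadId, objectName }),
    });
    throw err;
  }
}
```
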
@@ -9,6 +9,7 @@ export const RegisterFileSchema = z.object({
invalid_type_error: "Size must be a number",
}),
objectName: z.string().min(1, "objectName is required"),
folderId: z.string().optional(),
});

export const CheckFileSchema = z.object({
@@ -20,6 +21,7 @@ export const CheckFileSchema = z.object({
invalid_type_error: "Size must be a number",
}),
objectName: z.string().min(1, "objectName is required"),
folderId: z.string().optional(),
});

export type RegisterFileInput = z.infer<typeof RegisterFileSchema>;
@@ -30,4 +32,15 @@ export const UpdateFileSchema = z.object({
description: z.string().optional().nullable().describe("The file description"),
});

export const MoveFileSchema = z.object({
folderId: z.string().nullable(),
});

export const ListFilesSchema = z.object({
folderId: z.string().optional().describe("The folder ID"),
recursive: z.string().optional().default("true").describe("Include files from subfolders"),
});

export type UpdateFileInput = z.infer<typeof UpdateFileSchema>;
export type MoveFileInput = z.infer<typeof MoveFileSchema>;
export type ListFilesInput = z.infer<typeof ListFilesSchema>;

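Because these schemas validate querystrings, booleans travel as strings: `recursive` stays a `z.string()` defaulting to `"true"`, and the controller treats anything other than the literal `"false"` as true. A small sketch of the resulting parsing behavior, assuming zod as imported in this module:

```
// Hypothetical illustration of the querystring semantics above.
import { z } from "zod";

const ListFilesSchema = z.object({
  folderId: z.string().optional(),
  recursive: z.string().optional().default("true"),
});

const a = ListFilesSchema.parse({});                     // recursive: "true" -> recursive listing from the root
const b = ListFilesSchema.parse({ recursive: "false" }); // direct children only
const c = ListFilesSchema.parse({ folderId: "null" });   // "null" is treated as the root folder by the controller
```
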
@@ -2,7 +2,7 @@ import { FastifyInstance, FastifyReply, FastifyRequest } from "fastify";
import { z } from "zod";

import { FileController } from "./controller";
import { CheckFileSchema, RegisterFileSchema, UpdateFileSchema } from "./dto";
import { CheckFileSchema, ListFilesSchema, MoveFileSchema, RegisterFileSchema, UpdateFileSchema } from "./dto";

export async function fileRoutes(app: FastifyInstance) {
const fileController = new FileController();
@@ -62,6 +62,7 @@ export async function fileRoutes(app: FastifyInstance) {
size: z.string().describe("The file size"),
objectName: z.string().describe("The object name of the file"),
userId: z.string().describe("The user ID"),
folderId: z.string().nullable().describe("The folder ID"),
createdAt: z.date().describe("The file creation date"),
updatedAt: z.date().describe("The file last update date"),
}),
@@ -78,6 +79,7 @@ export async function fileRoutes(app: FastifyInstance) {
app.post(
"/files/check",
{
preValidation,
schema: {
tags: ["File"],
operationId: "checkFile",
@@ -104,15 +106,16 @@ export async function fileRoutes(app: FastifyInstance) {
);

app.get(
"/files/:objectName/download",
"/files/download-url",
{
schema: {
tags: ["File"],
operationId: "getDownloadUrl",
summary: "Get Download URL",
description: "Generates a pre-signed URL for downloading a private file",
params: z.object({
description: "Generates a pre-signed URL for downloading a file",
querystring: z.object({
objectName: z.string().min(1, "The objectName is required"),
password: z.string().optional().describe("Share password if required"),
}),
response: {
200: z.object({
@@ -128,6 +131,46 @@ export async function fileRoutes(app: FastifyInstance) {
fileController.getDownloadUrl.bind(fileController)
);

app.get(
"/embed/:id",
{
schema: {
tags: ["File"],
operationId: "embedFile",
summary: "Embed File (Public Access)",
description:
"Returns a media file (image/video/audio) for public embedding without authentication. Only works for media files.",
params: z.object({
id: z.string().min(1, "File ID is required").describe("The file ID"),
}),
response: {
400: z.object({ error: z.string().describe("Error message") }),
403: z.object({ error: z.string().describe("Error message - not a media file") }),
404: z.object({ error: z.string().describe("Error message") }),
500: z.object({ error: z.string().describe("Error message") }),
},
},
},
fileController.embedFile.bind(fileController)
);

app.get(
"/files/download",
{
schema: {
tags: ["File"],
operationId: "downloadFile",
summary: "Download File",
description: "Downloads a file directly (returns file content)",
querystring: z.object({
objectName: z.string().min(1, "The objectName is required"),
password: z.string().optional().describe("Share password if required"),
}),
},
},
fileController.downloadFile.bind(fileController)
);

app.get(
"/files",
{
@@ -136,7 +179,8 @@ export async function fileRoutes(app: FastifyInstance) {
tags: ["File"],
operationId: "listFiles",
summary: "List Files",
description: "Lists user files",
description: "Lists user files recursively by default, optionally filtered by folder",
querystring: ListFilesSchema,
response: {
200: z.object({
files: z.array(
@@ -148,6 +192,8 @@ export async function fileRoutes(app: FastifyInstance) {
size: z.string().describe("The file size"),
objectName: z.string().describe("The object name of the file"),
userId: z.string().describe("The user ID"),
folderId: z.string().nullable().describe("The folder ID"),
relativePath: z.string().nullable().describe("The relative path (only for recursive listing)"),
createdAt: z.date().describe("The file creation date"),
updatedAt: z.date().describe("The file last update date"),
})
@@ -160,6 +206,84 @@ export async function fileRoutes(app: FastifyInstance) {
fileController.listFiles.bind(fileController)
);

app.patch(
"/files/:id",
{
preValidation,
schema: {
tags: ["File"],
operationId: "updateFile",
summary: "Update File Metadata",
description: "Updates file metadata in the database",
params: z.object({
id: z.string().min(1, "The file id is required").describe("The file ID"),
}),
body: UpdateFileSchema,
response: {
200: z.object({
file: z.object({
id: z.string().describe("The file ID"),
name: z.string().describe("The file name"),
description: z.string().nullable().describe("The file description"),
extension: z.string().describe("The file extension"),
size: z.string().describe("The file size"),
objectName: z.string().describe("The object name of the file"),
userId: z.string().describe("The user ID"),
folderId: z.string().nullable().describe("The folder ID"),
createdAt: z.date().describe("The file creation date"),
updatedAt: z.date().describe("The file last update date"),
}),
message: z.string().describe("Success message"),
}),
400: z.object({ error: z.string().describe("Error message") }),
401: z.object({ error: z.string().describe("Error message") }),
403: z.object({ error: z.string().describe("Error message") }),
404: z.object({ error: z.string().describe("Error message") }),
},
},
},
fileController.updateFile.bind(fileController)
);

app.put(
"/files/:id/move",
{
preValidation,
schema: {
tags: ["File"],
operationId: "moveFile",
summary: "Move File",
description: "Moves a file to a different folder",
params: z.object({
id: z.string().min(1, "The file id is required").describe("The file ID"),
}),
body: MoveFileSchema,
response: {
200: z.object({
file: z.object({
id: z.string().describe("The file ID"),
name: z.string().describe("The file name"),
description: z.string().nullable().describe("The file description"),
extension: z.string().describe("The file extension"),
size: z.string().describe("The file size"),
objectName: z.string().describe("The object name of the file"),
userId: z.string().describe("The user ID"),
folderId: z.string().nullable().describe("The folder ID"),
createdAt: z.date().describe("The file creation date"),
updatedAt: z.date().describe("The file last update date"),
}),
message: z.string().describe("Success message"),
}),
400: z.object({ error: z.string().describe("Error message") }),
401: z.object({ error: z.string().describe("Error message") }),
403: z.object({ error: z.string().describe("Error message") }),
404: z.object({ error: z.string().describe("Error message") }),
},
},
},
fileController.moveFile.bind(fileController)
);

app.delete(
"/files/:id",
{
@@ -186,41 +310,121 @@ export async function fileRoutes(app: FastifyInstance) {
fileController.deleteFile.bind(fileController)
);

app.patch(
"/files/:id",
// Multipart upload routes
app.post(
"/files/multipart/create",
{
preValidation,
schema: {
tags: ["File"],
operationId: "updateFile",
summary: "Update File Metadata",
description: "Updates file metadata in the database",
params: z.object({
id: z.string().min(1, "The file id is required").describe("The file ID"),
operationId: "createMultipartUpload",
summary: "Create Multipart Upload",
description:
"Initializes a multipart upload for large files (≥100MB). Returns uploadId for subsequent part uploads.",
body: z.object({
filename: z.string().min(1).describe("The filename without extension"),
extension: z.string().min(1).describe("The file extension"),
}),
body: UpdateFileSchema,
response: {
200: z.object({
file: z.object({
id: z.string().describe("The file ID"),
name: z.string().describe("The file name"),
description: z.string().nullable().describe("The file description"),
extension: z.string().describe("The file extension"),
size: z.string().describe("The file size"),
objectName: z.string().describe("The object name of the file"),
userId: z.string().describe("The user ID"),
createdAt: z.date().describe("The file creation date"),
updatedAt: z.date().describe("The file last update date"),
}),
uploadId: z.string().describe("The upload ID for this multipart upload"),
objectName: z.string().describe("The object name in storage"),
message: z.string().describe("Success message"),
}),
400: z.object({ error: z.string().describe("Error message") }),
401: z.object({ error: z.string().describe("Error message") }),
403: z.object({ error: z.string().describe("Error message") }),
404: z.object({ error: z.string().describe("Error message") }),
400: z.object({ error: z.string() }),
401: z.object({ error: z.string() }),
500: z.object({ error: z.string() }),
},
},
},
fileController.updateFile.bind(fileController)
fileController.createMultipartUpload.bind(fileController)
);

app.get(
"/files/multipart/part-url",
{
preValidation,
schema: {
tags: ["File"],
operationId: "getMultipartPartUrl",
summary: "Get Presigned URL for Part",
description: "Gets a presigned URL for uploading a specific part of a multipart upload",
querystring: z.object({
uploadId: z.string().min(1).describe("The multipart upload ID"),
objectName: z.string().min(1).describe("The object name"),
partNumber: z.string().min(1).describe("The part number (1-10000)"),
}),
response: {
200: z.object({
url: z.string().describe("The presigned URL for uploading this part"),
}),
400: z.object({ error: z.string() }),
401: z.object({ error: z.string() }),
500: z.object({ error: z.string() }),
},
},
},
fileController.getMultipartPartUrl.bind(fileController)
);

app.post(
"/files/multipart/complete",
{
preValidation,
schema: {
tags: ["File"],
operationId: "completeMultipartUpload",
summary: "Complete Multipart Upload",
description: "Completes a multipart upload by combining all uploaded parts",
body: z.object({
uploadId: z.string().min(1).describe("The multipart upload ID"),
objectName: z.string().min(1).describe("The object name"),
parts: z
.array(
z.object({
PartNumber: z.number().min(1).max(10000).describe("The part number"),
ETag: z.string().min(1).describe("The ETag returned from uploading the part"),
})
)
.describe("Array of uploaded parts"),
}),
response: {
200: z.object({
message: z.string().describe("Success message"),
objectName: z.string().describe("The completed object name"),
}),
400: z.object({ error: z.string() }),
401: z.object({ error: z.string() }),
500: z.object({ error: z.string() }),
},
},
},
fileController.completeMultipartUpload.bind(fileController)
);

app.post(
"/files/multipart/abort",
{
preValidation,
schema: {
tags: ["File"],
operationId: "abortMultipartUpload",
summary: "Abort Multipart Upload",
description: "Aborts a multipart upload and cleans up all uploaded parts",
body: z.object({
uploadId: z.string().min(1).describe("The multipart upload ID"),
objectName: z.string().min(1).describe("The object name"),
}),
response: {
200: z.object({
message: z.string().describe("Success message"),
}),
400: z.object({ error: z.string() }),
401: z.object({ error: z.string() }),
500: z.object({ error: z.string() }),
},
},
},
fileController.abortMultipartUpload.bind(fileController)
);
}

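The download surface is now querystring-based: `/files/download-url` returns a presigned link while `/files/download` streams the bytes through the API, and both accept an optional share password. A hedged browser-side sketch; the object name is hypothetical, and the password only matters when every share containing the file is password-protected:

```
// Illustrative calls to the reworked download routes above.
async function downloadViaPresignedUrl(objectName: string) {
  const query = new URLSearchParams({ objectName });
  // The API answers with { url, expiresIn }; the browser then fetches
  // the bytes straight from storage.
  const { url } = await (await fetch(`/files/download-url?${query}`)).json();
  window.location.assign(url);
}

async function downloadThroughApi(objectName: string, password?: string) {
  const query = new URLSearchParams({ objectName });
  if (password) query.set("password", password);
  // /files/download proxies the object stream through the server.
  const res = await fetch(`/files/download?${query}`);
  if (!res.ok) throw new Error(`Download failed: ${res.status}`);
  return await res.blob();
}
```
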
@@ -1,5 +1,3 @@
import { isS3Enabled } from "../../config/storage.config";
import { FilesystemStorageProvider } from "../../providers/filesystem-storage.provider";
import { S3StorageProvider } from "../../providers/s3-storage.provider";
import { StorageProvider } from "../../types/storage";

@@ -7,29 +5,16 @@ export class FileService {
private storageProvider: StorageProvider;

constructor() {
if (isS3Enabled) {
this.storageProvider = new S3StorageProvider();
} else {
this.storageProvider = FilesystemStorageProvider.getInstance();
}
// Always use S3 (Garage internal or external S3)
this.storageProvider = new S3StorageProvider();
}

async getPresignedPutUrl(objectName: string, expires: number): Promise<string> {
try {
return await this.storageProvider.getPresignedPutUrl(objectName, expires);
} catch (err) {
console.error("Error in presignedPutObject:", err);
throw err;
}
async getPresignedPutUrl(objectName: string, expires: number = 3600): Promise<string> {
return await this.storageProvider.getPresignedPutUrl(objectName, expires);
}

async getPresignedGetUrl(objectName: string, expires: number, fileName?: string): Promise<string> {
try {
return await this.storageProvider.getPresignedGetUrl(objectName, expires, fileName);
} catch (err) {
console.error("Error in presignedGetObject:", err);
throw err;
}
async getPresignedGetUrl(objectName: string, expires: number = 3600, fileName?: string): Promise<string> {
return await this.storageProvider.getPresignedGetUrl(objectName, expires, fileName);
}

async deleteObject(objectName: string): Promise<void> {
@@ -41,7 +26,38 @@ export class FileService {
}
}

isFilesystemMode(): boolean {
return !isS3Enabled;
async getObjectStream(objectName: string): Promise<NodeJS.ReadableStream> {
try {
return await this.storageProvider.getObjectStream(objectName);
} catch (err) {
console.error("Error getting object stream:", err);
throw err;
}
}

// Multipart upload methods
async createMultipartUpload(objectName: string): Promise<string> {
return await this.storageProvider.createMultipartUpload(objectName);
}

async getPresignedPartUrl(
objectName: string,
uploadId: string,
partNumber: number,
expires: number = 3600
): Promise<string> {
return await this.storageProvider.getPresignedPartUrl(objectName, uploadId, partNumber, expires);
}

async completeMultipartUpload(
objectName: string,
uploadId: string,
parts: Array<{ PartNumber: number; ETag: string }>
): Promise<void> {
await this.storageProvider.completeMultipartUpload(objectName, uploadId, parts);
}

async abortMultipartUpload(objectName: string, uploadId: string): Promise<void> {
await this.storageProvider.abortMultipartUpload(objectName, uploadId);
}
}

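With the filesystem branch gone, `FileService` always delegates to `S3StorageProvider`, so that provider has to cover streaming plus the full multipart lifecycle. A sketch of the `StorageProvider` surface implied by the calls above, inferred from this service's usage rather than taken from `types/storage` itself (the real interface may declare more members):

```
// Inferred from FileService's calls; member names match the delegations above.
interface StorageProvider {
  getPresignedPutUrl(objectName: string, expires: number): Promise<string>;
  getPresignedGetUrl(objectName: string, expires: number, fileName?: string): Promise<string>;
  deleteObject(objectName: string): Promise<void>;
  getObjectStream(objectName: string): Promise<NodeJS.ReadableStream>;
  createMultipartUpload(objectName: string): Promise<string>;
  getPresignedPartUrl(objectName: string, uploadId: string, partNumber: number, expires: number): Promise<string>;
  completeMultipartUpload(objectName: string, uploadId: string, parts: Array<{ PartNumber: number; ETag: string }>): Promise<void>;
  abortMultipartUpload(objectName: string, uploadId: string): Promise<void>;
}
```
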
@@ -1,345 +0,0 @@
import * as fs from "fs";
import * as path from "path";

import { getTempFilePath } from "../../config/directories.config";
import { FilesystemStorageProvider } from "../../providers/filesystem-storage.provider";

export interface ChunkMetadata {
fileId: string;
chunkIndex: number;
totalChunks: number;
chunkSize: number;
totalSize: number;
fileName: string;
isLastChunk: boolean;
}

export interface ChunkInfo {
fileId: string;
fileName: string;
totalSize: number;
totalChunks: number;
uploadedChunks: Set<number>;
tempPath: string;
createdAt: number;
}

export class ChunkManager {
private static instance: ChunkManager;
private activeUploads = new Map<string, ChunkInfo>();
private finalizingUploads = new Set<string>(); // Track uploads currently being finalized
private cleanupInterval: NodeJS.Timeout;

private constructor() {
// Cleanup expired uploads every 30 minutes
this.cleanupInterval = setInterval(
() => {
this.cleanupExpiredUploads();
},
30 * 60 * 1000
);
}

public static getInstance(): ChunkManager {
if (!ChunkManager.instance) {
ChunkManager.instance = new ChunkManager();
}
return ChunkManager.instance;
}

/**
* Process a chunk upload with streaming
*/
async processChunk(
metadata: ChunkMetadata,
inputStream: NodeJS.ReadableStream,
originalObjectName: string
): Promise<{ isComplete: boolean; finalPath?: string }> {
const startTime = Date.now();
const { fileId, chunkIndex, totalChunks, fileName, totalSize, isLastChunk } = metadata;

console.log(`Processing chunk ${chunkIndex + 1}/${totalChunks} for file ${fileName} (${fileId})`);

let chunkInfo = this.activeUploads.get(fileId);
if (!chunkInfo) {
if (chunkIndex !== 0) {
throw new Error("First chunk must be chunk 0");
}

const tempPath = getTempFilePath(fileId);
chunkInfo = {
fileId,
fileName,
totalSize,
totalChunks,
uploadedChunks: new Set(),
tempPath,
createdAt: Date.now(),
};
this.activeUploads.set(fileId, chunkInfo);
console.log(`Created new upload session for ${fileName} at ${tempPath}`);
}

console.log(
`Validating chunk ${chunkIndex} (total: ${totalChunks}, uploaded: ${Array.from(chunkInfo.uploadedChunks).join(",")})`
);

if (chunkIndex < 0 || chunkIndex >= totalChunks) {
throw new Error(`Invalid chunk index: ${chunkIndex} (must be 0-${totalChunks - 1})`);
}

if (chunkInfo.uploadedChunks.has(chunkIndex)) {
console.log(`Chunk ${chunkIndex} already uploaded, treating as success`);

if (isLastChunk && chunkInfo.uploadedChunks.size === totalChunks) {
if (this.finalizingUploads.has(fileId)) {
console.log(`Upload ${fileId} is already being finalized, waiting...`);
return { isComplete: false };
}

console.log(`All chunks uploaded, finalizing ${fileName}`);
return await this.finalizeUpload(chunkInfo, metadata, originalObjectName);
}

return { isComplete: false };
}

const tempDir = path.dirname(chunkInfo.tempPath);
await fs.promises.mkdir(tempDir, { recursive: true });
console.log(`Temp directory ensured: ${tempDir}`);

await this.writeChunkToFile(chunkInfo.tempPath, inputStream, chunkIndex === 0);

chunkInfo.uploadedChunks.add(chunkIndex);

try {
const stats = await fs.promises.stat(chunkInfo.tempPath);
const processingTime = Date.now() - startTime;
console.log(
`Chunk ${chunkIndex + 1}/${totalChunks} uploaded successfully in ${processingTime}ms. Temp file size: ${stats.size} bytes`
);
} catch (error) {
console.warn(`Could not get temp file stats:`, error);
}

console.log(
`Checking completion: isLastChunk=${isLastChunk}, uploadedChunks.size=${chunkInfo.uploadedChunks.size}, totalChunks=${totalChunks}`
);

if (isLastChunk && chunkInfo.uploadedChunks.size === totalChunks) {
if (this.finalizingUploads.has(fileId)) {
console.log(`Upload ${fileId} is already being finalized, waiting...`);
return { isComplete: false };
}

console.log(`All chunks uploaded, finalizing ${fileName}`);

const uploadedChunksArray = Array.from(chunkInfo.uploadedChunks).sort((a, b) => a - b);
console.log(`Uploaded chunks in order: ${uploadedChunksArray.join(", ")}`);

const expectedChunks = Array.from({ length: totalChunks }, (_, i) => i);
const missingChunks = expectedChunks.filter((chunk) => !chunkInfo.uploadedChunks.has(chunk));

if (missingChunks.length > 0) {
throw new Error(`Missing chunks: ${missingChunks.join(", ")}`);
}

return await this.finalizeUpload(chunkInfo, metadata, originalObjectName);
} else {
console.log(
`Not ready for finalization: isLastChunk=${isLastChunk}, uploadedChunks.size=${chunkInfo.uploadedChunks.size}, totalChunks=${totalChunks}`
);
}

return { isComplete: false };
}

/**
* Write chunk to file using streaming
*/
private async writeChunkToFile(
filePath: string,
inputStream: NodeJS.ReadableStream,
isFirstChunk: boolean
): Promise<void> {
return new Promise((resolve, reject) => {
console.log(`Writing chunk to ${filePath} (first: ${isFirstChunk})`);

if (isFirstChunk) {
const writeStream = fs.createWriteStream(filePath, {
highWaterMark: 64 * 1024 * 1024, // 64MB buffer for better performance
});
writeStream.on("error", (error) => {
console.error("Write stream error:", error);
reject(error);
});
writeStream.on("finish", () => {
console.log("Write stream finished successfully");
resolve();
});
inputStream.pipe(writeStream);
} else {
const writeStream = fs.createWriteStream(filePath, {
flags: "a",
highWaterMark: 64 * 1024 * 1024, // 64MB buffer for better performance
});
writeStream.on("error", (error) => {
console.error("Write stream error:", error);
reject(error);
});
writeStream.on("finish", () => {
console.log("Write stream finished successfully");
resolve();
});
inputStream.pipe(writeStream);
}
});
}

/**
* Finalize upload by moving temp file to final location and encrypting (if enabled)
*/
private async finalizeUpload(
chunkInfo: ChunkInfo,
metadata: ChunkMetadata,
originalObjectName: string
): Promise<{ isComplete: boolean; finalPath: string }> {
// Mark as finalizing to prevent race conditions
this.finalizingUploads.add(chunkInfo.fileId);

try {
console.log(`Finalizing upload for ${chunkInfo.fileName}`);

const tempStats = await fs.promises.stat(chunkInfo.tempPath);
console.log(`Temp file size: ${tempStats.size} bytes, expected: ${chunkInfo.totalSize} bytes`);

if (tempStats.size !== chunkInfo.totalSize) {
console.warn(`Size mismatch! Temp: ${tempStats.size}, Expected: ${chunkInfo.totalSize}`);
}

const provider = FilesystemStorageProvider.getInstance();
const finalObjectName = originalObjectName;
const filePath = provider.getFilePath(finalObjectName);
const dir = path.dirname(filePath);

console.log(`Starting finalization: ${finalObjectName}`);

await fs.promises.mkdir(dir, { recursive: true });

const tempReadStream = fs.createReadStream(chunkInfo.tempPath, {
highWaterMark: 64 * 1024 * 1024, // 64MB buffer for better performance
});
const writeStream = fs.createWriteStream(filePath, {
highWaterMark: 64 * 1024 * 1024,
});
const encryptStream = provider.createEncryptStream();

await new Promise<void>((resolve, reject) => {
const startTime = Date.now();

tempReadStream
.pipe(encryptStream)
.pipe(writeStream)
.on("finish", () => {
const duration = Date.now() - startTime;
console.log(`File processed and saved to: ${filePath} in ${duration}ms`);
resolve();
})
.on("error", (error) => {
console.error("Error during processing:", error);
reject(error);
});
});

console.log(`File successfully uploaded and processed: ${finalObjectName}`);

await this.cleanupTempFile(chunkInfo.tempPath);

this.activeUploads.delete(chunkInfo.fileId);
this.finalizingUploads.delete(chunkInfo.fileId);

return { isComplete: true, finalPath: finalObjectName };
} catch (error) {
console.error("Error during finalization:", error);
await this.cleanupTempFile(chunkInfo.tempPath);
this.activeUploads.delete(chunkInfo.fileId);
this.finalizingUploads.delete(chunkInfo.fileId);
throw error;
}
}

/**
* Cleanup temporary file
*/
private async cleanupTempFile(tempPath: string): Promise<void> {
try {
await fs.promises.access(tempPath);
await fs.promises.unlink(tempPath);
console.log(`Temp file cleaned up: ${tempPath}`);
} catch (error: any) {
if (error.code === "ENOENT") {
console.log(`Temp file already cleaned up: ${tempPath}`);
} else {
console.warn(`Failed to cleanup temp file ${tempPath}:`, error);
}
}
}

/**
* Cleanup expired uploads (older than 2 hours)
*/
private async cleanupExpiredUploads(): Promise<void> {
const now = Date.now();
const maxAge = 2 * 60 * 60 * 1000; // 2 hours

for (const [fileId, chunkInfo] of this.activeUploads.entries()) {
if (now - chunkInfo.createdAt > maxAge) {
console.log(`Cleaning up expired upload: ${fileId}`);
await this.cleanupTempFile(chunkInfo.tempPath);
this.activeUploads.delete(fileId);
this.finalizingUploads.delete(fileId);
}
}
}

/**
* Get upload progress
*/
getUploadProgress(fileId: string): { uploaded: number; total: number; percentage: number } | null {
const chunkInfo = this.activeUploads.get(fileId);
if (!chunkInfo) return null;

return {
uploaded: chunkInfo.uploadedChunks.size,
total: chunkInfo.totalChunks,
percentage: Math.round((chunkInfo.uploadedChunks.size / chunkInfo.totalChunks) * 100),
};
}

/**
* Cancel upload
*/
async cancelUpload(fileId: string): Promise<void> {
const chunkInfo = this.activeUploads.get(fileId);
if (chunkInfo) {
await this.cleanupTempFile(chunkInfo.tempPath);
this.activeUploads.delete(fileId);
this.finalizingUploads.delete(fileId);
}
}

/**
* Cleanup on shutdown
*/
destroy(): void {
if (this.cleanupInterval) {
clearInterval(this.cleanupInterval);
}

for (const [fileId, chunkInfo] of this.activeUploads.entries()) {
this.cleanupTempFile(chunkInfo.tempPath);
}
this.activeUploads.clear();
this.finalizingUploads.clear();
}
}
@@ -1,262 +0,0 @@
import * as fs from "fs";
import { pipeline } from "stream/promises";
import { FastifyReply, FastifyRequest } from "fastify";

import { FilesystemStorageProvider } from "../../providers/filesystem-storage.provider";
import { ChunkManager, ChunkMetadata } from "./chunk-manager";

export class FilesystemController {
private chunkManager = ChunkManager.getInstance();

/**
* Safely encode filename for Content-Disposition header
*/
private encodeFilenameForHeader(filename: string): string {
if (!filename || filename.trim() === "") {
return 'attachment; filename="download"';
}

let sanitized = filename
.replace(/"/g, "'")
.replace(/[\r\n\t\v\f]/g, "")
.replace(/[\\|/]/g, "-")
.replace(/[<>:|*?]/g, "");

sanitized = sanitized
.split("")
.filter((char) => {
const code = char.charCodeAt(0);
return code >= 32 && !(code >= 127 && code <= 159);
})
.join("")
.trim();

if (!sanitized) {
return 'attachment; filename="download"';
}

const asciiSafe = sanitized
.split("")
.filter((char) => {
const code = char.charCodeAt(0);
return code >= 32 && code <= 126;
})
.join("");

if (asciiSafe && asciiSafe.trim()) {
const encoded = encodeURIComponent(sanitized);
return `attachment; filename="${asciiSafe}"; filename*=UTF-8''${encoded}`;
} else {
const encoded = encodeURIComponent(sanitized);
return `attachment; filename*=UTF-8''${encoded}`;
}
}

async upload(request: FastifyRequest, reply: FastifyReply) {
try {
const { token } = request.params as { token: string };

const provider = FilesystemStorageProvider.getInstance();

const tokenData = provider.validateUploadToken(token);

if (!tokenData) {
return reply.status(400).send({ error: "Invalid or expired upload token" });
}

const chunkMetadata = this.extractChunkMetadata(request);

if (chunkMetadata) {
try {
const result = await this.handleChunkedUpload(request, chunkMetadata, tokenData.objectName);

if (result.isComplete) {
provider.consumeUploadToken(token);
reply.status(200).send({
message: "File uploaded successfully",
objectName: result.finalPath,
finalObjectName: result.finalPath,
});
} else {
reply.status(200).send({
message: "Chunk uploaded successfully",
progress: this.chunkManager.getUploadProgress(chunkMetadata.fileId),
});
}
} catch (chunkError: any) {
return reply.status(400).send({
error: chunkError.message || "Chunked upload failed",
details: chunkError.toString(),
});
}
} else {
await this.uploadFileStream(request, provider, tokenData.objectName);
provider.consumeUploadToken(token);
reply.status(200).send({ message: "File uploaded successfully" });
}
} catch (error) {
return reply.status(500).send({ error: "Internal server error" });
}
}

private async uploadFileStream(request: FastifyRequest, provider: FilesystemStorageProvider, objectName: string) {
await provider.uploadFileFromStream(objectName, request.raw);
}

/**
* Extract chunk metadata from request headers
*/
private extractChunkMetadata(request: FastifyRequest): ChunkMetadata | null {
const fileId = request.headers["x-file-id"] as string;
const chunkIndex = request.headers["x-chunk-index"] as string;
const totalChunks = request.headers["x-total-chunks"] as string;
const chunkSize = request.headers["x-chunk-size"] as string;
const totalSize = request.headers["x-total-size"] as string;
const fileName = request.headers["x-file-name"] as string;
const isLastChunk = request.headers["x-is-last-chunk"] as string;

if (!fileId || !chunkIndex || !totalChunks || !chunkSize || !totalSize || !fileName) {
return null;
}

const metadata = {
fileId,
chunkIndex: parseInt(chunkIndex, 10),
totalChunks: parseInt(totalChunks, 10),
chunkSize: parseInt(chunkSize, 10),
totalSize: parseInt(totalSize, 10),
fileName,
isLastChunk: isLastChunk === "true",
};

return metadata;
}

/**
* Handle chunked upload with streaming
*/
private async handleChunkedUpload(request: FastifyRequest, metadata: ChunkMetadata, originalObjectName: string) {
const stream = request.raw;

stream.on("error", (error) => {
console.error("Request stream error:", error);
});

return await this.chunkManager.processChunk(metadata, stream, originalObjectName);
}

/**
* Get upload progress for chunked uploads
*/
async getUploadProgress(request: FastifyRequest, reply: FastifyReply) {
try {
const { fileId } = request.params as { fileId: string };

const progress = this.chunkManager.getUploadProgress(fileId);

if (!progress) {
return reply.status(404).send({ error: "Upload not found" });
}

reply.status(200).send(progress);
} catch (error) {
return reply.status(500).send({ error: "Internal server error" });
}
}

/**
* Cancel chunked upload
*/
async cancelUpload(request: FastifyRequest, reply: FastifyReply) {
try {
const { fileId } = request.params as { fileId: string };

await this.chunkManager.cancelUpload(fileId);

reply.status(200).send({ message: "Upload cancelled successfully" });
} catch (error) {
return reply.status(500).send({ error: "Internal server error" });
}
}

async download(request: FastifyRequest, reply: FastifyReply) {
|
||||
try {
|
||||
const { token } = request.params as { token: string };
|
||||
|
||||
const provider = FilesystemStorageProvider.getInstance();
|
||||
|
||||
const tokenData = provider.validateDownloadToken(token);
|
||||
|
||||
if (!tokenData) {
|
||||
return reply.status(400).send({ error: "Invalid or expired download token" });
|
||||
}
|
||||
|
||||
const filePath = provider.getFilePath(tokenData.objectName);
|
||||
const stats = await fs.promises.stat(filePath);
|
||||
const fileSize = stats.size;
|
||||
const isLargeFile = fileSize > 50 * 1024 * 1024;
|
||||
|
||||
const fileName = tokenData.fileName || "download";
|
||||
const range = request.headers.range;
|
||||
|
||||
reply.header("Content-Disposition", this.encodeFilenameForHeader(fileName));
|
||||
reply.header("Content-Type", "application/octet-stream");
|
||||
reply.header("Accept-Ranges", "bytes");
|
||||
|
||||
if (range) {
|
||||
const parts = range.replace(/bytes=/, "").split("-");
|
||||
const start = parseInt(parts[0], 10);
|
||||
const end = parts[1] ? parseInt(parts[1], 10) : fileSize - 1;
|
||||
const chunkSize = end - start + 1;
|
||||
|
||||
reply.status(206);
|
||||
reply.header("Content-Range", `bytes ${start}-${end}/${fileSize}`);
|
||||
reply.header("Content-Length", chunkSize);
|
||||
|
||||
if (isLargeFile) {
|
||||
await this.downloadLargeFileRange(reply, provider, tokenData.objectName, start, end);
|
||||
} else {
|
||||
const buffer = await provider.downloadFile(tokenData.objectName);
|
||||
const chunk = buffer.slice(start, end + 1);
|
||||
reply.send(chunk);
|
||||
}
|
||||
} else {
|
||||
reply.header("Content-Length", fileSize);
|
||||
|
||||
if (isLargeFile) {
|
||||
await this.downloadLargeFile(reply, provider, filePath);
|
||||
} else {
|
||||
const buffer = await provider.downloadFile(tokenData.objectName);
|
||||
reply.send(buffer);
|
||||
}
|
||||
}
|
||||
|
||||
provider.consumeDownloadToken(token);
|
||||
} catch (error) {
|
||||
return reply.status(500).send({ error: "Internal server error" });
|
||||
}
|
||||
}
|
||||
|
||||
private async downloadLargeFile(reply: FastifyReply, provider: FilesystemStorageProvider, filePath: string) {
|
||||
const readStream = fs.createReadStream(filePath);
|
||||
const decryptStream = provider.createDecryptStream();
|
||||
|
||||
try {
|
||||
await pipeline(readStream, decryptStream, reply.raw);
|
||||
} catch (error) {
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
private async downloadLargeFileRange(
|
||||
reply: FastifyReply,
|
||||
provider: FilesystemStorageProvider,
|
||||
objectName: string,
|
||||
start: number,
|
||||
end: number
|
||||
) {
|
||||
const buffer = await provider.downloadFile(objectName);
|
||||
const chunk = buffer.slice(start, end + 1);
|
||||
reply.send(chunk);
|
||||
}
|
||||
}
|
||||
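For reference, a minimal client-side sketch of the chunk protocol that `extractChunkMetadata` expects. Only the `x-*` header names come from the controller above; the endpoint path, token handling, and chunk size are assumptions for illustration:

```ts
// Hypothetical client for the chunked upload endpoint above.
async function uploadInChunks(file: Blob, fileName: string, token: string, chunkSize = 8 * 1024 * 1024) {
  const fileId = crypto.randomUUID(); // groups all chunks of one logical upload
  const totalChunks = Math.ceil(file.size / chunkSize);

  for (let index = 0; index < totalChunks; index++) {
    const chunk = file.slice(index * chunkSize, (index + 1) * chunkSize);

    const response = await fetch(`/filesystem/upload/${token}`, {
      method: "PUT",
      body: chunk,
      headers: {
        "x-file-id": fileId,
        "x-chunk-index": String(index), // 0-based position
        "x-total-chunks": String(totalChunks),
        "x-chunk-size": String(chunk.size),
        "x-total-size": String(file.size),
        "x-file-name": fileName,
        "x-is-last-chunk": String(index === totalChunks - 1),
      },
    });

    if (!response.ok) throw new Error(`Chunk ${index} failed: ${response.status}`);
  }
}
```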
@@ -1,123 +0,0 @@
import { FastifyInstance, FastifyRequest } from "fastify";
import { z } from "zod";

import { FilesystemController } from "./controller";

export async function filesystemRoutes(app: FastifyInstance) {
  const filesystemController = new FilesystemController();

  app.addContentTypeParser("*", async (request: FastifyRequest, payload: any) => {
    return payload;
  });

  app.addContentTypeParser("application/json", async (request: FastifyRequest, payload: any) => {
    return payload;
  });

  app.put(
    "/filesystem/upload/:token",
    {
      bodyLimit: 1024 * 1024 * 1024 * 1024 * 1024, // 1PB limit
      schema: {
        tags: ["Filesystem"],
        operationId: "uploadToFilesystem",
        summary: "Upload file to filesystem storage",
        description: "Upload a file directly to the encrypted filesystem storage",
        params: z.object({
          token: z.string().describe("Upload token"),
        }),
        response: {
          200: z.object({
            message: z.string(),
          }),
          400: z.object({
            error: z.string(),
          }),
          500: z.object({
            error: z.string(),
          }),
        },
      },
    },
    filesystemController.upload.bind(filesystemController)
  );

  app.get(
    "/filesystem/download/:token",
    {
      bodyLimit: 1024 * 1024 * 1024 * 1024 * 1024, // 1PB limit
      schema: {
        tags: ["Filesystem"],
        operationId: "downloadFromFilesystem",
        summary: "Download file from filesystem storage",
        description: "Download a file directly from the encrypted filesystem storage",
        params: z.object({
          token: z.string().describe("Download token"),
        }),
        response: {
          200: z.string().describe("File content"),
          400: z.object({
            error: z.string(),
          }),
          500: z.object({
            error: z.string(),
          }),
        },
      },
    },
    filesystemController.download.bind(filesystemController)
  );

  app.get(
    "/filesystem/upload-progress/:fileId",
    {
      schema: {
        tags: ["Filesystem"],
        operationId: "getUploadProgress",
        summary: "Get chunked upload progress",
        description: "Get the progress of a chunked upload",
        params: z.object({
          fileId: z.string().describe("File ID"),
        }),
        response: {
          200: z.object({
            uploaded: z.number(),
            total: z.number(),
            percentage: z.number(),
          }),
          404: z.object({
            error: z.string(),
          }),
          500: z.object({
            error: z.string(),
          }),
        },
      },
    },
    filesystemController.getUploadProgress.bind(filesystemController)
  );

  app.delete(
    "/filesystem/cancel-upload/:fileId",
    {
      schema: {
        tags: ["Filesystem"],
        operationId: "cancelUpload",
        summary: "Cancel chunked upload",
        description: "Cancel an ongoing chunked upload",
        params: z.object({
          fileId: z.string().describe("File ID"),
        }),
        response: {
          200: z.object({
            message: z.string(),
          }),
          500: z.object({
            error: z.string(),
          }),
        },
      },
    },
    filesystemController.cancelUpload.bind(filesystemController)
  );
}
396
apps/server/src/modules/folder/controller.ts
Normal file
@@ -0,0 +1,396 @@
import { FastifyReply, FastifyRequest } from "fastify";

import { env } from "../../env";
import { prisma } from "../../shared/prisma";
import { ConfigService } from "../config/service";
import {
  CheckFolderSchema,
  ListFoldersSchema,
  MoveFolderSchema,
  RegisterFolderSchema,
  UpdateFolderSchema,
} from "./dto";
import { FolderService } from "./service";

export class FolderController {
  private folderService = new FolderService();
  private configService = new ConfigService();

  async registerFolder(request: FastifyRequest, reply: FastifyReply) {
    try {
      await request.jwtVerify();
      const userId = (request as any).user?.userId;
      if (!userId) {
        return reply.status(401).send({ error: "Unauthorized: a valid token is required to access this resource." });
      }

      const input = RegisterFolderSchema.parse(request.body);

      if (input.parentId) {
        const parentFolder = await prisma.folder.findFirst({
          where: { id: input.parentId, userId },
        });
        if (!parentFolder) {
          return reply.status(400).send({ error: "Parent folder not found or access denied" });
        }
      }

      // Check for duplicates and auto-rename if necessary
      const { generateUniqueFolderName } = await import("../../utils/file-name-generator.js");
      const uniqueName = await generateUniqueFolderName(input.name, userId, input.parentId);

      const folderRecord = await prisma.folder.create({
        data: {
          name: uniqueName,
          description: input.description,
          objectName: input.objectName,
          parentId: input.parentId,
          userId,
        },
        include: {
          _count: {
            select: {
              files: true,
              children: true,
            },
          },
        },
      });

      const totalSize = await this.folderService.calculateFolderSize(folderRecord.id, userId);

      const folderResponse = {
        id: folderRecord.id,
        name: folderRecord.name,
        description: folderRecord.description,
        objectName: folderRecord.objectName,
        parentId: folderRecord.parentId,
        userId: folderRecord.userId,
        createdAt: folderRecord.createdAt,
        updatedAt: folderRecord.updatedAt,
        totalSize: totalSize.toString(),
        _count: folderRecord._count,
      };

      return reply.status(201).send({
        folder: folderResponse,
        message: "Folder registered successfully.",
      });
    } catch (error: any) {
      console.error("Error in registerFolder:", error);
      return reply.status(400).send({ error: error.message });
    }
  }

  async checkFolder(request: FastifyRequest, reply: FastifyReply) {
    try {
      await request.jwtVerify();
      const userId = (request as any).user?.userId;
      if (!userId) {
        return reply.status(401).send({
          error: "Unauthorized: a valid token is required to access this resource.",
          code: "unauthorized",
        });
      }

      const input = CheckFolderSchema.parse(request.body);

      if (input.name.length > 100) {
        return reply.status(400).send({
          code: "folderNameTooLong",
          error: "Folder name exceeds maximum length of 100 characters",
          details: "100",
        });
      }

      const existingFolder = await prisma.folder.findFirst({
        where: {
          name: input.name,
          parentId: input.parentId || null,
          userId,
        },
      });

      if (existingFolder) {
        return reply.status(400).send({
          error: "A folder with this name already exists in this location",
          code: "duplicateFolderName",
        });
      }

      return reply.status(201).send({
        message: "Folder checks succeeded.",
      });
    } catch (error: any) {
      console.error("Error in checkFolder:", error);
      return reply.status(400).send({ error: error.message });
    }
  }

  async listFolders(request: FastifyRequest, reply: FastifyReply) {
    try {
      await request.jwtVerify();
      const userId = (request as any).user?.userId;
      if (!userId) {
        return reply.status(401).send({ error: "Unauthorized: a valid token is required to access this resource." });
      }

      const input = ListFoldersSchema.parse(request.query);
      const { parentId, recursive: recursiveStr } = input;
      const recursive = recursiveStr === "false" ? false : true;

      let folders: any[];

      if (recursive) {
        folders = await prisma.folder.findMany({
          where: { userId },
          include: {
            _count: {
              select: {
                files: true,
                children: true,
              },
            },
          },
          orderBy: [{ name: "asc" }],
        });
      } else {
        // Get only direct children of specified parent
        const targetParentId = parentId === "null" || parentId === "" || !parentId ? null : parentId;
        folders = await prisma.folder.findMany({
          where: {
            userId,
            parentId: targetParentId,
          },
          include: {
            _count: {
              select: {
                files: true,
                children: true,
              },
            },
          },
          orderBy: [{ name: "asc" }],
        });
      }

      const foldersResponse = await Promise.all(
        folders.map(async (folder) => {
          const totalSize = await this.folderService.calculateFolderSize(folder.id, userId);
          return {
            id: folder.id,
            name: folder.name,
            description: folder.description,
            objectName: folder.objectName,
            parentId: folder.parentId,
            userId: folder.userId,
            createdAt: folder.createdAt,
            updatedAt: folder.updatedAt,
            totalSize: totalSize.toString(),
            _count: folder._count,
          };
        })
      );

      return reply.send({ folders: foldersResponse });
    } catch (error: any) {
      console.error("Error in listFolders:", error);
      return reply.status(500).send({ error: error.message });
    }
  }

  async updateFolder(request: FastifyRequest, reply: FastifyReply) {
    try {
      await request.jwtVerify();
      const { id } = request.params as { id: string };
      const userId = (request as any).user?.userId;

      if (!userId) {
        return reply.status(401).send({
          error: "Unauthorized: a valid token is required to access this resource.",
        });
      }

      const updateData = UpdateFolderSchema.parse(request.body);

      const folderRecord = await prisma.folder.findUnique({ where: { id } });

      if (!folderRecord) {
        return reply.status(404).send({ error: "Folder not found." });
      }

      if (folderRecord.userId !== userId) {
        return reply.status(403).send({ error: "Access denied." });
      }

      // If renaming the folder, check for duplicates and auto-rename if necessary
      if (updateData.name && updateData.name !== folderRecord.name) {
        const { generateUniqueFolderName } = await import("../../utils/file-name-generator.js");
        const uniqueName = await generateUniqueFolderName(updateData.name, userId, folderRecord.parentId, id);
        updateData.name = uniqueName;
      }

      const updatedFolder = await prisma.folder.update({
        where: { id },
        data: updateData,
        include: {
          _count: {
            select: {
              files: true,
              children: true,
            },
          },
        },
      });

      const totalSize = await this.folderService.calculateFolderSize(updatedFolder.id, userId);

      const folderResponse = {
        id: updatedFolder.id,
        name: updatedFolder.name,
        description: updatedFolder.description,
        objectName: updatedFolder.objectName,
        parentId: updatedFolder.parentId,
        userId: updatedFolder.userId,
        createdAt: updatedFolder.createdAt,
        updatedAt: updatedFolder.updatedAt,
        totalSize: totalSize.toString(),
        _count: updatedFolder._count,
      };

      return reply.send({
        folder: folderResponse,
        message: "Folder updated successfully.",
      });
    } catch (error: any) {
      console.error("Error in updateFolder:", error);
      return reply.status(400).send({ error: error.message });
    }
  }

  async moveFolder(request: FastifyRequest, reply: FastifyReply) {
    try {
      await request.jwtVerify();
      const userId = (request as any).user?.userId;

      if (!userId) {
        return reply.status(401).send({ error: "Unauthorized: a valid token is required to access this resource." });
      }

      const { id } = request.params as { id: string };
      const body = request.body as any;

      const input = {
        parentId: body.parentId === undefined ? null : body.parentId,
      };

      const validatedInput = MoveFolderSchema.parse(input);

      const existingFolder = await prisma.folder.findFirst({
        where: { id, userId },
      });

      if (!existingFolder) {
        return reply.status(404).send({ error: "Folder not found." });
      }

      if (validatedInput.parentId) {
        const parentFolder = await prisma.folder.findFirst({
          where: { id: validatedInput.parentId, userId },
        });
        if (!parentFolder) {
          return reply.status(400).send({ error: "Parent folder not found or access denied" });
        }

        if (await this.isDescendantOf(validatedInput.parentId, id, userId)) {
          return reply.status(400).send({ error: "Cannot move a folder into itself or its subfolders" });
        }
      }

      const updatedFolder = await prisma.folder.update({
        where: { id },
        data: { parentId: validatedInput.parentId },
        include: {
          _count: {
            select: {
              files: true,
              children: true,
            },
          },
        },
      });

      const totalSize = await this.folderService.calculateFolderSize(updatedFolder.id, userId);

      const folderResponse = {
        id: updatedFolder.id,
        name: updatedFolder.name,
        description: updatedFolder.description,
        objectName: updatedFolder.objectName,
        parentId: updatedFolder.parentId,
        userId: updatedFolder.userId,
        createdAt: updatedFolder.createdAt,
        updatedAt: updatedFolder.updatedAt,
        totalSize: totalSize.toString(),
        _count: updatedFolder._count,
      };

      return reply.send({
        folder: folderResponse,
        message: "Folder moved successfully.",
      });
    } catch (error: any) {
      console.error("Error in moveFolder:", error);
      const statusCode = error.message === "Folder not found" ? 404 : 400;
      return reply.status(statusCode).send({ error: error.message });
    }
  }

  async deleteFolder(request: FastifyRequest, reply: FastifyReply) {
    try {
      await request.jwtVerify();
      const { id } = request.params as { id: string };
      if (!id) {
        return reply.status(400).send({ error: "The 'id' parameter is required." });
      }

      const folderRecord = await prisma.folder.findUnique({ where: { id } });
      if (!folderRecord) {
        return reply.status(404).send({ error: "Folder not found." });
      }

      const userId = (request as any).user?.userId;
      if (folderRecord.userId !== userId) {
        return reply.status(403).send({ error: "Access denied." });
      }

      await this.folderService.deleteObject(folderRecord.objectName);

      await prisma.folder.delete({ where: { id } });

      return reply.send({ message: "Folder deleted successfully." });
    } catch (error) {
      console.error("Error in deleteFolder:", error);
      return reply.status(500).send({ error: "Internal server error." });
    }
  }

  private async isDescendantOf(potentialDescendantId: string, ancestorId: string, userId: string): Promise<boolean> {
    let currentId: string | null = potentialDescendantId;

    while (currentId) {
      if (currentId === ancestorId) {
        return true;
      }

      const folder: { parentId: string | null } | null = await prisma.folder.findFirst({
        where: { id: currentId, userId },
      });

      if (!folder) break;
      currentId = folder.parentId;
    }

    return false;
  }
}
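The `isDescendantOf` walk is what prevents move cycles: before reparenting, the controller climbs the prospective parent's ancestor chain looking for the folder being moved. A minimal standalone sketch of the same walk over an in-memory map (the `parents` fixture is invented for illustration):

```ts
// Illustrative only: the same parent-chain walk as isDescendantOf,
// over a plain Map instead of Prisma. Fixture data is hypothetical.
const parents = new Map<string, string | null>([
  ["root", null],
  ["docs", "root"],
  ["invoices", "docs"],
]);

function isDescendantOf(potentialDescendantId: string, ancestorId: string): boolean {
  let currentId: string | null = potentialDescendantId;
  while (currentId) {
    if (currentId === ancestorId) return true; // found the ancestor on the chain
    currentId = parents.get(currentId) ?? null; // climb one level
  }
  return false;
}

console.log(isDescendantOf("invoices", "root")); // true -> moving "root" under "invoices" must be rejected
console.log(isDescendantOf("docs", "invoices")); // false -> moving "invoices" under "docs" is fine
```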
56
apps/server/src/modules/folder/dto.ts
Normal file
@@ -0,0 +1,56 @@
import { z } from "zod";

export const RegisterFolderSchema = z.object({
  name: z.string().min(1, "Folder name is required"),
  description: z.string().optional(),
  objectName: z.string().min(1, "objectName is required"),
  parentId: z.string().optional(),
});

export const UpdateFolderSchema = z.object({
  name: z.string().optional(),
  description: z.string().optional().nullable(),
});

export const MoveFolderSchema = z.object({
  parentId: z.string().nullable(),
});

export const FolderResponseSchema = z.object({
  id: z.string(),
  name: z.string(),
  description: z.string().nullable(),
  parentId: z.string().nullable(),
  userId: z.string(),
  createdAt: z.date(),
  updatedAt: z.date(),
  totalSize: z
    .bigint()
    .transform((val) => val.toString())
    .optional(),
  _count: z
    .object({
      files: z.number(),
      children: z.number(),
    })
    .optional(),
});

export const CheckFolderSchema = z.object({
  name: z.string().min(1, "Folder name is required"),
  description: z.string().optional(),
  objectName: z.string().min(1, "objectName is required"),
  parentId: z.string().optional(),
});

export const ListFoldersSchema = z.object({
  parentId: z.string().optional(),
  recursive: z.string().optional().default("true"),
});

export type RegisterFolderInput = z.infer<typeof RegisterFolderSchema>;
export type UpdateFolderInput = z.infer<typeof UpdateFolderSchema>;
export type MoveFolderInput = z.infer<typeof MoveFolderSchema>;
export type CheckFolderInput = z.infer<typeof CheckFolderSchema>;
export type ListFoldersInput = z.infer<typeof ListFoldersSchema>;
export type FolderResponse = z.infer<typeof FolderResponseSchema>;
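As a quick illustration of how these schemas behave at the boundaries (the inputs below are made up):

```ts
import { RegisterFolderSchema, ListFoldersSchema, FolderResponseSchema } from "./dto";

// Valid input passes through unchanged.
const input = RegisterFolderSchema.parse({ name: "Reports", objectName: "u1/reports" });

// ListFoldersSchema keeps `recursive` as a string because it arrives via the
// querystring; the controller compares it against "false" explicitly.
const query = ListFoldersSchema.parse({}); // { recursive: "true" }

// FolderResponseSchema's transform turns Prisma's BigInt sizes into strings,
// which keeps JSON.stringify from throwing on BigInt.
const parsed = FolderResponseSchema.parse({
  id: "f1",
  name: "Reports",
  description: null,
  parentId: null,
  userId: "u1",
  createdAt: new Date(),
  updatedAt: new Date(),
  totalSize: 1024n,
});
console.log(parsed.totalSize); // "1024"
```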
245
apps/server/src/modules/folder/routes.ts
Normal file
@@ -0,0 +1,245 @@
import { FastifyInstance, FastifyReply, FastifyRequest } from "fastify";
import { z } from "zod";

import { FolderController } from "./controller";
import {
  CheckFolderSchema,
  FolderResponseSchema,
  ListFoldersSchema,
  MoveFolderSchema,
  RegisterFolderSchema,
  UpdateFolderSchema,
} from "./dto";

export async function folderRoutes(app: FastifyInstance) {
  const folderController = new FolderController();

  const preValidation = async (request: FastifyRequest, reply: FastifyReply) => {
    try {
      await request.jwtVerify();
    } catch (err) {
      console.error(err);
      reply.status(401).send({ error: "Invalid or missing token." });
    }
  };

  app.post(
    "/folders",
    {
      schema: {
        tags: ["Folder"],
        operationId: "registerFolder",
        summary: "Register Folder Metadata",
        description: "Registers folder metadata in the database",
        body: RegisterFolderSchema,
        response: {
          201: z.object({
            folder: z.object({
              id: z.string().describe("The folder ID"),
              name: z.string().describe("The folder name"),
              description: z.string().nullable().describe("The folder description"),
              parentId: z.string().nullable().describe("The parent folder ID"),
              userId: z.string().describe("The user ID"),
              createdAt: z.date().describe("The folder creation date"),
              updatedAt: z.date().describe("The folder last update date"),
              totalSize: z.string().optional().describe("The total size of the folder"),
              _count: z
                .object({
                  files: z.number().describe("Number of files in folder"),
                  children: z.number().describe("Number of subfolders"),
                })
                .optional()
                .describe("Count statistics"),
            }),
            message: z.string().describe("The folder registration message"),
          }),
          400: z.object({ error: z.string().describe("Error message") }),
          401: z.object({ error: z.string().describe("Error message") }),
        },
      },
    },
    folderController.registerFolder.bind(folderController)
  );

  app.post(
    "/folders/check",
    {
      preValidation,
      schema: {
        tags: ["Folder"],
        operationId: "checkFolder",
        summary: "Check Folder validity",
        description: "Checks if the folder meets all requirements",
        body: CheckFolderSchema,
        response: {
          201: z.object({
            message: z.string().describe("The folder check success message"),
          }),
          400: z.object({
            error: z.string().describe("Error message"),
            code: z.string().optional().describe("Error code"),
            details: z.string().optional().describe("Error details"),
          }),
          401: z.object({
            error: z.string().describe("Error message"),
            code: z.string().optional().describe("Error code"),
          }),
        },
      },
    },
    folderController.checkFolder.bind(folderController)
  );

  app.get(
    "/folders",
    {
      preValidation,
      schema: {
        tags: ["Folder"],
        operationId: "listFolders",
        summary: "List Folders",
        description: "Lists user folders recursively by default, optionally filtered by folder",
        querystring: ListFoldersSchema,
        response: {
          200: z.object({
            folders: z.array(
              z.object({
                id: z.string().describe("The folder ID"),
                name: z.string().describe("The folder name"),
                description: z.string().nullable().describe("The folder description"),
                parentId: z.string().nullable().describe("The parent folder ID"),
                userId: z.string().describe("The user ID"),
                createdAt: z.date().describe("The folder creation date"),
                updatedAt: z.date().describe("The folder last update date"),
                totalSize: z.string().optional().describe("The total size of the folder"),
                _count: z
                  .object({
                    files: z.number().describe("Number of files in folder"),
                    children: z.number().describe("Number of subfolders"),
                  })
                  .optional()
                  .describe("Count statistics"),
              })
            ),
          }),
          500: z.object({ error: z.string().describe("Error message") }),
        },
      },
    },
    folderController.listFolders.bind(folderController)
  );

  app.patch(
    "/folders/:id",
    {
      preValidation,
      schema: {
        tags: ["Folder"],
        operationId: "updateFolder",
        summary: "Update Folder Metadata",
        description: "Updates folder metadata in the database",
        params: z.object({
          id: z.string().min(1, "The folder id is required").describe("The folder ID"),
        }),
        body: UpdateFolderSchema,
        response: {
          200: z.object({
            folder: z.object({
              id: z.string().describe("The folder ID"),
              name: z.string().describe("The folder name"),
              description: z.string().nullable().describe("The folder description"),
              parentId: z.string().nullable().describe("The parent folder ID"),
              userId: z.string().describe("The user ID"),
              createdAt: z.date().describe("The folder creation date"),
              updatedAt: z.date().describe("The folder last update date"),
              totalSize: z.string().optional().describe("The total size of the folder"),
              _count: z
                .object({
                  files: z.number().describe("Number of files in folder"),
                  children: z.number().describe("Number of subfolders"),
                })
                .optional()
                .describe("Count statistics"),
            }),
            message: z.string().describe("Success message"),
          }),
          400: z.object({ error: z.string().describe("Error message") }),
          401: z.object({ error: z.string().describe("Error message") }),
          403: z.object({ error: z.string().describe("Error message") }),
          404: z.object({ error: z.string().describe("Error message") }),
        },
      },
    },
    folderController.updateFolder.bind(folderController)
  );

  app.put(
    "/folders/:id/move",
    {
      preValidation,
      schema: {
        tags: ["Folder"],
        operationId: "moveFolder",
        summary: "Move Folder",
        description: "Moves a folder to a different parent folder",
        params: z.object({
          id: z.string().min(1, "The folder id is required").describe("The folder ID"),
        }),
        body: MoveFolderSchema,
        response: {
          200: z.object({
            folder: z.object({
              id: z.string().describe("The folder ID"),
              name: z.string().describe("The folder name"),
              description: z.string().nullable().describe("The folder description"),
              parentId: z.string().nullable().describe("The parent folder ID"),
              userId: z.string().describe("The user ID"),
              createdAt: z.date().describe("The folder creation date"),
              updatedAt: z.date().describe("The folder last update date"),
              totalSize: z.string().optional().describe("The total size of the folder"),
              _count: z
                .object({
                  files: z.number().describe("Number of files in folder"),
                  children: z.number().describe("Number of subfolders"),
                })
                .optional()
                .describe("Count statistics"),
            }),
            message: z.string().describe("Success message"),
          }),
          400: z.object({ error: z.string().describe("Error message") }),
          401: z.object({ error: z.string().describe("Error message") }),
          403: z.object({ error: z.string().describe("Error message") }),
          404: z.object({ error: z.string().describe("Error message") }),
        },
      },
    },
    folderController.moveFolder.bind(folderController)
  );

  app.delete(
    "/folders/:id",
    {
      preValidation,
      schema: {
        tags: ["Folder"],
        operationId: "deleteFolder",
        summary: "Delete Folder",
        description: "Deletes a folder and all its contents",
        params: z.object({
          id: z.string().min(1, "The folder id is required").describe("The folder ID"),
        }),
        response: {
          200: z.object({
            message: z.string().describe("The folder deletion message"),
          }),
          400: z.object({ error: z.string().describe("Error message") }),
          401: z.object({ error: z.string().describe("Error message") }),
          404: z.object({ error: z.string().describe("Error message") }),
          500: z.object({ error: z.string().describe("Error message") }),
        },
      },
    },
    folderController.deleteFolder.bind(folderController)
  );
}
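A quick end-to-end sketch of how a client might drive these routes; the base URL, token, and IDs are placeholders, and the Bearer-token auth header is an assumption about how `jwtVerify` is fed:

```ts
// Hypothetical client flow against the folder routes above.
const BASE = "http://localhost:3333";
const headers = { "Content-Type": "application/json", Authorization: "Bearer <TOKEN>" };

// 1. Register a folder at the root.
const created = await fetch(`${BASE}/folders`, {
  method: "POST",
  headers,
  body: JSON.stringify({ name: "Reports", objectName: "u1/reports" }),
}).then((r) => r.json());

// 2. Move it under another folder (body matches MoveFolderSchema: parentId may be null).
await fetch(`${BASE}/folders/${created.folder.id}/move`, {
  method: "PUT",
  headers,
  body: JSON.stringify({ parentId: "some-parent-id" }),
});

// 3. List only the direct children of that parent (recursive=false).
const { folders } = await fetch(`${BASE}/folders?recursive=false&parentId=some-parent-id`, {
  headers,
}).then((r) => r.json());
```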
84
apps/server/src/modules/folder/service.ts
Normal file
@@ -0,0 +1,84 @@
import { S3StorageProvider } from "../../providers/s3-storage.provider";
import { prisma } from "../../shared/prisma";
import { StorageProvider } from "../../types/storage";

export class FolderService {
  private storageProvider: StorageProvider;

  constructor() {
    // Always use S3 (Garage internal or external S3)
    this.storageProvider = new S3StorageProvider();
  }

  async getPresignedPutUrl(objectName: string, expires: number): Promise<string> {
    try {
      return await this.storageProvider.getPresignedPutUrl(objectName, expires);
    } catch (err) {
      console.error("Error in presignedPutObject:", err);
      throw err;
    }
  }

  async getPresignedGetUrl(objectName: string, expires: number, folderName?: string): Promise<string> {
    try {
      return await this.storageProvider.getPresignedGetUrl(objectName, expires, folderName);
    } catch (err) {
      console.error("Error in presignedGetObject:", err);
      throw err;
    }
  }

  async deleteObject(objectName: string): Promise<void> {
    try {
      await this.storageProvider.deleteObject(objectName);
    } catch (err) {
      console.error("Error in removeObject:", err);
      throw err;
    }
  }

  async getAllFilesInFolder(folderId: string, userId: string, basePath: string = ""): Promise<any[]> {
    const files = await prisma.file.findMany({
      where: { folderId, userId },
    });

    const subfolders = await prisma.folder.findMany({
      where: { parentId: folderId, userId },
      select: { id: true, name: true },
    });

    let allFiles = files.map((file: any) => ({
      ...file,
      relativePath: basePath + file.name,
    }));

    for (const subfolder of subfolders) {
      const subfolderPath = basePath + subfolder.name + "/";
      const subfolderFiles = await this.getAllFilesInFolder(subfolder.id, userId, subfolderPath);
      allFiles = [...allFiles, ...subfolderFiles];
    }

    return allFiles;
  }

  async calculateFolderSize(folderId: string, userId: string): Promise<bigint> {
    const files = await prisma.file.findMany({
      where: { folderId, userId },
      select: { size: true },
    });

    const subfolders = await prisma.folder.findMany({
      where: { parentId: folderId, userId },
      select: { id: true },
    });

    let totalSize = files.reduce((sum, file) => sum + file.size, BigInt(0));

    for (const subfolder of subfolders) {
      const subfolderSize = await this.calculateFolderSize(subfolder.id, userId);
      totalSize += subfolderSize;
    }

    return totalSize;
  }
}
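`calculateFolderSize` issues a pair of queries per folder in the subtree and rolls sizes up from the leaves. A minimal in-memory sketch of the same roll-up makes the recursion concrete (fixture data is invented):

```ts
// Hypothetical in-memory model of the same recursion: sizes roll up
// from leaves to the root, one level at a time.
type Node = { size: bigint; children: Node[] };

function folderSize(node: Node): bigint {
  let total = node.size; // direct files at this level
  for (const child of node.children) {
    total += folderSize(child); // recurse into each subfolder
  }
  return total;
}

const tree: Node = {
  size: 100n,
  children: [
    { size: 50n, children: [] },
    { size: 25n, children: [{ size: 5n, children: [] }] },
  ],
};

console.log(folderSize(tree)); // 180n
```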
@@ -318,7 +318,12 @@ export class ReverseShareController {
       }
 
       const { fileId } = request.params as { fileId: string };
-      const result = await this.reverseShareService.downloadReverseShareFile(fileId, userId);
+
+      // Pass request context for internal storage proxy URLs
+      const requestContext = { protocol: "https", host: "localhost" }; // Simplified - frontend will handle the real URL
+
+      const result = await this.reverseShareService.downloadReverseShareFile(fileId, userId, requestContext);
+
       return reply.send(result);
     } catch (error: any) {
       if (error.message === "File not found") {
@@ -461,12 +466,8 @@ export class ReverseShareController {
         return reply.status(401).send({ error: "Unauthorized" });
       }
 
-      console.log(`Copy to my files: User ${userId} copying file ${fileId}`);
-
       const file = await this.reverseShareService.copyReverseShareFileToUserFiles(fileId, userId);
 
-      console.log(`Copy to my files: Successfully copied file ${fileId}`);
-
      return reply.send({ file, message: "File copied to your files successfully" });
     } catch (error: any) {
       console.error(`Copy to my files: Error:`, error.message);
@@ -484,4 +485,17 @@ export class ReverseShareController {
       return reply.status(500).send({ error: "Internal server error" });
     }
   }
+
+  async getReverseShareMetadataByAlias(request: FastifyRequest, reply: FastifyReply) {
+    try {
+      const { alias } = request.params as { alias: string };
+      const metadata = await this.reverseShareService.getReverseShareMetadataByAlias(alias);
+      return reply.send(metadata);
+    } catch (error: any) {
+      if (error.message === "Reverse share not found") {
+        return reply.status(404).send({ error: error.message });
+      }
+      return reply.status(400).send({ error: error.message });
+    }
+  }
 }
@@ -401,6 +401,12 @@ export async function reverseShareRoutes(app: FastifyInstance) {
           url: z.string().describe("Presigned download URL - expires after 1 hour"),
           expiresIn: z.number().describe("URL expiration time in seconds (3600 = 1 hour)"),
         }),
+        202: z.object({
+          queued: z.boolean().describe("Download was queued due to memory constraints"),
+          downloadId: z.string().describe("Download identifier for tracking"),
+          message: z.string().describe("Queue status message"),
+          estimatedWaitTime: z.number().describe("Estimated wait time in seconds"),
+        }),
         401: z.object({ error: z.string() }),
         404: z.object({ error: z.string() }),
       },
@@ -586,4 +592,32 @@ export async function reverseShareRoutes(app: FastifyInstance) {
     },
     reverseShareController.copyFileToUserFiles.bind(reverseShareController)
   );
+
+  app.get(
+    "/reverse-shares/alias/:alias/metadata",
+    {
+      schema: {
+        tags: ["Reverse Share"],
+        operationId: "getReverseShareMetadataByAlias",
+        summary: "Get reverse share metadata by alias for Open Graph",
+        description: "Get lightweight metadata for a reverse share by alias, used for social media previews",
+        params: z.object({
+          alias: z.string().describe("Alias of the reverse share"),
+        }),
+        response: {
+          200: z.object({
+            name: z.string().nullable(),
+            description: z.string().nullable(),
+            totalFiles: z.number(),
+            hasPassword: z.boolean(),
+            isExpired: z.boolean(),
+            isInactive: z.boolean(),
+            maxFiles: z.number().nullable(),
+          }),
+          404: z.object({ error: z.string() }),
+        },
+      },
+    },
+    reverseShareController.getReverseShareMetadataByAlias.bind(reverseShareController)
+  );
 }
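A sketch of consuming the new metadata endpoint, e.g. from an Open Graph renderer; host and alias are placeholders, and the response type mirrors the 200 schema above:

```ts
// Placeholder host and alias.
const res = await fetch("http://localhost:3333/reverse-shares/alias/my-alias/metadata");

if (res.status === 404) {
  console.log("No reverse share under that alias");
} else {
  const meta: {
    name: string | null;
    description: string | null;
    totalFiles: number;
    hasPassword: boolean;
    isExpired: boolean;
    isInactive: boolean;
    maxFiles: number | null;
  } = await res.json();

  // e.g. feed into an og:title tag; fall back when the share is unnamed
  const title = meta.name ?? "Reverse share";
  console.log(title, `${meta.totalFiles} file(s)`);
}
```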
@@ -1,7 +1,9 @@
 import { PrismaClient } from "@prisma/client";
 
+import { env } from "../../env";
 import { EmailService } from "../email/service";
 import { FileService } from "../file/service";
+import { UserService } from "../user/service";
 import {
   CreateReverseShareInput,
   ReverseShareResponseSchema,
@@ -41,6 +43,19 @@ const prisma = new PrismaClient();
 export class ReverseShareService {
   private reverseShareRepository = new ReverseShareRepository();
   private fileService = new FileService();
   private emailService = new EmailService();
+  private userService = new UserService();
+
+  private uploadSessions = new Map<
+    string,
+    {
+      reverseShareId: string;
+      uploaderName: string;
+      uploaderEmail?: string;
+      files: string[];
+      timeout: NodeJS.Timeout;
+    }
+  >();
 
   async createReverseShare(data: CreateReverseShareInput, creatorId: string) {
     const reverseShare = await this.reverseShareRepository.create(data, creatorId);
@@ -212,10 +227,22 @@ export class ReverseShareService {
       }
     }
 
-    const expires = 3600; // 1 hour
-    const url = await this.fileService.getPresignedPutUrl(objectName, expires);
+    const expires = parseInt(env.PRESIGNED_URL_EXPIRATION);
 
-    return { url, expiresIn: expires };
+    // Import storage config to check if using internal or external S3
+    const { isInternalStorage } = await import("../../config/storage.config.js");
+
+    if (isInternalStorage) {
+      // Internal storage: Use backend proxy for uploads (127.0.0.1 not accessible from client)
+      // Note: This would need request context, but reverse-shares are typically used by external users
+      // For now, we'll use presigned URLs and handle the error on the client side
+      const url = await this.fileService.getPresignedPutUrl(objectName, expires);
+      return { url, expiresIn: expires };
+    } else {
+      // External S3: Use presigned URLs directly (more efficient)
+      const url = await this.fileService.getPresignedPutUrl(objectName, expires);
+      return { url, expiresIn: expires };
+    }
   }
 
   async getPresignedUrlByAlias(alias: string, objectName: string, password?: string) {
@@ -242,10 +269,22 @@ export class ReverseShareService {
       }
     }
 
-    const expires = 3600; // 1 hour
-    const url = await this.fileService.getPresignedPutUrl(objectName, expires);
+    const expires = parseInt(env.PRESIGNED_URL_EXPIRATION);
 
-    return { url, expiresIn: expires };
+    // Import storage config to check if using internal or external S3
+    const { isInternalStorage } = await import("../../config/storage.config.js");
+
+    if (isInternalStorage) {
+      // Internal storage: Use backend proxy for uploads (127.0.0.1 not accessible from client)
+      // Note: This would need request context, but reverse-shares are typically used by external users
+      // For now, we'll use presigned URLs and handle the error on the client side
+      const url = await this.fileService.getPresignedPutUrl(objectName, expires);
+      return { url, expiresIn: expires };
+    } else {
+      // External S3: Use presigned URLs directly (more efficient)
+      const url = await this.fileService.getPresignedPutUrl(objectName, expires);
+      return { url, expiresIn: expires };
+    }
  }
 
   async registerFileUpload(reverseShareId: string, fileData: UploadToReverseShareInput, password?: string) {
@@ -295,6 +334,8 @@ export class ReverseShareService {
       size: BigInt(fileData.size),
     });
 
+    this.addFileToUploadSession(reverseShare, fileData);
+
     return this.formatFileResponse(file);
   }
 
@@ -345,10 +386,35 @@ export class ReverseShareService {
       size: BigInt(fileData.size),
     });
 
+    this.addFileToUploadSession(reverseShare, fileData);
+
     return this.formatFileResponse(file);
   }
 
-  async downloadReverseShareFile(fileId: string, creatorId: string) {
+  async getFileInfo(fileId: string, creatorId: string) {
+    const file = await this.reverseShareRepository.findFileById(fileId);
+    if (!file) {
+      throw new Error("File not found");
+    }
+
+    if (file.reverseShare.creatorId !== creatorId) {
+      throw new Error("Unauthorized to access this file");
+    }
+
+    return {
+      id: file.id,
+      name: file.name,
+      size: file.size,
+      objectName: file.objectName,
+      extension: file.extension,
+    };
+  }
+
+  async downloadReverseShareFile(
+    fileId: string,
+    creatorId: string,
+    requestContext?: { protocol: string; host: string }
+  ) {
     const file = await this.reverseShareRepository.findFileById(fileId);
     if (!file) {
       throw new Error("File not found");
@@ -359,9 +425,20 @@ export class ReverseShareService {
     }
 
     const fileName = file.name;
-    const expires = 3600; // 1 hour
-    const url = await this.fileService.getPresignedGetUrl(file.objectName, expires, fileName);
-    return { url, expiresIn: expires };
+    const expires = parseInt(env.PRESIGNED_URL_EXPIRATION);
+
+    // Import storage config to check if using internal or external S3
+    const { isInternalStorage } = await import("../../config/storage.config.js");
+
+    if (isInternalStorage) {
+      // Internal storage: Use frontend proxy (much simpler!)
+      const url = `/api/files/download?objectName=${encodeURIComponent(file.objectName)}`;
+      return { url, expiresIn: expires };
+    } else {
+      // External S3: Use presigned URLs directly (more efficient, no backend proxy)
+      const url = await this.fileService.getPresignedGetUrl(file.objectName, expires, fileName);
+      return { url, expiresIn: expires };
+    }
   }
 
   async deleteReverseShareFile(fileId: string, creatorId: string) {
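A small illustration of the two URL shapes this branch produces; hosts and object names below are invented:

```ts
// With internal (Garage) storage the client receives a relative proxy path;
// with external S3 it receives a time-limited presigned URL, e.g.:
//   https://s3.example.com/bucket/u1/reports/q3.pdf?X-Amz-Expires=3600&X-Amz-Signature=...
const objectName = "u1/reports/q3.pdf";
const proxyUrl = `/api/files/download?objectName=${encodeURIComponent(objectName)}`;
console.log(proxyUrl); // /api/files/download?objectName=u1%2Freports%2Fq3.pdf
```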
@@ -514,17 +591,7 @@ export class ReverseShareService {
       throw new Error(`File size exceeds the maximum allowed size of ${maxSizeMB}MB`);
     }
 
-    // Check if DEMO_MODE is enabled
-    const isDemoMode = env.DEMO_MODE === "true";
-
-    let maxTotalStorage: bigint;
-    if (isDemoMode) {
-      // In demo mode, limit all users to 200MB
-      maxTotalStorage = BigInt(200 * 1024 * 1024); // 200MB in bytes
-    } else {
-      // Normal behavior - use maxTotalStoragePerUser configuration
-      maxTotalStorage = BigInt(await configService.getValue("maxTotalStoragePerUser"));
-    }
+    const maxTotalStorage = BigInt(await configService.getValue("maxTotalStoragePerUser"));
 
     const userFiles = await prisma.file.findMany({
       where: { userId: creatorId },
@@ -540,76 +607,59 @@ export class ReverseShareService {
 
     const newObjectName = `${creatorId}/${Date.now()}-${file.name}`;
 
-    if (this.fileService.isFilesystemMode()) {
-      const { FilesystemStorageProvider } = await import("../../providers/filesystem-storage.provider.js");
-      const provider = FilesystemStorageProvider.getInstance();
+    // Copy file using S3 presigned URLs
+    const fileSizeMB = Number(file.size) / (1024 * 1024);
+    const needsStreaming = fileSizeMB > 100;
 
-      const sourcePath = provider.getFilePath(file.objectName);
-      const fs = await import("fs");
+    const downloadUrl = await this.fileService.getPresignedGetUrl(file.objectName, 300);
+    const uploadUrl = await this.fileService.getPresignedPutUrl(newObjectName, 300);
 
-      const targetPath = provider.getFilePath(newObjectName);
+    let retries = 0;
+    const maxRetries = 3;
+    let success = false;
 
-      const path = await import("path");
-      const targetDir = path.dirname(targetPath);
-      if (!fs.existsSync(targetDir)) {
-        fs.mkdirSync(targetDir, { recursive: true });
-      }
+    while (retries < maxRetries && !success) {
+      try {
+        const response = await fetch(downloadUrl, {
+          signal: AbortSignal.timeout(600000), // 10 minutes timeout
+        });
 
-      const { copyFile } = await import("fs/promises");
-      await copyFile(sourcePath, targetPath);
-    } else {
-      const fileSizeMB = Number(file.size) / (1024 * 1024);
-      const needsStreaming = fileSizeMB > 100;
-
-      const downloadUrl = await this.fileService.getPresignedGetUrl(file.objectName, 300);
-      const uploadUrl = await this.fileService.getPresignedPutUrl(newObjectName, 300);
-
-      let retries = 0;
-      const maxRetries = 3;
-      let success = false;
-
-      while (retries < maxRetries && !success) {
-        try {
-          const response = await fetch(downloadUrl, {
-            signal: AbortSignal.timeout(600000), // 10 minutes timeout
-          });
-
-          if (!response.ok) {
-            throw new Error(`Failed to download file: ${response.statusText}`);
-          }
-
-          if (!response.body) {
-            throw new Error("No response body received");
-          }
-
-          const uploadOptions: any = {
-            method: "PUT",
-            body: response.body,
-            headers: {
-              "Content-Type": "application/octet-stream",
-              "Content-Length": file.size.toString(),
-            },
-            signal: AbortSignal.timeout(600000), // 10 minutes timeout
-          };
-
-          const uploadResponse = await fetch(uploadUrl, uploadOptions);
-
-          if (!uploadResponse.ok) {
-            const errorText = await uploadResponse.text();
-            throw new Error(`Failed to upload file: ${uploadResponse.statusText} - ${errorText}`);
-          }
-
-          success = true;
-        } catch (error: any) {
-          retries++;
-
-          if (retries >= maxRetries) {
-            throw new Error(`Failed to copy file after ${maxRetries} attempts: ${error.message}`);
-          }
-
-          const delay = Math.min(1000 * Math.pow(2, retries - 1), 10000);
-          await new Promise((resolve) => setTimeout(resolve, delay));
+        if (!response.ok) {
+          throw new Error(`Failed to download file: ${response.statusText}`);
+        }
+
+        if (!response.body) {
+          throw new Error("No response body received");
+        }
+
+        const uploadOptions: any = {
+          method: "PUT",
+          body: response.body,
+          duplex: "half",
+          headers: {
+            "Content-Type": "application/octet-stream",
+            "Content-Length": file.size.toString(),
+          },
+          signal: AbortSignal.timeout(9600000), // 160 minutes timeout
+        };
+
+        const uploadResponse = await fetch(uploadUrl, uploadOptions);
+
+        if (!uploadResponse.ok) {
+          const errorText = await uploadResponse.text();
+          throw new Error(`Failed to upload file: ${uploadResponse.statusText} - ${errorText}`);
+        }
+
+        success = true;
+      } catch (error: any) {
+        retries++;
+
+        if (retries >= maxRetries) {
+          throw new Error(`Failed to copy file after ${maxRetries} attempts: ${error.message}`);
+        }
+
+        const delay = Math.min(1000 * Math.pow(2, retries - 1), 10000);
+        await new Promise((resolve) => setTimeout(resolve, delay));
       }
     }
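Two details worth noting in the new upload options: `duplex: "half"`, which Node's fetch requires when streaming a request body, and the much longer 160-minute timeout. The retry loop backs off exponentially with a 10 s cap; a quick trace of the delay formula:

```ts
// delay = min(1000 * 2^(retries - 1), 10000) milliseconds
for (let retries = 1; retries < 3; retries++) {
  const delay = Math.min(1000 * Math.pow(2, retries - 1), 10000);
  console.log(`after failure #${retries}: wait ${delay} ms`);
}
// after failure #1: wait 1000 ms
// after failure #2: wait 2000 ms
// (a third failure exhausts maxRetries and throws instead of waiting)
```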
@@ -637,6 +687,55 @@ export class ReverseShareService {
     };
   }
 
+  private generateSessionKey(reverseShareId: string, uploaderIdentifier: string): string {
+    return `${reverseShareId}-${uploaderIdentifier}`;
+  }
+
+  private async sendBatchFileUploadNotification(reverseShare: any, uploaderName: string, fileNames: string[]) {
+    try {
+      const creator = await this.userService.getUserById(reverseShare.creatorId);
+      const reverseShareName = reverseShare.name || "Unnamed Reverse Share";
+      const fileCount = fileNames.length;
+      const fileList = fileNames.join(", ");
+
+      await this.emailService.sendReverseShareBatchFileNotification(
+        creator.email,
+        reverseShareName,
+        fileCount,
+        fileList,
+        uploaderName
+      );
+    } catch (error) {
+      console.error("Failed to send reverse share batch file notification:", error);
+    }
+  }
+
+  private addFileToUploadSession(reverseShare: any, fileData: UploadToReverseShareInput) {
+    const uploaderIdentifier = fileData.uploaderEmail || fileData.uploaderName || "anonymous";
+    const sessionKey = this.generateSessionKey(reverseShare.id, uploaderIdentifier);
+    const uploaderName = fileData.uploaderName || "Someone";
+
+    const existingSession = this.uploadSessions.get(sessionKey);
+    if (existingSession) {
+      clearTimeout(existingSession.timeout);
+      existingSession.files.push(fileData.name);
+    } else {
+      this.uploadSessions.set(sessionKey, {
+        reverseShareId: reverseShare.id,
+        uploaderName,
+        uploaderEmail: fileData.uploaderEmail,
+        files: [fileData.name],
+        timeout: null as any,
+      });
+    }
+
+    const session = this.uploadSessions.get(sessionKey)!;
+    session.timeout = setTimeout(async () => {
+      await this.sendBatchFileUploadNotification(reverseShare, session.uploaderName, session.files);
+      this.uploadSessions.delete(sessionKey);
+    }, 5000);
+  }
+
   private formatReverseShareResponse(reverseShare: ReverseShareData) {
     const result = {
       id: reverseShare.id,
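`addFileToUploadSession` is a classic trailing debounce: every new file within a 5-second window resets the timer, so one email covers the whole batch. A minimal generic sketch of the same pattern (the `notify` callback and fixture file names are illustrative):

```ts
// Generic trailing-debounce batcher, same shape as the upload-session logic.
function createBatcher<T>(notify: (items: T[]) => void, quietMs = 5000) {
  let items: T[] = [];
  let timer: ReturnType<typeof setTimeout> | undefined;

  return (item: T) => {
    items.push(item);
    if (timer) clearTimeout(timer); // a new item restarts the quiet period
    timer = setTimeout(() => {
      notify(items); // fires once, quietMs after the last item
      items = [];
    }, quietMs);
  };
}

const addFile = createBatcher<string>((files) => console.log("notify:", files.join(", ")));
addFile("a.pdf");
addFile("b.pdf"); // arrives within the window -> single "notify: a.pdf, b.pdf"
```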
@@ -696,4 +795,30 @@ export class ReverseShareService {
       updatedAt: file.updatedAt.toISOString(),
     };
   }
+
+  async getReverseShareMetadataByAlias(alias: string) {
+    const reverseShare = await this.reverseShareRepository.findByAlias(alias);
+    if (!reverseShare) {
+      throw new Error("Reverse share not found");
+    }
+
+    // Check if reverse share is expired
+    const isExpired = reverseShare.expiration ? new Date(reverseShare.expiration) < new Date() : false;
+
+    // Check if inactive
+    const isInactive = !reverseShare.isActive;
+
+    const totalFiles = reverseShare.files?.length || 0;
+    const hasPassword = !!reverseShare.password;
+
+    return {
+      name: reverseShare.name,
+      description: reverseShare.description,
+      totalFiles,
+      hasPassword,
+      isExpired,
+      isInactive,
+      maxFiles: reverseShare.maxFiles,
+    };
+  }
 }
174
apps/server/src/modules/s3-storage/controller.ts
Normal file
@@ -0,0 +1,174 @@
|
||||
/**
|
||||
* S3 Storage Controller (Simplified)
|
||||
*
|
||||
* This controller handles uploads/downloads using S3-compatible storage (Garage).
|
||||
* It's much simpler than the filesystem controller because:
|
||||
* - Uses S3 multipart uploads (no chunk management needed)
|
||||
* - Uses presigned URLs (no streaming through Node.js)
|
||||
* - No memory management needed (Garage handles it)
|
||||
* - No encryption needed (Garage handles it)
|
||||
*
|
||||
* Replaces ~800 lines of complex code with ~100 lines of simple code.
|
||||
*/
|
||||
|
||||
import { FastifyReply, FastifyRequest } from "fastify";
|
||||
|
||||
import { S3StorageProvider } from "../../providers/s3-storage.provider";
|
||||
|
||||
export class S3StorageController {
|
||||
private storageProvider = new S3StorageProvider();
|
||||
|
||||
/**
|
||||
* Generate presigned upload URL
|
||||
* Client uploads directly to S3 (Garage)
|
||||
*/
|
||||
async getUploadUrl(request: FastifyRequest, reply: FastifyReply) {
|
||||
try {
|
||||
const { objectName, expires } = request.body as { objectName: string; expires?: number };
|
||||
|
||||
if (!objectName) {
|
||||
return reply.status(400).send({ error: "objectName is required" });
|
||||
}
|
||||
|
||||
const expiresIn = expires || 3600; // 1 hour default
|
||||
|
||||
// Import storage config to check if using internal or external S3
|
||||
const { isInternalStorage } = await import("../../config/storage.config.js");
|
||||
|
||||
let uploadUrl: string;
|
||||
|
||||
if (isInternalStorage) {
|
||||
// Internal storage: Use frontend proxy (much simpler!)
|
||||
uploadUrl = `/api/files/upload?objectName=${encodeURIComponent(objectName)}`;
|
||||
} else {
|
||||
// External S3: Use presigned URLs directly (more efficient)
|
||||
uploadUrl = await this.storageProvider.getPresignedPutUrl(objectName, expiresIn);
|
||||
}
|
||||
|
||||
return reply.status(200).send({
|
||||
uploadUrl,
|
||||
objectName,
|
||||
expiresIn,
|
||||
message: isInternalStorage ? "Upload via backend proxy" : "Upload directly to this URL using PUT request",
|
||||
});
|
||||
} catch (error) {
|
||||
console.error("[S3] Error generating upload URL:", error);
|
||||
return reply.status(500).send({ error: "Failed to generate upload URL" });
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Generate presigned download URL
|
||||
   * For internal storage: Uses backend proxy
   * For external S3: Uses presigned URLs directly
   */
  async getDownloadUrl(request: FastifyRequest, reply: FastifyReply) {
    try {
      const { objectName, expires, fileName } = request.query as {
        objectName: string;
        expires?: string;
        fileName?: string;
      };

      if (!objectName) {
        return reply.status(400).send({ error: "objectName is required" });
      }

      // Check if file exists
      const exists = await this.storageProvider.fileExists(objectName);
      if (!exists) {
        return reply.status(404).send({ error: "File not found" });
      }

      const expiresIn = expires ? parseInt(expires, 10) : 3600;

      // Import storage config to check if using internal or external S3
      const { isInternalStorage } = await import("../../config/storage.config.js");

      let downloadUrl: string;

      if (isInternalStorage) {
        // Internal storage: Use frontend proxy (much simpler!)
        downloadUrl = `/api/files/download?objectName=${encodeURIComponent(objectName)}`;
      } else {
        // External S3: Use presigned URLs directly (more efficient)
        downloadUrl = await this.storageProvider.getPresignedGetUrl(objectName, expiresIn, fileName);
      }

      return reply.status(200).send({
        downloadUrl,
        objectName,
        expiresIn,
        message: isInternalStorage ? "Download via backend proxy" : "Download directly from this URL",
      });
    } catch (error) {
      console.error("[S3] Error generating download URL:", error);
      return reply.status(500).send({ error: "Failed to generate download URL" });
    }
  }

  /**
   * Upload directly (for small files)
   * Receives file and uploads to S3
   */
  async upload(request: FastifyRequest, reply: FastifyReply) {
    try {
      // For large files, clients should use presigned URLs
      // This is just for backward compatibility or small files

      return reply.status(501).send({
        error: "Not implemented",
        message: "Use getUploadUrl endpoint for efficient uploads",
      });
    } catch (error) {
      console.error("[S3] Error in upload:", error);
      return reply.status(500).send({ error: "Upload failed" });
    }
  }

  /**
   * Delete object from S3
   */
  async deleteObject(request: FastifyRequest, reply: FastifyReply) {
    try {
      const { objectName } = request.params as { objectName: string };

      if (!objectName) {
        return reply.status(400).send({ error: "objectName is required" });
      }

      await this.storageProvider.deleteObject(objectName);

      return reply.status(200).send({
        message: "Object deleted successfully",
        objectName,
      });
    } catch (error) {
      console.error("[S3] Error deleting object:", error);
      return reply.status(500).send({ error: "Failed to delete object" });
    }
  }

  /**
   * Check if object exists
   */
  async checkExists(request: FastifyRequest, reply: FastifyReply) {
    try {
      const { objectName } = request.query as { objectName: string };

      if (!objectName) {
        return reply.status(400).send({ error: "objectName is required" });
      }

      const exists = await this.storageProvider.fileExists(objectName);

      return reply.status(200).send({
        exists,
        objectName,
      });
    } catch (error) {
      console.error("[S3] Error checking existence:", error);
      return reply.status(500).send({ error: "Failed to check existence" });
    }
  }
}
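The two-mode behavior above means a client never has to know which storage backend is active: it always asks the API for a `downloadUrl` and follows it. A minimal client-side sketch of that flow, assuming the API is mounted under `/api` (the base path and error handling are illustrative, not part of this diff):

```ts
// Sketch: resolve a download URL from the controller above, then fetch the bytes.
async function downloadObject(objectName: string): Promise<Blob> {
  const res = await fetch(`/api/s3/download-url?objectName=${encodeURIComponent(objectName)}`);
  if (!res.ok) throw new Error(`Failed to get download URL: ${res.status}`);

  const { downloadUrl } = (await res.json()) as { downloadUrl: string };

  // For internal storage this hits the backend proxy; for external S3 it goes
  // straight to the presigned URL. The caller doesn't need to care which.
  const file = await fetch(downloadUrl);
  if (!file.ok) throw new Error(`Download failed: ${file.status}`);
  return file.blob();
}
```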
112  apps/server/src/modules/s3-storage/routes.ts  Normal file
@@ -0,0 +1,112 @@
/**
 * S3 Storage Routes
 *
 * Simple routes for S3-based storage using presigned URLs.
 * Much simpler than filesystem routes - no chunk management, no streaming.
 */

import { FastifyInstance } from "fastify";
import { z } from "zod";

import { S3StorageController } from "./controller";

export async function s3StorageRoutes(app: FastifyInstance) {
  const controller = new S3StorageController();

  // Get presigned upload URL
  app.post(
    "/s3/upload-url",
    {
      schema: {
        tags: ["S3 Storage"],
        operationId: "getS3UploadUrl",
        summary: "Get presigned URL for upload",
        description: "Returns a presigned URL that clients can use to upload directly to S3",
        body: z.object({
          objectName: z.string().describe("Object name/path in S3"),
          expires: z.number().optional().describe("URL expiration in seconds (default: 3600)"),
        }),
        response: {
          200: z.object({
            uploadUrl: z.string(),
            objectName: z.string(),
            expiresIn: z.number(),
            message: z.string(),
          }),
        },
      },
    },
    controller.getUploadUrl.bind(controller)
  );

  // Get presigned download URL
  app.get(
    "/s3/download-url",
    {
      schema: {
        tags: ["S3 Storage"],
        operationId: "getS3DownloadUrl",
        summary: "Get presigned URL for download",
        description: "Returns a presigned URL that clients can use to download directly from S3",
        querystring: z.object({
          objectName: z.string().describe("Object name/path in S3"),
          expires: z.string().optional().describe("URL expiration in seconds (default: 3600)"),
          fileName: z.string().optional().describe("Optional filename for download"),
        }),
        response: {
          200: z.object({
            downloadUrl: z.string(),
            objectName: z.string(),
            expiresIn: z.number(),
            message: z.string(),
          }),
        },
      },
    },
    controller.getDownloadUrl.bind(controller)
  );

  // Delete object
  app.delete(
    "/s3/object/:objectName",
    {
      schema: {
        tags: ["S3 Storage"],
        operationId: "deleteS3Object",
        summary: "Delete object from S3",
        params: z.object({
          objectName: z.string().describe("Object name/path in S3"),
        }),
        response: {
          200: z.object({
            message: z.string(),
            objectName: z.string(),
          }),
        },
      },
    },
    controller.deleteObject.bind(controller)
  );

  // Check if object exists
  app.get(
    "/s3/exists",
    {
      schema: {
        tags: ["S3 Storage"],
        operationId: "checkS3ObjectExists",
        summary: "Check if object exists in S3",
        querystring: z.object({
          objectName: z.string().describe("Object name/path in S3"),
        }),
        response: {
          200: z.object({
            exists: z.boolean(),
            objectName: z.string(),
          }),
        },
      },
    },
    controller.checkExists.bind(controller)
  );
}
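These routes enable the presigned-upload round trip: the backend only signs URLs, and the heavy byte transfer goes straight to S3. A sketch of the client side, assuming the routes above are mounted under `/api` (the wrapper and error handling are illustrative assumptions):

```ts
// Sketch: ask the API for a presigned PUT URL, then upload the file
// directly to S3, bypassing the backend for the actual transfer.
async function uploadViaPresignedUrl(objectName: string, file: File): Promise<void> {
  const res = await fetch("/api/s3/upload-url", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ objectName, expires: 3600 }),
  });
  if (!res.ok) throw new Error(`Failed to get upload URL: ${res.status}`);

  const { uploadUrl } = (await res.json()) as { uploadUrl: string };

  // PUT the raw bytes to the presigned URL; S3 validates the signature.
  const put = await fetch(uploadUrl, { method: "PUT", body: file });
  if (!put.ok) throw new Error(`Upload failed: ${put.status}`);
}
```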
@@ -2,7 +2,7 @@ import { FastifyReply, FastifyRequest } from "fastify";

import {
  CreateShareSchema,
  UpdateShareFilesSchema,
  UpdateShareItemsSchema,
  UpdateSharePasswordSchema,
  UpdateShareRecipientsSchema,
  UpdateShareSchema,

@@ -116,7 +116,7 @@ export class ShareController {
    }
  }

  async addFiles(request: FastifyRequest, reply: FastifyReply) {
  async addItems(request: FastifyRequest, reply: FastifyReply) {
    try {
      await request.jwtVerify();
      const userId = (request as any).user?.userId;

@@ -125,9 +125,9 @@ export class ShareController {
      }

      const { shareId } = request.params as { shareId: string };
      const { files } = UpdateShareFilesSchema.parse(request.body);
      const { files, folders } = UpdateShareItemsSchema.parse(request.body);

      const share = await this.shareService.addFilesToShare(shareId, userId, files);
      const share = await this.shareService.addItemsToShare(shareId, userId, files || [], folders || []);
      return reply.send({ share });
    } catch (error: any) {
      if (error.message === "Share not found") {

@@ -136,14 +136,14 @@ export class ShareController {
      if (error.message === "Unauthorized to update this share") {
        return reply.status(401).send({ error: error.message });
      }
      if (error.message.startsWith("Files not found:")) {
      if (error.message.startsWith("Files not found:") || error.message.startsWith("Folders not found:")) {
        return reply.status(404).send({ error: error.message });
      }
      return reply.status(400).send({ error: error.message });
    }
  }

  async removeFiles(request: FastifyRequest, reply: FastifyReply) {
  async removeItems(request: FastifyRequest, reply: FastifyReply) {
    try {
      await request.jwtVerify();
      const userId = (request as any).user?.userId;

@@ -152,9 +152,9 @@ export class ShareController {
      }

      const { shareId } = request.params as { shareId: string };
      const { files } = UpdateShareFilesSchema.parse(request.body);
      const { files, folders } = UpdateShareItemsSchema.parse(request.body);

      const share = await this.shareService.removeFilesFromShare(shareId, userId, files);
      const share = await this.shareService.removeItemsFromShare(shareId, userId, files || [], folders || []);
      return reply.send({ share });
    } catch (error: any) {
      if (error.message === "Share not found") {

@@ -295,4 +295,17 @@ export class ShareController {
      return reply.status(400).send({ error: error.message });
    }
  }

  async getShareMetadataByAlias(request: FastifyRequest, reply: FastifyReply) {
    try {
      const { alias } = request.params as { alias: string };
      const metadata = await this.shareService.getShareMetadataByAlias(alias);
      return reply.send(metadata);
    } catch (error: any) {
      if (error.message === "Share not found") {
        return reply.status(404).send({ error: error.message });
      }
      return reply.status(400).send({ error: error.message });
    }
  }
}
@@ -1,19 +1,31 @@
import { z } from "zod";

export const CreateShareSchema = z.object({
  name: z.string().optional().describe("The share name"),
  description: z.string().optional().describe("The share description"),
  expiration: z
    .string()
    .datetime({
      message: "Data de expiração deve estar no formato ISO 8601 (ex: 2025-02-06T13:20:49Z)",
    })
    .optional(),
  files: z.array(z.string()).describe("The file IDs"),
  password: z.string().optional().describe("The share password"),
  maxViews: z.number().optional().nullable().describe("The maximum number of views"),
  recipients: z.array(z.string().email()).optional().describe("The recipient emails"),
});
export const CreateShareSchema = z
  .object({
    name: z.string().optional().describe("The share name"),
    description: z.string().optional().describe("The share description"),
    expiration: z
      .string()
      .datetime({
        message: "Data de expiração deve estar no formato ISO 8601 (ex: 2025-02-06T13:20:49Z)",
      })
      .optional(),
    files: z.array(z.string()).optional().describe("The file IDs"),
    folders: z.array(z.string()).optional().describe("The folder IDs"),
    password: z.string().optional().describe("The share password"),
    maxViews: z.number().optional().nullable().describe("The maximum number of views"),
    recipients: z.array(z.string().email()).optional().describe("The recipient emails"),
  })
  .refine(
    (data) => {
      const hasFiles = data.files && data.files.length > 0;
      const hasFolders = data.folders && data.folders.length > 0;
      return hasFiles || hasFolders;
    },
    {
      message: "At least one file or folder must be selected to create a share",
    }
  );

export const UpdateShareSchema = z.object({
  id: z.string(),

@@ -55,10 +67,30 @@ export const ShareResponseSchema = z.object({
      size: z.string().describe("The file size"),
      objectName: z.string().describe("The file object name"),
      userId: z.string().describe("The user ID"),
      folderId: z.string().nullable().describe("The folder ID containing this file"),
      createdAt: z.string().describe("The file creation date"),
      updatedAt: z.string().describe("The file update date"),
    })
  ),
  folders: z.array(
    z.object({
      id: z.string().describe("The folder ID"),
      name: z.string().describe("The folder name"),
      description: z.string().nullable().describe("The folder description"),
      objectName: z.string().describe("The folder object name"),
      parentId: z.string().nullable().describe("The parent folder ID"),
      userId: z.string().describe("The user ID"),
      totalSize: z.string().nullable().describe("The total size of folder contents"),
      createdAt: z.string().describe("The folder creation date"),
      updatedAt: z.string().describe("The folder update date"),
      _count: z
        .object({
          files: z.number().describe("Number of files in folder"),
          children: z.number().describe("Number of subfolders"),
        })
        .optional(),
    })
  ),
  recipients: z.array(
    z.object({
      id: z.string().describe("The recipient ID"),

@@ -74,9 +106,21 @@ export const UpdateSharePasswordSchema = z.object({
  password: z.string().nullable().describe("The new password. Send null to remove password"),
});

export const UpdateShareFilesSchema = z.object({
  files: z.array(z.string().min(1, "File ID is required").describe("The file IDs")),
});
export const UpdateShareItemsSchema = z
  .object({
    files: z.array(z.string().min(1, "File ID is required").describe("The file IDs")).optional(),
    folders: z.array(z.string().min(1, "Folder ID is required").describe("The folder IDs")).optional(),
  })
  .refine(
    (data) => {
      const hasFiles = data.files && data.files.length > 0;
      const hasFolders = data.folders && data.folders.length > 0;
      return hasFiles || hasFolders;
    },
    {
      message: "At least one file or folder must be provided",
    }
  );

export const UpdateShareRecipientsSchema = z.object({
  emails: z.array(z.string().email("Invalid email format").describe("The recipient emails")),
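The `.refine()` guard on both schemas rejects "no files and no folders" at validation time, before the request ever reaches the service layer. A quick sketch of how that behaves with `safeParse` (the schema below is a simplified stand-in for `UpdateShareItemsSchema`, for illustration only):

```ts
import { z } from "zod";

// Simplified stand-in to show the refine behavior.
const ItemsSchema = z
  .object({
    files: z.array(z.string()).optional(),
    folders: z.array(z.string()).optional(),
  })
  .refine((d) => (d.files?.length ?? 0) > 0 || (d.folders?.length ?? 0) > 0, {
    message: "At least one file or folder must be provided",
  });

console.log(ItemsSchema.safeParse({}).success);                        // false
console.log(ItemsSchema.safeParse({ files: [] }).success);             // false
console.log(ItemsSchema.safeParse({ folders: ["folder-1"] }).success); // true
```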
@@ -9,30 +9,42 @@ export interface IShareRepository {
    | (Share & {
        security: ShareSecurity;
        files: any[];
        folders: any[];
        recipients: { email: string }[];
      })
    | null
  >;
  findShareBySecurityId(securityId: string): Promise<(Share & { security: ShareSecurity; files: any[] }) | null>;
  findShareBySecurityId(
    securityId: string
  ): Promise<(Share & { security: ShareSecurity; files: any[]; folders: any[] }) | null>;
  findShareByAlias(
    alias: string
  ): Promise<(Share & { security: ShareSecurity; files: any[]; folders: any[]; recipients: any[] }) | null>;
  updateShare(id: string, data: Partial<Share>): Promise<Share>;
  updateShareSecurity(id: string, data: Partial<ShareSecurity>): Promise<ShareSecurity>;
  deleteShare(id: string): Promise<Share>;
  incrementViews(id: string): Promise<Share>;
  addFilesToShare(shareId: string, fileIds: string[]): Promise<void>;
  removeFilesFromShare(shareId: string, fileIds: string[]): Promise<void>;
  addFoldersToShare(shareId: string, folderIds: string[]): Promise<void>;
  removeFoldersFromShare(shareId: string, folderIds: string[]): Promise<void>;
  findFilesByIds(fileIds: string[]): Promise<any[]>;
  findFoldersByIds(folderIds: string[]): Promise<any[]>;
  addRecipients(shareId: string, emails: string[]): Promise<void>;
  removeRecipients(shareId: string, emails: string[]): Promise<void>;
  findSharesByUserId(userId: string): Promise<Share[]>;
  findSharesByUserId(
    userId: string
  ): Promise<(Share & { security: ShareSecurity; files: any[]; folders: any[]; recipients: any[]; alias: any })[]>;
}

export class PrismaShareRepository implements IShareRepository {
  async createShare(
    data: Omit<CreateShareInput, "password" | "maxViews"> & { securityId: string; creatorId: string }
  ): Promise<Share> {
    const { files, recipients, expiration, ...shareData } = data;
    const { files, folders, recipients, expiration, ...shareData } = data;

    const validFiles = (files ?? []).filter((id) => id && id.trim().length > 0);
    const validFolders = (folders ?? []).filter((id) => id && id.trim().length > 0);
    const validRecipients = (recipients ?? []).filter((email) => email && email.trim().length > 0);

    return prisma.share.create({

@@ -45,6 +57,12 @@ export class PrismaShareRepository implements IShareRepository {
              connect: validFiles.map((id) => ({ id })),
            }
          : undefined,
      folders:
        validFolders.length > 0
          ? {
              connect: validFolders.map((id) => ({ id })),
            }
          : undefined,
      recipients:
        validRecipients?.length > 0
          ? {

@@ -61,10 +79,28 @@ export class PrismaShareRepository implements IShareRepository {
    return prisma.share.findUnique({
      where: { id },
      include: {
        alias: true,
        security: true,
        files: true,
        folders: {
          select: {
            id: true,
            name: true,
            description: true,
            objectName: true,
            parentId: true,
            userId: true,
            createdAt: true,
            updatedAt: true,
            _count: {
              select: {
                files: true,
                children: true,
              },
            },
          },
        },
        recipients: true,
        alias: true,
      },
    });
  }

@@ -75,10 +111,63 @@ export class PrismaShareRepository implements IShareRepository {
      include: {
        security: true,
        files: true,
        folders: {
          select: {
            id: true,
            name: true,
            description: true,
            objectName: true,
            parentId: true,
            userId: true,
            createdAt: true,
            updatedAt: true,
            _count: {
              select: {
                files: true,
                children: true,
              },
            },
          },
        },
      },
    });
  }

  async findShareByAlias(alias: string) {
    const shareAlias = await prisma.shareAlias.findUnique({
      where: { alias },
      include: {
        share: {
          include: {
            security: true,
            files: true,
            folders: {
              select: {
                id: true,
                name: true,
                description: true,
                objectName: true,
                parentId: true,
                userId: true,
                createdAt: true,
                updatedAt: true,
                _count: {
                  select: {
                    files: true,
                    children: true,
                  },
                },
              },
            },
            recipients: true,
          },
        },
      },
    });

    return shareAlias?.share || null;
  }

  async updateShare(id: string, data: Partial<Share>): Promise<Share> {
    return prisma.share.update({
      where: { id },

@@ -121,6 +210,17 @@ export class PrismaShareRepository implements IShareRepository {
    });
  }

  async addFoldersToShare(shareId: string, folderIds: string[]): Promise<void> {
    await prisma.share.update({
      where: { id: shareId },
      data: {
        folders: {
          connect: folderIds.map((id) => ({ id })),
        },
      },
    });
  }

  async removeFilesFromShare(shareId: string, fileIds: string[]): Promise<void> {
    await prisma.share.update({
      where: { id: shareId },

@@ -132,6 +232,17 @@ export class PrismaShareRepository implements IShareRepository {
    });
  }

  async removeFoldersFromShare(shareId: string, folderIds: string[]): Promise<void> {
    await prisma.share.update({
      where: { id: shareId },
      data: {
        folders: {
          disconnect: folderIds.map((id) => ({ id })),
        },
      },
    });
  }

  async findFilesByIds(fileIds: string[]): Promise<any[]> {
    return prisma.file.findMany({
      where: {

@@ -142,6 +253,16 @@ export class PrismaShareRepository implements IShareRepository {
    });
  }

  async findFoldersByIds(folderIds: string[]): Promise<any[]> {
    return prisma.folder.findMany({
      where: {
        id: {
          in: folderIds,
        },
      },
    });
  }

  async addRecipients(shareId: string, emails: string[]): Promise<void> {
    await prisma.share.update({
      where: { id: shareId },

@@ -178,6 +299,24 @@ export class PrismaShareRepository implements IShareRepository {
      include: {
        security: true,
        files: true,
        folders: {
          select: {
            id: true,
            name: true,
            description: true,
            objectName: true,
            parentId: true,
            userId: true,
            createdAt: true,
            updatedAt: true,
            _count: {
              select: {
                files: true,
                children: true,
              },
            },
          },
        },
        recipients: true,
        alias: true,
      },
@@ -6,7 +6,7 @@ import {
  CreateShareSchema,
  ShareAliasResponseSchema,
  ShareResponseSchema,
  UpdateShareFilesSchema,
  UpdateShareItemsSchema,
  UpdateSharePasswordSchema,
  UpdateShareRecipientsSchema,
  UpdateShareSchema,

@@ -32,7 +32,7 @@ export async function shareRoutes(app: FastifyInstance) {
        tags: ["Share"],
        operationId: "createShare",
        summary: "Create a new share",
        description: "Create a new share",
        description: "Create a new share with files and/or folders",
        body: CreateShareSchema,
        response: {
          201: z.object({

@@ -164,17 +164,17 @@ export async function shareRoutes(app: FastifyInstance) {
  );

  app.post(
    "/shares/:shareId/files",
    "/shares/:shareId/items",
    {
      preValidation,
      schema: {
        tags: ["Share"],
        operationId: "addFiles",
        summary: "Add files to share",
        operationId: "addItems",
        summary: "Add files and/or folders to share",
        params: z.object({
          shareId: z.string().describe("The share ID"),
        }),
        body: UpdateShareFilesSchema,
        body: UpdateShareItemsSchema,
        response: {
          200: z.object({
            share: ShareResponseSchema,

@@ -185,21 +185,21 @@ export async function shareRoutes(app: FastifyInstance) {
        },
      },
    },
    shareController.addFiles.bind(shareController)
    shareController.addItems.bind(shareController)
  );

  app.delete(
    "/shares/:shareId/files",
    "/shares/:shareId/items",
    {
      preValidation,
      schema: {
        tags: ["Share"],
        operationId: "removeFiles",
        summary: "Remove files from share",
        operationId: "removeItems",
        summary: "Remove files and/or folders from share",
        params: z.object({
          shareId: z.string().describe("The share ID"),
        }),
        body: UpdateShareFilesSchema,
        body: UpdateShareItemsSchema,
        response: {
          200: z.object({
            share: ShareResponseSchema,

@@ -210,7 +210,7 @@ export async function shareRoutes(app: FastifyInstance) {
        },
      },
    },
    shareController.removeFiles.bind(shareController)
    shareController.removeItems.bind(shareController)
  );

  app.post(

@@ -347,4 +347,32 @@ export async function shareRoutes(app: FastifyInstance) {
    },
    shareController.notifyRecipients.bind(shareController)
  );

  app.get(
    "/shares/alias/:alias/metadata",
    {
      schema: {
        tags: ["Share"],
        operationId: "getShareMetadataByAlias",
        summary: "Get share metadata by alias for Open Graph",
        description: "Get lightweight metadata for a share by alias, used for social media previews",
        params: z.object({
          alias: z.string().describe("The share alias"),
        }),
        response: {
          200: z.object({
            name: z.string().nullable(),
            description: z.string().nullable(),
            totalFiles: z.number(),
            totalFolders: z.number(),
            hasPassword: z.boolean(),
            isExpired: z.boolean(),
            isMaxViewsReached: z.boolean(),
          }),
          404: z.object({ error: z.string() }),
        },
      },
    },
    shareController.getShareMetadataByAlias.bind(shareController)
  );
}
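With the `/files` routes renamed to `/items`, a single request can now attach both files and folders to a share. A sketch of calling the new endpoint (the IDs and cookie-based auth are illustrative assumptions; the routes use `preValidation` with a JWT):

```ts
// Sketch: add one file and one folder to an existing share via the /items route.
async function addItemsToShare(shareId: string): Promise<void> {
  const res = await fetch(`/api/shares/${shareId}/items`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    credentials: "include", // send the auth cookie checked by preValidation
    body: JSON.stringify({ files: ["file-id-1"], folders: ["folder-id-1"] }),
  });
  if (!res.ok) throw new Error(`addItems failed: ${res.status}`);
}
```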
@@ -2,6 +2,8 @@ import bcrypt from "bcryptjs";

import { prisma } from "../../shared/prisma";
import { EmailService } from "../email/service";
import { FolderService } from "../folder/service";
import { UserService } from "../user/service";
import { CreateShareInput, ShareResponseSchema, UpdateShareInput } from "./dto";
import { IShareRepository, PrismaShareRepository } from "./repository";

@@ -9,8 +11,10 @@ export class ShareService {
  constructor(private readonly shareRepository: IShareRepository = new PrismaShareRepository()) {}

  private emailService = new EmailService();
  private userService = new UserService();
  private folderService = new FolderService();

  private formatShareResponse(share: any) {
  private async formatShareResponse(share: any) {
    return {
      ...share,
      createdAt: share.createdAt.toISOString(),

@@ -34,6 +38,20 @@ export class ShareService {
          createdAt: file.createdAt.toISOString(),
          updatedAt: file.updatedAt.toISOString(),
        })) || [],
      folders:
        share.folders && share.folders.length > 0
          ? await Promise.all(
              share.folders.map(async (folder: any) => {
                const totalSize = await this.folderService.calculateFolderSize(folder.id, folder.userId);
                return {
                  ...folder,
                  totalSize: totalSize.toString(),
                  createdAt: folder.createdAt.toISOString(),
                  updatedAt: folder.updatedAt.toISOString(),
                };
              })
            )
          : [],
      recipients:
        share.recipients?.map((recipient: any) => ({
          ...recipient,

@@ -44,7 +62,37 @@ export class ShareService {
  }

  async createShare(data: CreateShareInput, userId: string) {
    const { password, maxViews, ...shareData } = data;
    const { password, maxViews, files, folders, ...shareData } = data;

    if (files && files.length > 0) {
      const existingFiles = await prisma.file.findMany({
        where: {
          id: { in: files },
          userId: userId,
        },
      });
      const notFoundFiles = files.filter((id) => !existingFiles.some((file) => file.id === id));
      if (notFoundFiles.length > 0) {
        throw new Error(`Files not found or access denied: ${notFoundFiles.join(", ")}`);
      }
    }

    if (folders && folders.length > 0) {
      const existingFolders = await prisma.folder.findMany({
        where: {
          id: { in: folders },
          userId: userId,
        },
      });
      const notFoundFolders = folders.filter((id) => !existingFolders.some((folder) => folder.id === id));
      if (notFoundFolders.length > 0) {
        throw new Error(`Folders not found or access denied: ${notFoundFolders.join(", ")}`);
      }
    }

    if ((!files || files.length === 0) && (!folders || folders.length === 0)) {
      throw new Error("At least one file or folder must be selected to create a share");
    }

    const security = await prisma.shareSecurity.create({
      data: {

@@ -55,12 +103,14 @@ export class ShareService {

    const share = await this.shareRepository.createShare({
      ...shareData,
      files,
      folders,
      securityId: security.id,
      creatorId: userId,
    });

    const shareWithRelations = await this.shareRepository.findShareById(share.id);
    return ShareResponseSchema.parse(this.formatShareResponse(shareWithRelations));
    return ShareResponseSchema.parse(await this.formatShareResponse(shareWithRelations));
  }

  async getShare(shareId: string, password?: string, userId?: string) {

@@ -71,7 +121,7 @@ export class ShareService {
    }

    if (userId && share.creatorId === userId) {
      return ShareResponseSchema.parse(this.formatShareResponse(share));
      return ShareResponseSchema.parse(await this.formatShareResponse(share));
    }

    if (share.expiration && new Date() > new Date(share.expiration)) {

@@ -96,7 +146,7 @@ export class ShareService {
    await this.shareRepository.incrementViews(shareId);

    const updatedShare = await this.shareRepository.findShareById(shareId);
    return ShareResponseSchema.parse(this.formatShareResponse(updatedShare));
    return ShareResponseSchema.parse(await this.formatShareResponse(updatedShare));
  }

  async updateShare(shareId: string, data: Omit<UpdateShareInput, "id">, userId: string) {

@@ -134,7 +184,7 @@ export class ShareService {
    });
    const shareWithRelations = await this.shareRepository.findShareById(shareId);

    return this.formatShareResponse(shareWithRelations);
    return await this.formatShareResponse(shareWithRelations);
  }

  async deleteShare(id: string) {

@@ -170,12 +220,12 @@ export class ShareService {
      return deletedShare;
    });

    return ShareResponseSchema.parse(this.formatShareResponse(deleted));
    return ShareResponseSchema.parse(await this.formatShareResponse(deleted));
  }

  async listUserShares(userId: string) {
    const shares = await this.shareRepository.findSharesByUserId(userId);
    return shares.map((share) => this.formatShareResponse(share));
    return await Promise.all(shares.map(async (share) => await this.formatShareResponse(share)));
  }

  async updateSharePassword(shareId: string, userId: string, password: string | null) {

@@ -193,10 +243,10 @@ export class ShareService {
    });

    const updated = await this.shareRepository.findShareById(shareId);
    return ShareResponseSchema.parse(this.formatShareResponse(updated));
    return ShareResponseSchema.parse(await this.formatShareResponse(updated));
  }

  async addFilesToShare(shareId: string, userId: string, fileIds: string[]) {
  async addItemsToShare(shareId: string, userId: string, fileIds: string[], folderIds: string[]) {
    const share = await this.shareRepository.findShareById(shareId);
    if (!share) {
      throw new Error("Share not found");

@@ -206,19 +256,33 @@ export class ShareService {
      throw new Error("Unauthorized to update this share");
    }

    const existingFiles = await this.shareRepository.findFilesByIds(fileIds);
    const notFoundFiles = fileIds.filter((id) => !existingFiles.some((file) => file.id === id));
    if (fileIds.length > 0) {
      const existingFiles = await this.shareRepository.findFilesByIds(fileIds);
      const notFoundFiles = fileIds.filter((id) => !existingFiles.some((file) => file.id === id));

    if (notFoundFiles.length > 0) {
      throw new Error(`Files not found: ${notFoundFiles.join(", ")}`);
      if (notFoundFiles.length > 0) {
        throw new Error(`Files not found: ${notFoundFiles.join(", ")}`);
      }

      await this.shareRepository.addFilesToShare(shareId, fileIds);
    }

    if (folderIds.length > 0) {
      const existingFolders = await this.shareRepository.findFoldersByIds(folderIds);
      const notFoundFolders = folderIds.filter((id) => !existingFolders.some((folder) => folder.id === id));

      if (notFoundFolders.length > 0) {
        throw new Error(`Folders not found: ${notFoundFolders.join(", ")}`);
      }

      await this.shareRepository.addFoldersToShare(shareId, folderIds);
    }

    await this.shareRepository.addFilesToShare(shareId, fileIds);
    const updated = await this.shareRepository.findShareById(shareId);
    return ShareResponseSchema.parse(this.formatShareResponse(updated));
    return ShareResponseSchema.parse(await this.formatShareResponse(updated));
  }

  async removeFilesFromShare(shareId: string, userId: string, fileIds: string[]) {
  async removeItemsFromShare(shareId: string, userId: string, fileIds: string[], folderIds: string[]) {
    const share = await this.shareRepository.findShareById(shareId);
    if (!share) {
      throw new Error("Share not found");

@@ -228,9 +292,16 @@ export class ShareService {
      throw new Error("Unauthorized to update this share");
    }

    await this.shareRepository.removeFilesFromShare(shareId, fileIds);
    if (fileIds.length > 0) {
      await this.shareRepository.removeFilesFromShare(shareId, fileIds);
    }

    if (folderIds.length > 0) {
      await this.shareRepository.removeFoldersFromShare(shareId, folderIds);
    }

    const updated = await this.shareRepository.findShareById(shareId);
    return ShareResponseSchema.parse(this.formatShareResponse(updated));
    return ShareResponseSchema.parse(await this.formatShareResponse(updated));
  }

  async findShareById(id: string) {

@@ -253,7 +324,7 @@ export class ShareService {

    await this.shareRepository.addRecipients(shareId, emails);
    const updated = await this.shareRepository.findShareById(shareId);
    return ShareResponseSchema.parse(this.formatShareResponse(updated));
    return ShareResponseSchema.parse(await this.formatShareResponse(updated));
  }

  async removeRecipients(shareId: string, userId: string, emails: string[]) {

@@ -268,7 +339,7 @@ export class ShareService {

    await this.shareRepository.removeRecipients(shareId, emails);
    const updated = await this.shareRepository.findShareById(shareId);
    return ShareResponseSchema.parse(this.formatShareResponse(updated));
    return ShareResponseSchema.parse(await this.formatShareResponse(updated));
  }

  async createOrUpdateAlias(shareId: string, alias: string, userId: string) {

@@ -339,11 +410,25 @@ export class ShareService {
      throw new Error("No recipients found for this share");
    }

    let senderName = "Someone";
    try {
      const sender = await this.userService.getUserById(userId);
      if (sender.firstName && sender.lastName) {
        senderName = `${sender.firstName} ${sender.lastName}`;
      } else if (sender.firstName) {
        senderName = sender.firstName;
      } else if (sender.username) {
        senderName = sender.username;
      }
    } catch (error) {
      console.error(`Failed to get sender information for user ${userId}:`, error);
    }

    const notifiedRecipients: string[] = [];

    for (const recipient of share.recipients) {
      try {
        await this.emailService.sendShareNotification(recipient.email, shareLink, share.name || undefined);
        await this.emailService.sendShareNotification(recipient.email, shareLink, share.name || undefined, senderName);
        notifiedRecipients.push(recipient.email);
      } catch (error) {
        console.error(`Failed to send email to ${recipient.email}:`, error);

@@ -355,4 +440,31 @@ export class ShareService {
      notifiedRecipients,
    };
  }

  async getShareMetadataByAlias(alias: string) {
    const share = await this.shareRepository.findShareByAlias(alias);
    if (!share) {
      throw new Error("Share not found");
    }

    // Check if share is expired
    const isExpired = share.expiration ? new Date(share.expiration) < new Date() : false;

    // Check if max views reached
    const isMaxViewsReached = share.security.maxViews !== null ? share.views >= share.security.maxViews : false;

    const totalFiles = share.files?.length || 0;
    const totalFolders = share.folders?.length || 0;
    const hasPassword = !!share.security.password;

    return {
      name: share.name,
      description: share.description,
      totalFiles,
      totalFolders,
      hasPassword,
      isExpired,
      isMaxViewsReached,
    };
  }
}
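The availability checks in `getShareMetadataByAlias` are plain date and counter comparisons, which makes them easy to factor out and unit-test. A standalone sketch with simplified types (the interface below is illustrative, not the project's Prisma types):

```ts
// Standalone sketch of the two availability checks used above.
interface ShareLike {
  expiration: Date | null;
  views: number;
  security: { maxViews: number | null; password: string | null };
}

function shareAvailability(share: ShareLike) {
  const isExpired = share.expiration ? share.expiration < new Date() : false;
  const isMaxViewsReached =
    share.security.maxViews !== null ? share.views >= share.security.maxViews : false;
  return { isExpired, isMaxViewsReached, hasPassword: !!share.security.password };
}
```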
@@ -284,7 +284,7 @@ export class StorageService {
  private async _getDiskSpaceMultiplePaths(): Promise<{ total: number; available: number } | null> {
    const basePaths = IS_RUNNING_IN_CONTAINER
      ? ["/app/server/uploads", "/app/server/temp-uploads", "/app/server/temp-chunks", "/app/server", "/app", "/"]
      : [".", "./uploads", process.cwd()];
      : [env.CUSTOM_PATH || ".", "./uploads", process.cwd()];

    const synologyPaths = await this._detectSynologyVolumes();

@@ -324,89 +324,45 @@ export class StorageService {
    uploadAllowed: boolean;
  }> {
    try {
      const isDemoMode = env.DEMO_MODE === "true";

      if (isAdmin) {
        if (isDemoMode) {
          const demoMaxStorage = 200 * 1024 * 1024;
          const demoMaxStorageGB = this._ensureNumber(demoMaxStorage / (1024 * 1024 * 1024), 0);
        const diskInfo = await this._getDiskSpaceMultiplePaths();

          const userFiles = await prisma.file.findMany({
            where: { userId },
            select: { size: true },
          });

          const totalUsedStorage = userFiles.reduce((acc, file) => acc + file.size, BigInt(0));
          const usedStorageGB = this._ensureNumber(Number(totalUsedStorage) / (1024 * 1024 * 1024), 0);
          const availableStorageGB = this._ensureNumber(demoMaxStorageGB - usedStorageGB, 0);

          return {
            diskSizeGB: Number(demoMaxStorageGB.toFixed(2)),
            diskUsedGB: Number(usedStorageGB.toFixed(2)),
            diskAvailableGB: Number(availableStorageGB.toFixed(2)),
            uploadAllowed: availableStorageGB > 0,
          };
        } else {
          const diskInfo = await this._getDiskSpaceMultiplePaths();

          if (!diskInfo) {
            throw new Error("Unable to determine actual disk space - system configuration issue");
          }

          const { total, available } = diskInfo;
          const used = total - available;

          const diskSizeGB = this._ensureNumber(total / (1024 * 1024 * 1024), 0);
          const diskUsedGB = this._ensureNumber(used / (1024 * 1024 * 1024), 0);
          const diskAvailableGB = this._ensureNumber(available / (1024 * 1024 * 1024), 0);

          return {
            diskSizeGB: Number(diskSizeGB.toFixed(2)),
            diskUsedGB: Number(diskUsedGB.toFixed(2)),
            diskAvailableGB: Number(diskAvailableGB.toFixed(2)),
            uploadAllowed: diskAvailableGB > 0.1,
          };
        if (!diskInfo) {
          throw new Error("Unable to determine actual disk space - system configuration issue");
        }

        const { total, available } = diskInfo;
        const used = total - available;

        const diskSizeGB = this._ensureNumber(total / (1024 * 1024 * 1024), 0);
        const diskUsedGB = this._ensureNumber(used / (1024 * 1024 * 1024), 0);
        const diskAvailableGB = this._ensureNumber(available / (1024 * 1024 * 1024), 0);

        return {
          diskSizeGB: Number(diskSizeGB.toFixed(2)),
          diskUsedGB: Number(diskUsedGB.toFixed(2)),
          diskAvailableGB: Number(diskAvailableGB.toFixed(2)),
          uploadAllowed: diskAvailableGB > 0.1,
        };
      } else if (userId) {
        if (isDemoMode) {
          const demoMaxStorage = 200 * 1024 * 1024;
          const demoMaxStorageGB = this._ensureNumber(demoMaxStorage / (1024 * 1024 * 1024), 0);
        const maxTotalStorage = BigInt(await this.configService.getValue("maxTotalStoragePerUser"));
        const maxStorageGB = this._ensureNumber(Number(maxTotalStorage) / (1024 * 1024 * 1024), 10);

          const userFiles = await prisma.file.findMany({
            where: { userId },
            select: { size: true },
          });
        const userFiles = await prisma.file.findMany({
          where: { userId },
          select: { size: true },
        });

          const totalUsedStorage = userFiles.reduce((acc, file) => acc + file.size, BigInt(0));
          const usedStorageGB = this._ensureNumber(Number(totalUsedStorage) / (1024 * 1024 * 1024), 0);
          const availableStorageGB = this._ensureNumber(demoMaxStorageGB - usedStorageGB, 0);
        const totalUsedStorage = userFiles.reduce((acc, file) => acc + file.size, BigInt(0));
        const usedStorageGB = this._ensureNumber(Number(totalUsedStorage) / (1024 * 1024 * 1024), 0);
        const availableStorageGB = this._ensureNumber(maxStorageGB - usedStorageGB, 0);

          return {
            diskSizeGB: Number(demoMaxStorageGB.toFixed(2)),
            diskUsedGB: Number(usedStorageGB.toFixed(2)),
            diskAvailableGB: Number(availableStorageGB.toFixed(2)),
            uploadAllowed: availableStorageGB > 0,
          };
        } else {
          const maxTotalStorage = BigInt(await this.configService.getValue("maxTotalStoragePerUser"));
          const maxStorageGB = this._ensureNumber(Number(maxTotalStorage) / (1024 * 1024 * 1024), 10);

          const userFiles = await prisma.file.findMany({
            where: { userId },
            select: { size: true },
          });

          const totalUsedStorage = userFiles.reduce((acc, file) => acc + file.size, BigInt(0));
          const usedStorageGB = this._ensureNumber(Number(totalUsedStorage) / (1024 * 1024 * 1024), 0);
          const availableStorageGB = this._ensureNumber(maxStorageGB - usedStorageGB, 0);

          return {
            diskSizeGB: Number(maxStorageGB.toFixed(2)),
            diskUsedGB: Number(usedStorageGB.toFixed(2)),
            diskAvailableGB: Number(availableStorageGB.toFixed(2)),
            uploadAllowed: availableStorageGB > 0,
          };
        }
        return {
          diskSizeGB: Number(maxStorageGB.toFixed(2)),
          diskUsedGB: Number(usedStorageGB.toFixed(2)),
          diskAvailableGB: Number(availableStorageGB.toFixed(2)),
          uploadAllowed: availableStorageGB > 0,
        };
      }

      throw new Error("User ID is required for non-admin users");
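Every branch above converts byte counts to GB the same way: divide by 1024³, then round to two decimals for the API response. A worked example (values are illustrative):

```ts
// The byte-to-GB conversion used throughout the storage service.
const bytes = 5 * 1024 * 1024 * 1024; // 5 GiB of usage
const gb = bytes / (1024 * 1024 * 1024); // 5
console.log(Number(gb.toFixed(2))); // 5

// The 200 MB demo-mode cap works out to roughly 0.2 GB after rounding:
console.log(Number(((200 * 1024 * 1024) / (1024 * 1024 * 1024)).toFixed(2))); // 0.2
```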
@@ -1,385 +0,0 @@
|
||||
import * as crypto from "crypto";
|
||||
import * as fsSync from "fs";
|
||||
import * as fs from "fs/promises";
|
||||
import * as path from "path";
|
||||
import { Transform } from "stream";
|
||||
import { pipeline } from "stream/promises";
|
||||
|
||||
import { directoriesConfig, getTempFilePath } from "../config/directories.config";
|
||||
import { env } from "../env";
|
||||
import { StorageProvider } from "../types/storage";
|
||||
|
||||
export class FilesystemStorageProvider implements StorageProvider {
|
||||
private static instance: FilesystemStorageProvider;
|
||||
private uploadsDir: string;
|
||||
private encryptionKey = env.ENCRYPTION_KEY;
|
||||
private isEncryptionDisabled = env.DISABLE_FILESYSTEM_ENCRYPTION === "true";
|
||||
private uploadTokens = new Map<string, { objectName: string; expiresAt: number }>();
|
||||
private downloadTokens = new Map<string, { objectName: string; expiresAt: number; fileName?: string }>();
|
||||
|
||||
private constructor() {
|
||||
this.uploadsDir = directoriesConfig.uploads;
|
||||
|
||||
this.ensureUploadsDir();
|
||||
setInterval(() => this.cleanExpiredTokens(), 5 * 60 * 1000);
|
||||
setInterval(() => this.cleanupEmptyTempDirs(), 10 * 60 * 1000); // Every 10 minutes
|
||||
}
|
||||
|
||||
public static getInstance(): FilesystemStorageProvider {
|
||||
if (!FilesystemStorageProvider.instance) {
|
||||
FilesystemStorageProvider.instance = new FilesystemStorageProvider();
|
||||
}
|
||||
return FilesystemStorageProvider.instance;
|
||||
}
|
||||
|
||||
private async ensureUploadsDir(): Promise<void> {
|
||||
try {
|
||||
await fs.access(this.uploadsDir);
|
||||
} catch {
|
||||
await fs.mkdir(this.uploadsDir, { recursive: true });
|
||||
}
|
||||
}
|
||||
|
||||
private cleanExpiredTokens(): void {
|
||||
const now = Date.now();
|
||||
|
||||
for (const [token, data] of this.uploadTokens.entries()) {
|
||||
if (now > data.expiresAt) {
|
||||
this.uploadTokens.delete(token);
|
||||
}
|
||||
}
|
||||
|
||||
for (const [token, data] of this.downloadTokens.entries()) {
|
||||
if (now > data.expiresAt) {
|
||||
this.downloadTokens.delete(token);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
public getFilePath(objectName: string): string {
|
||||
const sanitizedName = objectName.replace(/[^a-zA-Z0-9\-_./]/g, "_");
|
||||
return path.join(this.uploadsDir, sanitizedName);
|
||||
}
|
||||
|
||||
private createEncryptionKey(): Buffer {
|
||||
return crypto.scryptSync(this.encryptionKey, "salt", 32);
|
||||
}
|
||||
|
||||
public createEncryptStream(): Transform {
|
||||
if (this.isEncryptionDisabled) {
|
||||
return new Transform({
|
||||
transform(chunk, encoding, callback) {
|
||||
this.push(chunk);
|
||||
callback();
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
const key = this.createEncryptionKey();
|
||||
const iv = crypto.randomBytes(16);
|
||||
const cipher = crypto.createCipheriv("aes-256-cbc", key, iv);
|
||||
|
||||
let isFirstChunk = true;
|
||||
|
||||
return new Transform({
|
||||
transform(chunk, encoding, callback) {
|
||||
try {
|
||||
if (isFirstChunk) {
|
||||
this.push(iv);
|
||||
isFirstChunk = false;
|
||||
}
|
||||
|
||||
const encrypted = cipher.update(chunk);
|
||||
this.push(encrypted);
|
||||
callback();
|
||||
} catch (error) {
|
||||
callback(error as Error);
|
||||
}
|
||||
},
|
||||
|
||||
flush(callback) {
|
||||
try {
|
||||
const final = cipher.final();
|
||||
this.push(final);
|
||||
callback();
|
||||
} catch (error) {
|
||||
callback(error as Error);
|
||||
}
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
public createDecryptStream(): Transform {
|
||||
if (this.isEncryptionDisabled) {
|
||||
return new Transform({
|
||||
transform(chunk, encoding, callback) {
|
||||
this.push(chunk);
|
||||
callback();
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
const key = this.createEncryptionKey();
|
||||
let iv: Buffer | null = null;
|
||||
let decipher: crypto.Decipher | null = null;
|
||||
let ivBuffer = Buffer.alloc(0);
|
||||
|
||||
return new Transform({
|
||||
transform(chunk, encoding, callback) {
|
||||
try {
|
||||
if (!iv) {
|
||||
ivBuffer = Buffer.concat([ivBuffer, chunk]);
|
||||
|
||||
if (ivBuffer.length >= 16) {
|
||||
iv = ivBuffer.slice(0, 16);
|
||||
decipher = crypto.createDecipheriv("aes-256-cbc", key, iv);
|
||||
const remainingData = ivBuffer.slice(16);
|
||||
if (remainingData.length > 0) {
|
||||
const decrypted = decipher.update(remainingData);
|
||||
this.push(decrypted);
|
||||
}
|
||||
}
|
||||
callback();
|
||||
return;
|
||||
}
|
||||
|
||||
if (decipher) {
|
||||
const decrypted = decipher.update(chunk);
|
||||
this.push(decrypted);
|
||||
}
|
||||
callback();
|
||||
} catch (error) {
|
||||
callback(error as Error);
|
||||
}
|
||||
},
|
||||
|
||||
flush(callback) {
|
||||
try {
|
||||
if (decipher) {
|
||||
const final = decipher.final();
|
||||
this.push(final);
|
||||
}
|
||||
callback();
|
||||
} catch (error) {
|
||||
callback(error as Error);
|
||||
}
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
async getPresignedPutUrl(objectName: string, expires: number): Promise<string> {
|
||||
const token = crypto.randomBytes(32).toString("hex");
|
||||
const expiresAt = Date.now() + expires * 1000;
|
||||
|
||||
this.uploadTokens.set(token, { objectName, expiresAt });
|
||||
|
||||
return `/api/filesystem/upload/${token}`;
|
||||
}
|
||||
|
||||
async getPresignedGetUrl(objectName: string, expires: number, fileName?: string): Promise<string> {
|
||||
const token = crypto.randomBytes(32).toString("hex");
|
||||
const expiresAt = Date.now() + expires * 1000;
|
||||
|
||||
this.downloadTokens.set(token, { objectName, expiresAt, fileName });
|
||||
|
||||
return `/api/filesystem/download/${token}`;
|
||||
}
|
||||
|
||||
async deleteObject(objectName: string): Promise<void> {
|
||||
const filePath = this.getFilePath(objectName);
|
||||
try {
|
||||
await fs.unlink(filePath);
|
||||
} catch (error: any) {
|
||||
if (error.code !== "ENOENT") {
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
async uploadFile(objectName: string, buffer: Buffer): Promise<void> {
|
||||
const filePath = this.getFilePath(objectName);
|
||||
const dir = path.dirname(filePath);
|
||||
|
||||
await fs.mkdir(dir, { recursive: true });
|
||||
|
||||
const { Readable } = await import("stream");
|
||||
const readable = Readable.from(buffer);
|
||||
|
||||
await this.uploadFileFromStream(objectName, readable);
|
||||
}
|
||||
|
||||
async uploadFileFromStream(objectName: string, inputStream: NodeJS.ReadableStream): Promise<void> {
|
||||
const filePath = this.getFilePath(objectName);
|
||||
const dir = path.dirname(filePath);
|
||||
|
||||
await fs.mkdir(dir, { recursive: true });
|
||||
|
||||
const tempPath = getTempFilePath(objectName);
|
||||
const tempDir = path.dirname(tempPath);
|
||||
|
||||
await fs.mkdir(tempDir, { recursive: true });
|
||||
|
||||
const writeStream = fsSync.createWriteStream(tempPath);
|
||||
const encryptStream = this.createEncryptStream();
|
||||
|
||||
try {
|
||||
await pipeline(inputStream, encryptStream, writeStream);
|
||||
await fs.rename(tempPath, filePath);
|
||||
} catch (error) {
|
||||
await this.cleanupTempFile(tempPath);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
async downloadFile(objectName: string): Promise<Buffer> {
|
||||
const filePath = this.getFilePath(objectName);
|
||||
const fileBuffer = await fs.readFile(filePath);
|
||||
|
||||
if (this.isEncryptionDisabled) {
|
||||
return fileBuffer;
|
||||
}
|
||||
|
||||
if (fileBuffer.length > 16) {
|
||||
try {
|
||||
return this.decryptFileBuffer(fileBuffer);
|
||||
} catch (error: unknown) {
|
||||
if (error instanceof Error) {
|
||||
console.warn("Failed to decrypt with new method, trying legacy format", error.message);
|
||||
}
|
||||
return this.decryptFileLegacy(fileBuffer);
|
||||
}
|
||||
}
|
||||
|
||||
return this.decryptFileLegacy(fileBuffer);
|
||||
}
|
||||
|
||||
private decryptFileBuffer(encryptedBuffer: Buffer): Buffer {
|
||||
const key = this.createEncryptionKey();
|
||||
const iv = encryptedBuffer.slice(0, 16);
|
||||
const encrypted = encryptedBuffer.slice(16);
|
||||
|
||||
const decipher = crypto.createDecipheriv("aes-256-cbc", key, iv);
|
||||
|
||||
return Buffer.concat([decipher.update(encrypted), decipher.final()]);
|
||||
}
|
||||
|
||||
private decryptFileLegacy(encryptedBuffer: Buffer): Buffer {
|
||||
const CryptoJS = require("crypto-js");
|
||||
const decrypted = CryptoJS.AES.decrypt(encryptedBuffer.toString("utf8"), this.encryptionKey);
|
||||
return Buffer.from(decrypted.toString(CryptoJS.enc.Utf8), "base64");
|
||||
}
|
||||
|
||||
async fileExists(objectName: string): Promise<boolean> {
|
||||
const filePath = this.getFilePath(objectName);
|
||||
try {
|
||||
await fs.access(filePath);
|
||||
return true;
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
validateUploadToken(token: string): { objectName: string } | null {
|
||||
const data = this.uploadTokens.get(token);
|
||||
if (!data || Date.now() > data.expiresAt) {
|
||||
this.uploadTokens.delete(token);
|
||||
return null;
|
||||
}
|
||||
return { objectName: data.objectName };
|
||||
}
|
||||
|
||||
validateDownloadToken(token: string): { objectName: string; fileName?: string } | null {
|
||||
const data = this.downloadTokens.get(token);
|
||||
|
||||
if (!data) {
|
||||
return null;
|
||||
}
|
||||
|
||||
const now = Date.now();
|
||||
|
||||
if (now > data.expiresAt) {
|
||||
this.downloadTokens.delete(token);
|
||||
return null;
|
||||
}
|
||||
|
||||
return { objectName: data.objectName, fileName: data.fileName };
|
||||
}
|
||||
|
||||
consumeUploadToken(token: string): void {
|
||||
this.uploadTokens.delete(token);
|
||||
}
|
||||
|
||||
consumeDownloadToken(token: string): void {
|
||||
this.downloadTokens.delete(token);
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up temporary file and its parent directory if empty
|
||||
*/
|
||||
private async cleanupTempFile(tempPath: string): Promise<void> {
|
||||
try {
|
||||
await fs.unlink(tempPath);
|
||||
|
||||
const tempDir = path.dirname(tempPath);
|
||||
try {
|
||||
const files = await fs.readdir(tempDir);
|
||||
if (files.length === 0) {
|
||||
await fs.rmdir(tempDir);
|
||||
}
|
||||
} catch (dirError: any) {
|
||||
if (dirError.code !== "ENOTEMPTY" && dirError.code !== "ENOENT") {
|
||||
console.warn("Warning: Could not remove temp directory:", dirError.message);
|
||||
}
|
||||
}
|
||||
} catch (cleanupError: any) {
|
||||
if (cleanupError.code !== "ENOENT") {
|
||||
console.error("Error deleting temp file:", cleanupError);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up empty temporary directories periodically
|
||||
*/
|
||||
private async cleanupEmptyTempDirs(): Promise<void> {
|
||||
try {
|
||||
const tempUploadsDir = directoriesConfig.tempUploads;
|
||||
|
||||
try {
|
||||
await fs.access(tempUploadsDir);
|
||||
} catch {
|
||||
return;
|
||||
}
|
||||
|
||||
const items = await fs.readdir(tempUploadsDir);
|
||||
|
||||
for (const item of items) {
|
||||
const itemPath = path.join(tempUploadsDir, item);
|
||||
|
||||
try {
|
||||
const stat = await fs.stat(itemPath);
|
||||
|
||||
if (stat.isDirectory()) {
|
||||
const dirContents = await fs.readdir(itemPath);
|
||||
if (dirContents.length === 0) {
|
||||
await fs.rmdir(itemPath);
|
||||
console.log(`🧹 Cleaned up empty temp directory: ${itemPath}`);
|
||||
}
|
||||
} else if (stat.isFile()) {
|
||||
const oneHourAgo = Date.now() - 60 * 60 * 1000;
|
||||
if (stat.mtime.getTime() < oneHourAgo) {
|
||||
await fs.unlink(itemPath);
|
||||
console.log(`🧹 Cleaned up stale temp file: ${itemPath}`);
|
||||
}
|
||||
}
|
||||
} catch (error: any) {
|
||||
if (error.code !== "ENOENT") {
|
||||
console.warn(`Warning: Could not process temp item ${itemPath}:`, error.message);
|
||||
}
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
console.error("Error during temp directory cleanup:", error);
|
||||
}
|
||||
}
|
||||
}
|
||||
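The removed provider stored each file as AES-256-CBC ciphertext with a scrypt-derived key and a random 16-byte IV prepended to the file, which is why the decrypt path slices off the first 16 bytes. A self-contained round-trip sketch of that scheme, buffer-based rather than streaming (the key string is illustrative):

```ts
import * as crypto from "crypto";

// Round-trip sketch of the IV-prefix scheme used above: scrypt-derived key,
// random 16-byte IV written as the first bytes of the stored file.
const key = crypto.scryptSync("example-encryption-key", "salt", 32);

function encryptBuffer(plain: Buffer): Buffer {
  const iv = crypto.randomBytes(16);
  const cipher = crypto.createCipheriv("aes-256-cbc", key, iv);
  return Buffer.concat([iv, cipher.update(plain), cipher.final()]);
}

function decryptBuffer(stored: Buffer): Buffer {
  const iv = stored.subarray(0, 16); // IV travels with the ciphertext
  const decipher = crypto.createDecipheriv("aes-256-cbc", key, iv);
  return Buffer.concat([decipher.update(stored.subarray(16)), decipher.final()]);
}

const original = Buffer.from("hello palmr");
console.log(decryptBuffer(encryptBuffer(original)).equals(original)); // true
```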
@@ -1,16 +1,39 @@
|
||||
import { DeleteObjectCommand, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
|
||||
import {
|
||||
AbortMultipartUploadCommand,
|
||||
CompleteMultipartUploadCommand,
|
||||
CreateMultipartUploadCommand,
|
||||
DeleteObjectCommand,
|
||||
GetObjectCommand,
|
||||
HeadObjectCommand,
|
||||
PutObjectCommand,
|
||||
UploadPartCommand,
|
||||
} from "@aws-sdk/client-s3";
|
||||
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
|
||||
|
||||
import { bucketName, s3Client } from "../config/storage.config";
|
||||
import { bucketName, createPublicS3Client, s3Client } from "../config/storage.config";
|
||||
import { StorageProvider } from "../types/storage";
|
||||
import { getContentType } from "../utils/mime-types";
|
||||
|
||||
export class S3StorageProvider implements StorageProvider {
|
||||
constructor() {
|
||||
private ensureClient() {
|
||||
if (!s3Client) {
|
||||
throw new Error(
|
||||
"S3 client is not configured. Make sure ENABLE_S3=true and all S3 environment variables are set."
|
||||
);
|
||||
throw new Error("S3 client is not configured. Storage is initializing, please wait...");
|
||||
}
|
||||
return s3Client;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a character is valid in an HTTP token (RFC 2616)
|
||||
* Tokens can contain: alphanumeric and !#$%&'*+-.^_`|~
|
||||
* Must exclude separators: ()<>@,;:\"/[]?={} and space/tab
|
||||
*/
|
||||
private isTokenChar(char: string): boolean {
|
||||
const code = char.charCodeAt(0);
|
||||
// Basic ASCII range check
|
||||
if (code < 33 || code > 126) return false;
|
||||
// Exclude separator characters per RFC 2616
|
||||
const separators = '()<>@,;:\\"/[]?={} \t';
|
||||
return !separators.includes(char);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -40,12 +63,10 @@ export class S3StorageProvider implements StorageProvider {
|
      return 'attachment; filename="download"';
    }

    // Create ASCII-safe version with only valid token characters
    const asciiSafe = sanitized
      .split("")
      .filter((char) => {
        const code = char.charCodeAt(0);
        return code >= 32 && code <= 126;
      })
      .filter((char) => this.isTokenChar(char))
      .join("");

    if (asciiSafe && asciiSafe.trim()) {
@@ -58,8 +79,10 @@ export class S3StorageProvider implements StorageProvider {
  }

  async getPresignedPutUrl(objectName: string, expires: number): Promise<string> {
-    if (!s3Client) {
-      throw new Error("S3 client is not available");
+    // Always use public S3 client for presigned URLs (uses SERVER_IP)
+    const client = createPublicS3Client();
+    if (!client) {
+      throw new Error("S3 client could not be created");
    }

    const command = new PutObjectCommand({
@@ -67,12 +90,14 @@ export class S3StorageProvider implements StorageProvider {
      Key: objectName,
    });

-    return await getSignedUrl(s3Client, command, { expiresIn: expires });
+    return await getSignedUrl(client, command, { expiresIn: expires });
  }

  async getPresignedGetUrl(objectName: string, expires: number, fileName?: string): Promise<string> {
-    if (!s3Client) {
-      throw new Error("S3 client is not available");
+    // Always use public S3 client for presigned URLs (uses SERVER_IP)
+    const client = createPublicS3Client();
+    if (!client) {
+      throw new Error("S3 client could not be created");
    }

    let rcdFileName: string;
@@ -91,21 +116,150 @@ export class S3StorageProvider implements StorageProvider {
      Bucket: bucketName,
      Key: objectName,
      ResponseContentDisposition: this.encodeFilenameForHeader(rcdFileName),
      ResponseContentType: getContentType(rcdFileName),
    });

-    return await getSignedUrl(s3Client, command, { expiresIn: expires });
+    return await getSignedUrl(client, command, { expiresIn: expires });
  }

  async deleteObject(objectName: string): Promise<void> {
-    if (!s3Client) {
-      throw new Error("S3 client is not available");
-    }
+    const client = this.ensureClient();

    const command = new DeleteObjectCommand({
      Bucket: bucketName,
      Key: objectName,
    });

-    await s3Client.send(command);
+    await client.send(command);
  }

+  async fileExists(objectName: string): Promise<boolean> {
+    const client = this.ensureClient();
+
+    try {
+      const command = new HeadObjectCommand({
+        Bucket: bucketName,
+        Key: objectName,
+      });
+
+      await client.send(command);
+      return true;
+    } catch (error: any) {
+      if (error.name === "NotFound" || error.$metadata?.httpStatusCode === 404) {
+        return false;
+      }
+      throw error;
+    }
+  }
+
+  /**
+   * Get a readable stream for downloading an object
+   * Used for proxying downloads through the backend
+   */
+  async getObjectStream(objectName: string): Promise<NodeJS.ReadableStream> {
+    const client = this.ensureClient();
+
+    const command = new GetObjectCommand({
+      Bucket: bucketName,
+      Key: objectName,
+    });
+
+    const response = await client.send(command);
+
+    if (!response.Body) {
+      throw new Error("No body in S3 response");
+    }
+
+    // AWS SDK v3 returns a readable stream
+    return response.Body as NodeJS.ReadableStream;
+  }
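
  // --- Annotation (not part of this diff): a minimal sketch of how a Fastify
  // route might consume getObjectStream() to proxy a download through the
  // backend rather than redirecting the client to a presigned URL. The route
  // path and handler wiring are illustrative assumptions, not code from this
  // commit.
  //
  //   app.get("/files/:objectName/download", async (request, reply) => {
  //     const storage = new S3StorageProvider();
  //     const stream = await storage.getObjectStream(request.params.objectName);
  //     reply.header("Content-Disposition", 'attachment; filename="download"');
  //     return reply.send(stream); // Fastify accepts Node.js readable streams
  //   });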

+  /**
+   * Initialize a multipart upload
+   * Returns uploadId for subsequent part uploads
+   */
+  async createMultipartUpload(objectName: string): Promise<string> {
+    const client = createPublicS3Client();
+    if (!client) {
+      throw new Error("S3 client could not be created");
+    }
+
+    const command = new CreateMultipartUploadCommand({
+      Bucket: bucketName,
+      Key: objectName,
+    });
+
+    const response = await client.send(command);
+
+    if (!response.UploadId) {
+      throw new Error("Failed to create multipart upload - no UploadId returned");
+    }
+
+    return response.UploadId;
+  }
+
+  /**
+   * Get presigned URL for uploading a specific part
+   */
+  async getPresignedPartUrl(
+    objectName: string,
+    uploadId: string,
+    partNumber: number,
+    expires: number
+  ): Promise<string> {
+    const client = createPublicS3Client();
+    if (!client) {
+      throw new Error("S3 client could not be created");
+    }
+
+    const command = new UploadPartCommand({
+      Bucket: bucketName,
+      Key: objectName,
+      UploadId: uploadId,
+      PartNumber: partNumber,
+    });
+
+    const url = await getSignedUrl(client, command, { expiresIn: expires });
+    return url;
+  }
+
+  /**
+   * Complete a multipart upload
+   */
+  async completeMultipartUpload(
+    objectName: string,
+    uploadId: string,
+    parts: Array<{ PartNumber: number; ETag: string }>
+  ): Promise<void> {
+    const client = this.ensureClient();
+
+    const command = new CompleteMultipartUploadCommand({
+      Bucket: bucketName,
+      Key: objectName,
+      UploadId: uploadId,
+      MultipartUpload: {
+        Parts: parts.map((part) => ({
+          PartNumber: part.PartNumber,
+          ETag: part.ETag,
+        })),
+      },
+    });
+
+    await client.send(command);
+  }
+
+  /**
+   * Abort a multipart upload
+   */
+  async abortMultipartUpload(objectName: string, uploadId: string): Promise<void> {
+    const client = this.ensureClient();
+
+    const command = new AbortMultipartUploadCommand({
+      Bucket: bucketName,
+      Key: objectName,
+      UploadId: uploadId,
+    });
+
+    await client.send(command);
+  }
}
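
Taken together, the four new multipart methods implement the standard S3 multipart-upload sequence: create the upload, presign each part, then complete (or abort on failure). Below is a minimal driver sketch of that call order, assuming Node 18+ `fetch`, in-memory part buffers, and an S3 endpoint that exposes the `ETag` response header; in practice the presigned part URLs would typically be consumed by the browser rather than by the server itself:

```ts
import { S3StorageProvider } from "./s3-storage.provider";

// Hypothetical driver showing the expected call order across the new methods.
async function uploadInParts(storage: S3StorageProvider, objectName: string, parts: Buffer[]): Promise<void> {
  const uploadId = await storage.createMultipartUpload(objectName);

  try {
    const completed: Array<{ PartNumber: number; ETag: string }> = [];

    for (let i = 0; i < parts.length; i++) {
      const partNumber = i + 1; // S3 part numbers are 1-based
      const url = await storage.getPresignedPartUrl(objectName, uploadId, partNumber, 3600);

      const response = await fetch(url, { method: "PUT", body: parts[i] });
      const etag = response.headers.get("etag");
      if (!response.ok || !etag) {
        throw new Error(`Uploading part ${partNumber} failed`);
      }

      completed.push({ PartNumber: partNumber, ETag: etag });
    }

    await storage.completeMultipartUpload(objectName, uploadId, completed);
  } catch (error) {
    // Abort so the bucket does not accumulate orphaned, billable part data
    await storage.abortMultipartUpload(objectName, uploadId);
    throw error;
  }
}
```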

96  apps/server/src/scripts/cleanup-orphan-files.ts  Normal file
@@ -0,0 +1,96 @@
import { S3StorageProvider } from "../providers/s3-storage.provider";
import { prisma } from "../shared/prisma";
import { StorageProvider } from "../types/storage";

/**
 * Script to clean up orphan file records in the database
 * (files that are registered in DB but don't exist in storage)
 */
async function cleanupOrphanFiles() {
  console.log("Starting orphan file cleanup...");
  console.log(`Storage mode: S3 (Garage or External)`);

  // Always use S3 storage provider
  const storageProvider: StorageProvider = new S3StorageProvider();

  // Get all files from database
  const allFiles = await prisma.file.findMany({
    select: {
      id: true,
      name: true,
      objectName: true,
      userId: true,
      size: true,
    },
  });

  console.log(`Found ${allFiles.length} files in database`);

  const orphanFiles: typeof allFiles = [];
  const existingFiles: typeof allFiles = [];

  // Check each file
  for (const file of allFiles) {
    const exists = await storageProvider.fileExists(file.objectName);
    if (!exists) {
      orphanFiles.push(file);
      console.log(`❌ Orphan: ${file.name} (${file.objectName})`);
    } else {
      existingFiles.push(file);
    }
  }

  console.log(`\n📊 Summary:`);
  console.log(`   Total files in DB: ${allFiles.length}`);
  console.log(`   ✅ Files with storage: ${existingFiles.length}`);
  console.log(`   ❌ Orphan files: ${orphanFiles.length}`);

  if (orphanFiles.length === 0) {
    console.log("\n✨ No orphan files found!");
    return;
  }

  console.log(`\n🗑️  Orphan files to be deleted:`);
  orphanFiles.forEach((file) => {
    const sizeMB = Number(file.size) / (1024 * 1024);
    console.log(`   - ${file.name} (${sizeMB.toFixed(2)} MB) - ${file.objectName}`);
  });

  // Ask for confirmation (if running interactively)
  const shouldDelete = process.argv.includes("--confirm");

  if (!shouldDelete) {
    console.log(`\n⚠️  Dry run mode. To actually delete orphan records, run with --confirm flag:`);
    console.log(`   node dist/scripts/cleanup-orphan-files.js --confirm`);
    return;
  }

  console.log(`\n🗑️  Deleting orphan file records...`);

  let deletedCount = 0;
  for (const file of orphanFiles) {
    try {
      await prisma.file.delete({
        where: { id: file.id },
      });
      deletedCount++;
      console.log(`   ✓ Deleted: ${file.name}`);
    } catch (error) {
      console.error(`   ✗ Failed to delete ${file.name}:`, error);
    }
  }

  console.log(`\n✅ Cleanup complete!`);
  console.log(`   Deleted ${deletedCount} orphan file records`);
}

// Run the cleanup
cleanupOrphanFiles()
  .then(() => {
    console.log("\nScript completed successfully");
    process.exit(0);
  })
  .catch((error) => {
    console.error("\n❌ Script failed:", error);
    process.exit(1);
  });
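
The script types its provider against the `StorageProvider` interface from `../types/storage`, which this diff does not show. For the cleanup logic above, the only member it actually relies on is `fileExists`; here is a hypothetical minimal sketch of that contract (the real interface presumably declares the other provider methods as well):

```ts
// Hypothetical minimal shape of the contract used by cleanup-orphan-files.ts;
// the real interface in apps/server/src/types/storage likely also covers
// presigned URLs, deletion, streaming, and multipart operations.
interface StorageProvider {
  fileExists(objectName: string): Promise<boolean>;
}
```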

305  apps/server/src/scripts/migrate-filesystem-to-s3.ts  Normal file
@@ -0,0 +1,305 @@
/**
 * Automatic Migration Script: Filesystem → S3 (Garage)
 *
 * This script runs automatically on server start and:
 * 1. Detects existing filesystem files
 * 2. Migrates them to S3 in background
 * 3. Updates database references
 * 4. Keeps filesystem as fallback during migration
 * 5. Zero downtime, zero user intervention
 */

import { createReadStream } from "fs";
import * as fs from "fs/promises";
import * as path from "path";
import { PutObjectCommand } from "@aws-sdk/client-s3";

import { directoriesConfig } from "../config/directories.config";
import { bucketName, s3Client } from "../config/storage.config";
import { prisma } from "../shared/prisma";

interface MigrationStats {
  totalFiles: number;
  migratedFiles: number;
  failedFiles: number;
  skippedFiles: number;
  totalSizeBytes: number;
  startTime: number;
  endTime?: number;
}

const MIGRATION_STATE_FILE = path.join(directoriesConfig.uploads, ".migration-state.json");
const MIGRATION_BATCH_SIZE = 10; // Migrate 10 files at a time
const MIGRATION_DELAY_MS = 100; // Small delay between batches to avoid overwhelming

export class FilesystemToS3Migrator {
  private stats: MigrationStats = {
    totalFiles: 0,
    migratedFiles: 0,
    failedFiles: 0,
    skippedFiles: 0,
    totalSizeBytes: 0,
    startTime: Date.now(),
  };

  /**
   * Check if migration is needed and should run
   */
  async shouldMigrate(): Promise<boolean> {
    // Only migrate if S3 client is available
    if (!s3Client) {
      console.log("[MIGRATION] S3 not configured, skipping migration");
      return false;
    }

    // Check if migration already completed
    try {
      const stateExists = await fs
        .access(MIGRATION_STATE_FILE)
        .then(() => true)
        .catch(() => false);

      if (stateExists) {
        const state = JSON.parse(await fs.readFile(MIGRATION_STATE_FILE, "utf-8"));

        if (state.completed) {
          console.log("[MIGRATION] Migration already completed");
          return false;
        }

        console.log("[MIGRATION] Previous migration incomplete, resuming...");
        this.stats = { ...state, startTime: Date.now() };
        return true;
      }
    } catch (error) {
      console.warn("[MIGRATION] Could not read migration state:", error);
    }

    // Check if there are files to migrate
    try {
      const uploadsDir = directoriesConfig.uploads;
      const files = await this.scanDirectory(uploadsDir);

      if (files.length === 0) {
        console.log("[MIGRATION] No filesystem files found, nothing to migrate");
        await this.markMigrationComplete();
        return false;
      }

      console.log(`[MIGRATION] Found ${files.length} files to migrate`);
      this.stats.totalFiles = files.length;
      return true;
    } catch (error) {
      console.error("[MIGRATION] Error scanning files:", error);
      return false;
    }
  }

  /**
   * Run the migration process
   */
  async migrate(): Promise<void> {
    console.log("[MIGRATION] Starting automatic filesystem → S3 migration");
    console.log("[MIGRATION] This runs in background, zero downtime");

    try {
      const uploadsDir = directoriesConfig.uploads;
      const files = await this.scanDirectory(uploadsDir);

      // Process in batches
      for (let i = 0; i < files.length; i += MIGRATION_BATCH_SIZE) {
        const batch = files.slice(i, i + MIGRATION_BATCH_SIZE);

        await Promise.all(
          batch.map((file) =>
            this.migrateFile(file).catch((error) => {
              console.error(`[MIGRATION] Failed to migrate ${file}:`, error);
              this.stats.failedFiles++;
            })
          )
        );

        // Save progress
        await this.saveState();

        // Small delay between batches
        if (i + MIGRATION_BATCH_SIZE < files.length) {
          await new Promise((resolve) => setTimeout(resolve, MIGRATION_DELAY_MS));
        }

        // Log progress
        const progress = Math.round(((i + batch.length) / files.length) * 100);
        console.log(`[MIGRATION] Progress: ${progress}% (${this.stats.migratedFiles}/${files.length})`);
      }

      this.stats.endTime = Date.now();
      await this.markMigrationComplete();

      const durationSeconds = Math.round((this.stats.endTime - this.stats.startTime) / 1000);
      const sizeMB = Math.round(this.stats.totalSizeBytes / 1024 / 1024);

      console.log("[MIGRATION] ✓✓✓ Migration completed successfully!");
      console.log(`[MIGRATION] Stats:`);
      console.log(`  - Total files: ${this.stats.totalFiles}`);
      console.log(`  - Migrated: ${this.stats.migratedFiles}`);
      console.log(`  - Failed: ${this.stats.failedFiles}`);
      console.log(`  - Skipped: ${this.stats.skippedFiles}`);
      console.log(`  - Total size: ${sizeMB}MB`);
      console.log(`  - Duration: ${durationSeconds}s`);
    } catch (error) {
      console.error("[MIGRATION] Migration failed:", error);
      await this.saveState();
      throw error;
    }
  }

  /**
   * Scan directory recursively for files
   */
  private async scanDirectory(dir: string, baseDir: string = dir): Promise<string[]> {
    const files: string[] = [];

    try {
      const entries = await fs.readdir(dir, { withFileTypes: true });

      for (const entry of entries) {
        const fullPath = path.join(dir, entry.name);

        // Skip special files and directories
        if (entry.name.startsWith(".") || entry.name === "temp-uploads") {
          continue;
        }

        if (entry.isDirectory()) {
          const subFiles = await this.scanDirectory(fullPath, baseDir);
          files.push(...subFiles);
        } else if (entry.isFile()) {
          // Get relative path for S3 key
          const relativePath = path.relative(baseDir, fullPath);
          files.push(relativePath);
        }
      }
    } catch (error) {
      console.warn(`[MIGRATION] Could not scan directory ${dir}:`, error);
    }

    return files;
  }

  /**
   * Migrate a single file to S3
   */
  private async migrateFile(relativeFilePath: string): Promise<void> {
    const fullPath = path.join(directoriesConfig.uploads, relativeFilePath);

    try {
      // Check if file still exists
      const stats = await fs.stat(fullPath);

      if (!stats.isFile()) {
        this.stats.skippedFiles++;
        return;
      }

      // S3 object name (preserve directory structure)
      const objectName = relativeFilePath.replace(/\\/g, "/");

      // Check if already exists in S3
      if (s3Client) {
        try {
          const { HeadObjectCommand } = await import("@aws-sdk/client-s3");
          await s3Client.send(
            new HeadObjectCommand({
              Bucket: bucketName,
              Key: objectName,
            })
          );

          // Already exists in S3, skip
          console.log(`[MIGRATION] Already in S3: ${objectName}`);
          this.stats.skippedFiles++;
          return;
        } catch (error: any) {
          // Not found, proceed with migration
          if (error.$metadata?.httpStatusCode !== 404) {
            throw error;
          }
        }
      }

      // Upload to S3
      if (s3Client) {
        const fileStream = createReadStream(fullPath);

        await s3Client.send(
          new PutObjectCommand({
            Bucket: bucketName,
            Key: objectName,
            Body: fileStream,
          })
        );

        this.stats.migratedFiles++;
        this.stats.totalSizeBytes += stats.size;

        console.log(`[MIGRATION] ✓ Migrated: ${objectName} (${Math.round(stats.size / 1024)}KB)`);

        // Delete filesystem file after successful migration to free up space
        try {
          await fs.unlink(fullPath);
          console.log(`[MIGRATION] 🗑️ Deleted from filesystem: ${relativeFilePath}`);
        } catch (unlinkError) {
          console.warn(`[MIGRATION] Warning: Could not delete ${relativeFilePath}:`, unlinkError);
        }
      }
    } catch (error) {
      console.error(`[MIGRATION] Failed to migrate ${relativeFilePath}:`, error);
      this.stats.failedFiles++;
      throw error;
    }
  }

  /**
   * Save migration state
   */
  private async saveState(): Promise<void> {
    try {
      await fs.writeFile(MIGRATION_STATE_FILE, JSON.stringify({ ...this.stats, completed: false }, null, 2));
    } catch (error) {
      console.warn("[MIGRATION] Could not save state:", error);
    }
  }

  /**
   * Mark migration as complete
   */
  private async markMigrationComplete(): Promise<void> {
    try {
      await fs.writeFile(MIGRATION_STATE_FILE, JSON.stringify({ ...this.stats, completed: true }, null, 2));
      console.log("[MIGRATION] Migration marked as complete");
    } catch (error) {
      console.warn("[MIGRATION] Could not mark migration complete:", error);
    }
  }
}

/**
 * Auto-run migration on import (called by server.ts)
 */
export async function runAutoMigration(): Promise<void> {
  const migrator = new FilesystemToS3Migrator();

  if (await migrator.shouldMigrate()) {
    // Run in background, don't block server start
    setTimeout(async () => {
      try {
        await migrator.migrate();
      } catch (error) {
        console.error("[MIGRATION] Auto-migration failed:", error);
        console.log("[MIGRATION] Will retry on next server restart");
      }
    }, 5000); // Start after 5 seconds

    console.log("[MIGRATION] Background migration scheduled");
  }
}
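
`runAutoMigration` is documented as being called from `server.ts`, which is not part of this diff. A minimal sketch of what that wiring could look like, assuming a Fastify bootstrap; the file layout, port, and logger options are illustrative:

```ts
import Fastify from "fastify";

import { runAutoMigration } from "./scripts/migrate-filesystem-to-s3";

async function start(): Promise<void> {
  const app = Fastify({ logger: true });

  await app.listen({ port: 3333, host: "0.0.0.0" }); // port is an assumption

  // Resolves quickly: shouldMigrate() only runs cheap checks, and the actual
  // migration is scheduled ~5 seconds later via setTimeout, so a slow or
  // failing migration never blocks server startup.
  await runAutoMigration();
}

start().catch((error) => {
  console.error(error);
  process.exit(1);
});
```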
Some files were not shown because too many files have changed in this diff.