Mirror of https://github.com/kyantech/Palmr.git (synced 2025-10-23 06:11:58 +00:00)

Compare commits (6 commits on branch copilot/ad…):

- f63105c5eb
- 5fe6434027
- 94e021d8c6
- f3aeaf66df
- 331624e2f2
- ba512ebe95
@@ -1,388 +0,0 @@
# File Expiration Feature - Migration Guide

This guide helps you migrate to the new file expiration feature introduced in Palmr v3.2.5-beta.

## What's New

The file expiration feature allows files to have an optional expiration date. When files expire, they can be automatically deleted by a maintenance script, helping with:

- **Security**: Reducing risk of confidential data exposure
- **Storage Management**: Automatically freeing up server space
- **Convenience**: Eliminating the need for manual file deletion
- **Legal Compliance**: Facilitating adherence to data retention regulations (e.g., GDPR)

## Database Changes

A new optional `expiration` field has been added to the `File` model:

```prisma
model File {
  // ... existing fields
  expiration DateTime? // NEW: Optional expiration date
  // ... existing fields
}
```
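
Once the Prisma client has been regenerated (`pnpm prisma generate`), the field shows up on file writes. A minimal sketch of creating an expiring file record, assuming the generated client and an existing user ID (the literal field values here are placeholders):

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Create a file record that expires in 7 days.
// Omit `expiration` (or pass null) for a file that never expires.
async function registerTemporaryFile(userId: string) {
  const expiresAt = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000);

  return prisma.file.create({
    data: {
      name: "report.pdf",
      extension: "pdf",
      size: BigInt(1024),
      objectName: `${userId}/report.pdf`,
      userId,
      expiration: expiresAt, // the new optional field
    },
  });
}
```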

## Migration Steps

### 1. Backup Your Database

Before running the migration, **always back up your database**:

```bash
# For SQLite (default)
cp apps/server/prisma/palmr.db apps/server/prisma/palmr.db.backup

# Or use the built-in backup command if available
pnpm db:backup
```

### 2. Run the Migration

The migration will run automatically when you start the server, or you can run it manually:

```bash
cd apps/server
pnpm prisma migrate deploy
```

This adds the `expiration` column to the `files` table. **All existing files will have a `null` expiration (never expire).**

### 3. Verify the Migration

Check that the migration was successful:

```bash
cd apps/server
pnpm prisma studio
```

Look at the `files` table and verify that the new `expiration` column exists.
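
If you prefer a scripted check over Prisma Studio, the sketch below (an assumption-laden one-off: it expects the default SQLite database and the generated Prisma client) inspects the table definition directly:

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// PRAGMA table_info returns one row per column of the "files" table;
// we only check that an "expiration" column is present.
async function verifyExpirationColumn() {
  const columns = await prisma.$queryRawUnsafe<{ name: string }[]>(`PRAGMA table_info("files");`);
  const hasExpiration = columns.some((column) => column.name === "expiration");
  console.log(hasExpiration ? "expiration column present" : "expiration column missing");
}

verifyExpirationColumn().finally(() => prisma.$disconnect());
```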

## API Changes

### File Registration (Upload)

**Before:**
```json
{
  "name": "document.pdf",
  "description": "My document",
  "extension": "pdf",
  "size": 1024000,
  "objectName": "user123/document.pdf"
}
```

**After (optional expiration):**
```json
{
  "name": "document.pdf",
  "description": "My document",
  "extension": "pdf",
  "size": 1024000,
  "objectName": "user123/document.pdf",
  "expiration": "2025-12-31T23:59:59.000Z"
}
```

The `expiration` field is **optional**: omitting it or setting it to `null` means the file never expires.

### File Update

You can now update a file's expiration date:

```bash
PATCH /files/:id
Content-Type: application/json

{
  "expiration": "2026-01-31T23:59:59.000Z"
}
```

To remove expiration:
```json
{
  "expiration": null
}
```

### File Listing

File list responses now include the `expiration` field:

```json
{
  "files": [
    {
      "id": "file123",
      "name": "document.pdf",
      // ... other fields
      "expiration": "2025-12-31T23:59:59.000Z",
      "createdAt": "2025-10-21T10:00:00.000Z",
      "updatedAt": "2025-10-21T10:00:00.000Z"
    }
  ]
}
```
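
On the client side, the listing can be used to show how long each file has left. A small sketch (it assumes the `/api/files` listing endpoint used in the examples later in this guide; adjust the URL to your deployment):

```typescript
type ListedFile = {
  id: string;
  name: string;
  expiration: string | null;
};

// Summarize the remaining lifetime of each file,
// treating a null expiration as "never expires".
async function describeExpirations(): Promise<string[]> {
  const response = await fetch("/api/files");
  const { files } = (await response.json()) as { files: ListedFile[] };

  return files.map((file) => {
    if (!file.expiration) return `${file.name}: never expires`;
    const msLeft = new Date(file.expiration).getTime() - Date.now();
    const daysLeft = Math.ceil(msLeft / (24 * 60 * 60 * 1000));
    return daysLeft > 0 ? `${file.name}: expires in ${daysLeft} day(s)` : `${file.name}: expired`;
  });
}
```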

## Setting Up Automatic Cleanup

The file expiration feature includes a maintenance script that automatically deletes expired files.

### Manual Execution

**Dry-run mode** (preview what would be deleted):
```bash
cd apps/server
pnpm cleanup:expired-files
```

**Confirm mode** (actually delete):
```bash
cd apps/server
pnpm cleanup:expired-files:confirm
```

### Automated Scheduling

#### Option 1: Cron Job (Recommended for Linux/Unix)

Add to crontab to run daily at 2 AM:

```bash
crontab -e
```

Add this line:
```
0 2 * * * cd /path/to/Palmr/apps/server && /usr/bin/pnpm cleanup:expired-files:confirm >> /var/log/palmr-cleanup.log 2>&1
```

#### Option 2: Systemd Timer (Linux)

Create `/etc/systemd/system/palmr-cleanup.service`:
```ini
[Unit]
Description=Palmr Expired Files Cleanup
After=network.target

[Service]
Type=oneshot
User=palmr
WorkingDirectory=/path/to/Palmr/apps/server
ExecStart=/usr/bin/pnpm cleanup:expired-files:confirm
StandardOutput=journal
StandardError=journal
```

Create `/etc/systemd/system/palmr-cleanup.timer`:
```ini
[Unit]
Description=Daily Palmr Cleanup
Requires=palmr-cleanup.service

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable and start the timer:
```bash
sudo systemctl enable palmr-cleanup.timer
sudo systemctl start palmr-cleanup.timer
```

#### Option 3: Docker Compose

Add a scheduled service to your `docker-compose.yml`:

```yaml
services:
  palmr-cleanup:
    image: palmr:latest
    command: sh -c "while true; do sleep 86400; pnpm cleanup:expired-files:confirm; done"
    environment:
      - DATABASE_URL=file:/data/palmr.db
    volumes:
      - ./data:/data
      - ./uploads:/uploads
    restart: unless-stopped
```

Or use an external scheduler with a one-shot container:
```yaml
services:
  palmr-cleanup:
    image: palmr:latest
    command: pnpm cleanup:expired-files:confirm
    environment:
      - DATABASE_URL=file:/data/palmr.db
    volumes:
      - ./data:/data
      - ./uploads:/uploads
    restart: "no"
```

## Backward Compatibility

This feature is **fully backward compatible**:

- Existing files automatically have `expiration = null` (never expire)
- The `expiration` field is optional in all API endpoints
- No changes required to existing client code
- Files without expiration dates continue to work exactly as before

## Client Implementation Examples

### JavaScript/TypeScript

```typescript
// Upload file with expiration
const uploadWithExpiration = async (file: File) => {
  // Set expiration to 30 days from now
  const expiration = new Date();
  expiration.setDate(expiration.getDate() + 30);

  const response = await fetch('/api/files', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      name: file.name,
      extension: file.name.split('.').pop(),
      size: file.size,
      objectName: `user/${Date.now()}-${file.name}`,
      expiration: expiration.toISOString(),
    }),
  });

  return response.json();
};

// Update file expiration
const updateExpiration = async (fileId: string, days: number) => {
  const expiration = new Date();
  expiration.setDate(expiration.getDate() + days);

  const response = await fetch(`/api/files/${fileId}`, {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      expiration: expiration.toISOString(),
    }),
  });

  return response.json();
};

// Remove expiration (make file permanent)
const removeExpiration = async (fileId: string) => {
  const response = await fetch(`/api/files/${fileId}`, {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      expiration: null,
    }),
  });

  return response.json();
};
```

### Python

```python
from datetime import datetime, timedelta
import requests

# Upload file with expiration
def upload_with_expiration(file_data):
    expiration = datetime.utcnow() + timedelta(days=30)

    response = requests.post('http://localhost:3333/files', json={
        'name': file_data['name'],
        'extension': file_data['extension'],
        'size': file_data['size'],
        'objectName': file_data['objectName'],
        'expiration': expiration.isoformat() + 'Z'
    })

    return response.json()

# Update expiration
def update_expiration(file_id, days):
    expiration = datetime.utcnow() + timedelta(days=days)

    response = requests.patch(f'http://localhost:3333/files/{file_id}', json={
        'expiration': expiration.isoformat() + 'Z'
    })

    return response.json()
```

## Best Practices

1. **Start with dry-run**: Always test the cleanup script in dry-run mode first
2. **Monitor logs**: Keep track of what files are being deleted
3. **User notifications**: Consider notifying users before their files expire (see the sketch after this list)
4. **Grace period**: Set expiration dates with a buffer for important files
5. **Backup strategy**: Maintain backups before enabling automatic deletion
6. **Documentation**: Document your expiration policies for users
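
One way to implement the notification idea from point 3 is a small scheduled job that looks for files expiring soon. A sketch, assuming the generated Prisma client; `sendExpirationWarning` is a hypothetical stand-in for your own mailer or notification system:

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Hypothetical notifier; replace with your own email/notification integration.
async function sendExpirationWarning(userId: string, fileName: string, expiresAt: Date) {
  console.log(`[notify] user=${userId} file=${fileName} expires=${expiresAt.toISOString()}`);
}

// Warn owners about files that will expire within the next 7 days.
async function notifyUpcomingExpirations() {
  const now = new Date();
  const inSevenDays = new Date(now.getTime() + 7 * 24 * 60 * 60 * 1000);

  const expiringSoon = await prisma.file.findMany({
    where: { expiration: { gt: now, lte: inSevenDays } },
    select: { name: true, userId: true, expiration: true },
  });

  for (const file of expiringSoon) {
    await sendExpirationWarning(file.userId, file.name, file.expiration!);
  }
}

notifyUpcomingExpirations().finally(() => prisma.$disconnect());
```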

## Troubleshooting

### Migration Fails

If the migration fails:

1. Check database connectivity
2. Ensure you have write permissions
3. Verify the database file isn't locked
4. Try running `pnpm prisma migrate reset` (WARNING: this will delete all data)

### Cleanup Script Not Deleting Files

1. Verify files have expiration dates set and that those dates are in the past (see the sketch below)
2. Check that the script is running with the `--confirm` flag
3. Review logs for specific errors
4. Ensure the script has permissions to delete from storage
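
A quick way to check point 1 is to count how many files are actually past their expiration date. A sketch using the generated Prisma client (run it from `apps/server`, for example with `tsx`):

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Counts files whose expiration date is already in the past.
// If this prints 0, the cleanup script has nothing to delete.
async function countExpiredFiles() {
  const expired = await prisma.file.count({
    where: { expiration: { lte: new Date() } },
  });
  console.log(`${expired} file(s) are past their expiration date`);
}

countExpiredFiles().finally(() => prisma.$disconnect());
```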

### Need to Rollback

If you need to roll back the migration:

```bash
cd apps/server

# View migration history
pnpm prisma migrate status

# Rollback (requires manual SQL for production)
# SQLite example:
sqlite3 prisma/palmr.db "ALTER TABLE files DROP COLUMN expiration;"
```

Note: Prisma doesn't support automatic rollback. You must manually reverse the migration or restore from a backup.

## Support

For issues or questions:

- Create an issue on GitHub
- Check the documentation at https://palmr.kyantech.com.br
- Review the scripts README at `apps/server/src/scripts/README.md`

## Changelog

### Version 3.2.5-beta

- Added optional `expiration` field to the File model
- Created `cleanup-expired-files` maintenance script
- Updated File DTOs to support expiration in create/update operations
- Added API documentation for the expiration field
- Created comprehensive documentation for setup and usage
@@ -27,9 +27,7 @@
    "validate": "pnpm lint && pnpm type-check",
    "db:seed": "ts-node prisma/seed.js",
    "cleanup:orphan-files": "tsx src/scripts/cleanup-orphan-files.ts",
    "cleanup:orphan-files:confirm": "tsx src/scripts/cleanup-orphan-files.ts --confirm",
    "cleanup:expired-files": "tsx src/scripts/cleanup-expired-files.ts",
    "cleanup:expired-files:confirm": "tsx src/scripts/cleanup-expired-files.ts --confirm"
    "cleanup:orphan-files:confirm": "tsx src/scripts/cleanup-orphan-files.ts --confirm"
  },
  "prisma": {
    "seed": "node prisma/seed.js"

@@ -80,14 +78,5 @@
    "ts-node": "^10.9.2",
    "tsx": "^4.19.2",
    "typescript": "^5.7.3"
  },
  "pnpm": {
    "onlyBuiltDependencies": [
      "@prisma/client",
      "@prisma/engines",
      "esbuild",
      "prisma",
      "sharp"
    ]
  }
}
@@ -1,304 +0,0 @@
-- CreateTable
CREATE TABLE "users" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "firstName" TEXT NOT NULL,
    "lastName" TEXT NOT NULL,
    "username" TEXT NOT NULL,
    "email" TEXT NOT NULL,
    "password" TEXT,
    "image" TEXT,
    "isAdmin" BOOLEAN NOT NULL DEFAULT false,
    "isActive" BOOLEAN NOT NULL DEFAULT true,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL,
    "twoFactorEnabled" BOOLEAN NOT NULL DEFAULT false,
    "twoFactorSecret" TEXT,
    "twoFactorBackupCodes" TEXT,
    "twoFactorVerified" BOOLEAN NOT NULL DEFAULT false
);

-- CreateTable
CREATE TABLE "files" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "name" TEXT NOT NULL,
    "description" TEXT,
    "extension" TEXT NOT NULL,
    "size" BIGINT NOT NULL,
    "objectName" TEXT NOT NULL,
    "expiration" DATETIME,
    "userId" TEXT NOT NULL,
    "folderId" TEXT,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL,
    CONSTRAINT "files_userId_fkey" FOREIGN KEY ("userId") REFERENCES "users" ("id") ON DELETE CASCADE ON UPDATE CASCADE,
    CONSTRAINT "files_folderId_fkey" FOREIGN KEY ("folderId") REFERENCES "folders" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

-- CreateTable
CREATE TABLE "shares" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "name" TEXT,
    "views" INTEGER NOT NULL DEFAULT 0,
    "expiration" DATETIME,
    "description" TEXT,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL,
    "creatorId" TEXT,
    "securityId" TEXT NOT NULL,
    CONSTRAINT "shares_creatorId_fkey" FOREIGN KEY ("creatorId") REFERENCES "users" ("id") ON DELETE SET NULL ON UPDATE CASCADE,
    CONSTRAINT "shares_securityId_fkey" FOREIGN KEY ("securityId") REFERENCES "share_security" ("id") ON DELETE RESTRICT ON UPDATE CASCADE
);

-- CreateTable
CREATE TABLE "share_security" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "password" TEXT,
    "maxViews" INTEGER,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL
);

-- CreateTable
CREATE TABLE "share_recipients" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "email" TEXT NOT NULL,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL,
    "shareId" TEXT NOT NULL,
    CONSTRAINT "share_recipients_shareId_fkey" FOREIGN KEY ("shareId") REFERENCES "shares" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

-- CreateTable
CREATE TABLE "app_configs" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "key" TEXT NOT NULL,
    "value" TEXT NOT NULL,
    "type" TEXT NOT NULL,
    "group" TEXT NOT NULL,
    "isSystem" BOOLEAN NOT NULL DEFAULT true,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL
);

-- CreateTable
CREATE TABLE "login_attempts" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "userId" TEXT NOT NULL,
    "attempts" INTEGER NOT NULL DEFAULT 1,
    "lastAttempt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    CONSTRAINT "login_attempts_userId_fkey" FOREIGN KEY ("userId") REFERENCES "users" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

-- CreateTable
CREATE TABLE "password_resets" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "userId" TEXT NOT NULL,
    "token" TEXT NOT NULL,
    "expiresAt" DATETIME NOT NULL,
    "used" BOOLEAN NOT NULL DEFAULT false,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL,
    CONSTRAINT "password_resets_userId_fkey" FOREIGN KEY ("userId") REFERENCES "users" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

-- CreateTable
CREATE TABLE "share_aliases" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "alias" TEXT NOT NULL,
    "shareId" TEXT NOT NULL,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL,
    CONSTRAINT "share_aliases_shareId_fkey" FOREIGN KEY ("shareId") REFERENCES "shares" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

-- CreateTable
CREATE TABLE "auth_providers" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "name" TEXT NOT NULL,
    "displayName" TEXT NOT NULL,
    "type" TEXT NOT NULL,
    "icon" TEXT,
    "enabled" BOOLEAN NOT NULL DEFAULT false,
    "issuerUrl" TEXT,
    "clientId" TEXT,
    "clientSecret" TEXT,
    "redirectUri" TEXT,
    "scope" TEXT DEFAULT 'openid profile email',
    "authorizationEndpoint" TEXT,
    "tokenEndpoint" TEXT,
    "userInfoEndpoint" TEXT,
    "metadata" TEXT,
    "autoRegister" BOOLEAN NOT NULL DEFAULT true,
    "adminEmailDomains" TEXT,
    "sortOrder" INTEGER NOT NULL DEFAULT 0,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL
);

-- CreateTable
CREATE TABLE "user_auth_providers" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "userId" TEXT NOT NULL,
    "providerId" TEXT NOT NULL,
    "provider" TEXT,
    "externalId" TEXT NOT NULL,
    "metadata" TEXT,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL,
    CONSTRAINT "user_auth_providers_userId_fkey" FOREIGN KEY ("userId") REFERENCES "users" ("id") ON DELETE CASCADE ON UPDATE CASCADE,
    CONSTRAINT "user_auth_providers_providerId_fkey" FOREIGN KEY ("providerId") REFERENCES "auth_providers" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

-- CreateTable
CREATE TABLE "reverse_shares" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "name" TEXT,
    "description" TEXT,
    "expiration" DATETIME,
    "maxFiles" INTEGER,
    "maxFileSize" BIGINT,
    "allowedFileTypes" TEXT,
    "password" TEXT,
    "pageLayout" TEXT NOT NULL DEFAULT 'DEFAULT',
    "isActive" BOOLEAN NOT NULL DEFAULT true,
    "nameFieldRequired" TEXT NOT NULL DEFAULT 'OPTIONAL',
    "emailFieldRequired" TEXT NOT NULL DEFAULT 'OPTIONAL',
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL,
    "creatorId" TEXT NOT NULL,
    CONSTRAINT "reverse_shares_creatorId_fkey" FOREIGN KEY ("creatorId") REFERENCES "users" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

-- CreateTable
CREATE TABLE "reverse_share_files" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "name" TEXT NOT NULL,
    "description" TEXT,
    "extension" TEXT NOT NULL,
    "size" BIGINT NOT NULL,
    "objectName" TEXT NOT NULL,
    "uploaderEmail" TEXT,
    "uploaderName" TEXT,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL,
    "reverseShareId" TEXT NOT NULL,
    CONSTRAINT "reverse_share_files_reverseShareId_fkey" FOREIGN KEY ("reverseShareId") REFERENCES "reverse_shares" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

-- CreateTable
CREATE TABLE "reverse_share_aliases" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "alias" TEXT NOT NULL,
    "reverseShareId" TEXT NOT NULL,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL,
    CONSTRAINT "reverse_share_aliases_reverseShareId_fkey" FOREIGN KEY ("reverseShareId") REFERENCES "reverse_shares" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

-- CreateTable
CREATE TABLE "trusted_devices" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "userId" TEXT NOT NULL,
    "deviceHash" TEXT NOT NULL,
    "deviceName" TEXT,
    "userAgent" TEXT,
    "ipAddress" TEXT,
    "lastUsedAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "expiresAt" DATETIME NOT NULL,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL,
    CONSTRAINT "trusted_devices_userId_fkey" FOREIGN KEY ("userId") REFERENCES "users" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

-- CreateTable
CREATE TABLE "folders" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "name" TEXT NOT NULL,
    "description" TEXT,
    "objectName" TEXT NOT NULL,
    "parentId" TEXT,
    "userId" TEXT NOT NULL,
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL,
    CONSTRAINT "folders_parentId_fkey" FOREIGN KEY ("parentId") REFERENCES "folders" ("id") ON DELETE CASCADE ON UPDATE CASCADE,
    CONSTRAINT "folders_userId_fkey" FOREIGN KEY ("userId") REFERENCES "users" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

-- CreateTable
CREATE TABLE "_ShareFiles" (
    "A" TEXT NOT NULL,
    "B" TEXT NOT NULL,
    CONSTRAINT "_ShareFiles_A_fkey" FOREIGN KEY ("A") REFERENCES "files" ("id") ON DELETE CASCADE ON UPDATE CASCADE,
    CONSTRAINT "_ShareFiles_B_fkey" FOREIGN KEY ("B") REFERENCES "shares" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

-- CreateTable
CREATE TABLE "_ShareFolders" (
    "A" TEXT NOT NULL,
    "B" TEXT NOT NULL,
    CONSTRAINT "_ShareFolders_A_fkey" FOREIGN KEY ("A") REFERENCES "folders" ("id") ON DELETE CASCADE ON UPDATE CASCADE,
    CONSTRAINT "_ShareFolders_B_fkey" FOREIGN KEY ("B") REFERENCES "shares" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

-- CreateIndex
CREATE UNIQUE INDEX "users_username_key" ON "users"("username");

-- CreateIndex
CREATE UNIQUE INDEX "users_email_key" ON "users"("email");

-- CreateIndex
CREATE INDEX "files_folderId_idx" ON "files"("folderId");

-- CreateIndex
CREATE UNIQUE INDEX "shares_securityId_key" ON "shares"("securityId");

-- CreateIndex
CREATE UNIQUE INDEX "app_configs_key_key" ON "app_configs"("key");

-- CreateIndex
CREATE UNIQUE INDEX "login_attempts_userId_key" ON "login_attempts"("userId");

-- CreateIndex
CREATE UNIQUE INDEX "password_resets_token_key" ON "password_resets"("token");

-- CreateIndex
CREATE UNIQUE INDEX "share_aliases_alias_key" ON "share_aliases"("alias");

-- CreateIndex
CREATE UNIQUE INDEX "share_aliases_shareId_key" ON "share_aliases"("shareId");

-- CreateIndex
CREATE UNIQUE INDEX "auth_providers_name_key" ON "auth_providers"("name");

-- CreateIndex
CREATE UNIQUE INDEX "user_auth_providers_userId_providerId_key" ON "user_auth_providers"("userId", "providerId");

-- CreateIndex
CREATE UNIQUE INDEX "user_auth_providers_providerId_externalId_key" ON "user_auth_providers"("providerId", "externalId");

-- CreateIndex
CREATE UNIQUE INDEX "reverse_share_aliases_alias_key" ON "reverse_share_aliases"("alias");

-- CreateIndex
CREATE UNIQUE INDEX "reverse_share_aliases_reverseShareId_key" ON "reverse_share_aliases"("reverseShareId");

-- CreateIndex
CREATE UNIQUE INDEX "trusted_devices_deviceHash_key" ON "trusted_devices"("deviceHash");

-- CreateIndex
CREATE INDEX "folders_userId_idx" ON "folders"("userId");

-- CreateIndex
CREATE INDEX "folders_parentId_idx" ON "folders"("parentId");

-- CreateIndex
CREATE UNIQUE INDEX "_ShareFiles_AB_unique" ON "_ShareFiles"("A", "B");

-- CreateIndex
CREATE INDEX "_ShareFiles_B_index" ON "_ShareFiles"("B");

-- CreateIndex
CREATE UNIQUE INDEX "_ShareFolders_AB_unique" ON "_ShareFolders"("A", "B");

-- CreateIndex
CREATE INDEX "_ShareFolders_B_index" ON "_ShareFolders"("B");
@@ -1,3 +0,0 @@
# Please do not edit this file manually
# It should be added in your version-control system (e.g., Git)
provider = "sqlite"
@@ -40,13 +40,12 @@ model User {
}

model File {
  id          String    @id @default(cuid())
  id          String    @id @default(cuid())
  name        String
  description String?
  extension   String
  size        BigInt
  objectName  String
  expiration  DateTime?

  userId String
  user   User   @relation(fields: [userId], references: [id], onDelete: Cascade)
@@ -113,7 +113,6 @@ export class FileController {
        objectName: input.objectName,
        userId,
        folderId: input.folderId,
        expiration: input.expiration ? new Date(input.expiration) : null,
      },
    });

@@ -126,7 +125,6 @@ export class FileController {
      objectName: fileRecord.objectName,
      userId: fileRecord.userId,
      folderId: fileRecord.folderId,
      expiration: fileRecord.expiration?.toISOString() || null,
      createdAt: fileRecord.createdAt,
      updatedAt: fileRecord.updatedAt,
    };

@@ -431,11 +429,6 @@ export class FileController {
      userId: file.userId,
      folderId: file.folderId,
      relativePath: file.relativePath || null,
      expiration: file.expiration
        ? file.expiration instanceof Date
          ? file.expiration.toISOString()
          : file.expiration
        : null,
      createdAt: file.createdAt,
      updatedAt: file.updatedAt,
    }));

@@ -509,14 +502,7 @@ export class FileController {

    const updatedFile = await prisma.file.update({
      where: { id },
      data: {
        ...updateData,
        expiration: updateData.expiration
          ? new Date(updateData.expiration)
          : updateData.expiration === null
            ? null
            : undefined,
      },
      data: updateData,
    });

    const fileResponse = {

@@ -528,7 +514,6 @@ export class FileController {
      objectName: updatedFile.objectName,
      userId: updatedFile.userId,
      folderId: updatedFile.folderId,
      expiration: updatedFile.expiration?.toISOString() || null,
      createdAt: updatedFile.createdAt,
      updatedAt: updatedFile.updatedAt,
    };

@@ -586,7 +571,6 @@ export class FileController {
      objectName: updatedFile.objectName,
      userId: updatedFile.userId,
      folderId: updatedFile.folderId,
      expiration: updatedFile.expiration?.toISOString() || null,
      createdAt: updatedFile.createdAt,
      updatedAt: updatedFile.updatedAt,
    };

@@ -10,7 +10,6 @@ export const RegisterFileSchema = z.object({
  }),
  objectName: z.string().min(1, "O objectName é obrigatório"),
  folderId: z.string().optional(),
  expiration: z.string().datetime().optional(),
});

export const CheckFileSchema = z.object({

@@ -23,7 +22,6 @@ export const CheckFileSchema = z.object({
  }),
  objectName: z.string().min(1, "O objectName é obrigatório"),
  folderId: z.string().optional(),
  expiration: z.string().datetime().optional(),
});

export type RegisterFileInput = z.infer<typeof RegisterFileSchema>;

@@ -32,7 +30,6 @@ export type CheckFileInput = z.infer<typeof CheckFileSchema>;
export const UpdateFileSchema = z.object({
  name: z.string().optional().describe("The file name"),
  description: z.string().optional().nullable().describe("The file description"),
  expiration: z.string().datetime().optional().nullable().describe("The file expiration date"),
});

export const MoveFileSchema = z.object({

@@ -63,7 +63,6 @@ export async function fileRoutes(app: FastifyInstance) {
            objectName: z.string().describe("The object name of the file"),
            userId: z.string().describe("The user ID"),
            folderId: z.string().nullable().describe("The folder ID"),
            expiration: z.string().nullable().describe("The file expiration date"),
            createdAt: z.date().describe("The file creation date"),
            updatedAt: z.date().describe("The file last update date"),
          }),

@@ -195,7 +194,6 @@ export async function fileRoutes(app: FastifyInstance) {
            userId: z.string().describe("The user ID"),
            folderId: z.string().nullable().describe("The folder ID"),
            relativePath: z.string().nullable().describe("The relative path (only for recursive listing)"),
            expiration: z.string().nullable().describe("The file expiration date"),
            createdAt: z.date().describe("The file creation date"),
            updatedAt: z.date().describe("The file last update date"),
          })

@@ -232,7 +230,6 @@ export async function fileRoutes(app: FastifyInstance) {
            objectName: z.string().describe("The object name of the file"),
            userId: z.string().describe("The user ID"),
            folderId: z.string().nullable().describe("The folder ID"),
            expiration: z.string().nullable().describe("The file expiration date"),
            createdAt: z.date().describe("The file creation date"),
            updatedAt: z.date().describe("The file last update date"),
          }),

@@ -272,7 +269,6 @@ export async function fileRoutes(app: FastifyInstance) {
            objectName: z.string().describe("The object name of the file"),
            userId: z.string().describe("The user ID"),
            folderId: z.string().nullable().describe("The folder ID"),
            expiration: z.string().nullable().describe("The file expiration date"),
            createdAt: z.date().describe("The file creation date"),
            updatedAt: z.date().describe("The file last update date"),
          }),

@@ -1,236 +0,0 @@
# Palmr Maintenance Scripts

This directory contains maintenance scripts for the Palmr server application.

## Available Scripts

### 1. Cleanup Expired Files (`cleanup-expired-files.ts`)

Automatically deletes files that have reached their expiration date. This script is designed to be run periodically (e.g., via cron job) to maintain storage hygiene and comply with data retention policies.

#### Features

- **Automatic Deletion**: Removes both the file metadata from the database and the actual file from storage
- **Dry-Run Mode**: Preview what would be deleted without actually removing files
- **Storage Agnostic**: Works with both filesystem and S3-compatible storage
- **Detailed Logging**: Provides clear output about what files were found and deleted
- **Error Handling**: Continues processing even if individual files fail to delete

#### Usage

**Dry-run mode** (preview without deleting):
```bash
pnpm cleanup:expired-files
```

**Confirm mode** (actually delete expired files):
```bash
pnpm cleanup:expired-files:confirm
```

Or directly with tsx:
```bash
tsx src/scripts/cleanup-expired-files.ts --confirm
```

#### Output Example

```
🧹 Starting expired files cleanup...
📦 Storage mode: Filesystem
📊 Found 2 expired files

🗑️ Expired files to be deleted:
   - document.pdf (2.45 MB) - Expired: 2025-10-20T10:30:00.000Z
   - image.jpg (1.23 MB) - Expired: 2025-10-21T08:15:00.000Z

🗑️ Deleting expired files...
   ✓ Deleted: document.pdf
   ✓ Deleted: image.jpg

✅ Cleanup complete!
   Deleted: 2 files (3.68 MB)
```

#### Setting Up Automated Cleanup

To run this script automatically, you can set up a cron job:

##### Using crontab (Linux/Unix)

1. Edit your crontab:
```bash
crontab -e
```

2. Add a line to run the cleanup daily at 2 AM:
```
0 2 * * * cd /path/to/Palmr/apps/server && pnpm cleanup:expired-files:confirm >> /var/log/palmr-cleanup.log 2>&1
```

##### Using systemd timer (Linux)

1. Create a service file `/etc/systemd/system/palmr-cleanup-expired.service`:
```ini
[Unit]
Description=Palmr Expired Files Cleanup
After=network.target

[Service]
Type=oneshot
User=palmr
WorkingDirectory=/path/to/Palmr/apps/server
ExecStart=/usr/bin/pnpm cleanup:expired-files:confirm
StandardOutput=journal
StandardError=journal
```

2. Create a timer file `/etc/systemd/system/palmr-cleanup-expired.timer`:
```ini
[Unit]
Description=Run Palmr Expired Files Cleanup Daily
Requires=palmr-cleanup-expired.service

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

3. Enable and start the timer:
```bash
sudo systemctl enable palmr-cleanup-expired.timer
sudo systemctl start palmr-cleanup-expired.timer
```

##### Using Docker

If running Palmr in Docker, you can add the cleanup command to your compose file or create a separate service:

```yaml
services:
  palmr-cleanup:
    image: palmr:latest
    command: pnpm cleanup:expired-files:confirm
    environment:
      - DATABASE_URL=file:/data/palmr.db
    volumes:
      - ./data:/data
      - ./uploads:/uploads
    restart: "no"
```

Then schedule it with your host's cron or a container orchestration tool.

#### Best Practices

1. **Test First**: Always run in dry-run mode first to preview what will be deleted
2. **Monitor Logs**: Keep track of cleanup operations by logging output
3. **Regular Schedule**: Run the cleanup at least daily to prevent storage bloat
4. **Off-Peak Hours**: Schedule cleanup during low-traffic periods
5. **Backup Strategy**: Ensure you have backups before enabling automatic deletion

### 2. Cleanup Orphan Files (`cleanup-orphan-files.ts`)

Removes file records from the database that no longer have corresponding files in storage. This can happen if files are manually deleted from storage or if an upload fails partway through.

#### Usage

**Dry-run mode**:
```bash
pnpm cleanup:orphan-files
```

**Confirm mode**:
```bash
pnpm cleanup:orphan-files:confirm
```

## File Expiration Feature

Files in Palmr can now have an optional expiration date. When a file expires, it becomes eligible for automatic deletion by the cleanup script.

### Setting Expiration During Upload

When registering a file, include the `expiration` field with an ISO 8601 datetime string:

```json
{
  "name": "document.pdf",
  "description": "Confidential document",
  "extension": "pdf",
  "size": 2048000,
  "objectName": "user123/document.pdf",
  "expiration": "2025-12-31T23:59:59.000Z"
}
```

### Updating File Expiration

You can update a file's expiration date at any time:

```bash
PATCH /files/:id
Content-Type: application/json

{
  "expiration": "2026-01-31T23:59:59.000Z"
}
```

To remove an expiration date (file never expires):

```json
{
  "expiration": null
}
```

### Use Cases

- **Temporary Shares**: Share files that automatically delete after a certain period
- **Compliance**: Meet data retention requirements (e.g., GDPR)
- **Storage Management**: Automatically free up space by removing old files
- **Security**: Reduce risk of sensitive data exposure by limiting file lifetime
- **Trial Periods**: Automatically clean up files from trial or demo accounts

## Security Considerations

- Scripts run with the same permissions as the application
- Deleted files cannot be recovered unless backups are in place
- Always test scripts in a development environment first
- Monitor script execution and review logs regularly
- Consider implementing file versioning or soft deletes for critical data

## Troubleshooting

### Script Fails to Connect to Database

Ensure the `DATABASE_URL` environment variable is set correctly in your `.env` file.

### Files Not Being Deleted

1. Check that files actually have an expiration date set
2. Verify the expiration date is in the past
3. Ensure the script has appropriate permissions to delete files
4. Check application logs for specific error messages

### Storage Provider Issues

If using S3-compatible storage, ensure:

- Credentials are valid and have delete permissions
- Network connectivity to the S3 endpoint is working
- Bucket exists and is accessible

## Contributing

When adding new maintenance scripts:

1. Follow the existing naming convention
2. Include dry-run and confirm modes
3. Provide clear logging output
4. Handle errors gracefully
5. Update this README with usage instructions
@@ -1,123 +0,0 @@
import { isS3Enabled } from "../config/storage.config";
import { FilesystemStorageProvider } from "../providers/filesystem-storage.provider";
import { S3StorageProvider } from "../providers/s3-storage.provider";
import { prisma } from "../shared/prisma";
import { StorageProvider } from "../types/storage";

/**
 * Script to automatically delete expired files
 * This script should be run periodically (e.g., via cron job)
 */
async function cleanupExpiredFiles() {
  console.log("🧹 Starting expired files cleanup...");
  console.log(`📦 Storage mode: ${isS3Enabled ? "S3" : "Filesystem"}`);

  let storageProvider: StorageProvider;
  if (isS3Enabled) {
    storageProvider = new S3StorageProvider();
  } else {
    storageProvider = FilesystemStorageProvider.getInstance();
  }

  // Get all expired files
  const now = new Date();
  const expiredFiles = await prisma.file.findMany({
    where: {
      expiration: {
        lte: now,
      },
    },
    select: {
      id: true,
      name: true,
      objectName: true,
      userId: true,
      size: true,
      expiration: true,
    },
  });

  console.log(`📊 Found ${expiredFiles.length} expired files`);

  if (expiredFiles.length === 0) {
    console.log("\n✨ No expired files found!");
    return {
      deletedCount: 0,
      failedCount: 0,
      totalSize: 0,
    };
  }

  console.log(`\n🗑️ Expired files to be deleted:`);
  expiredFiles.forEach((file) => {
    const sizeMB = Number(file.size) / (1024 * 1024);
    console.log(`   - ${file.name} (${sizeMB.toFixed(2)} MB) - Expired: ${file.expiration?.toISOString()}`);
  });

  // Ask for confirmation (if running interactively)
  const shouldDelete = process.argv.includes("--confirm");

  if (!shouldDelete) {
    console.log(`\n⚠️ Dry run mode. To actually delete expired files, run with --confirm flag:`);
    console.log(`   pnpm cleanup:expired-files:confirm`);
    return {
      deletedCount: 0,
      failedCount: 0,
      totalSize: 0,
      dryRun: true,
    };
  }

  console.log(`\n🗑️ Deleting expired files...`);

  let deletedCount = 0;
  let failedCount = 0;
  let totalSize = BigInt(0);

  for (const file of expiredFiles) {
    try {
      // Delete from storage first
      await storageProvider.deleteObject(file.objectName);

      // Then delete from database
      await prisma.file.delete({
        where: { id: file.id },
      });

      deletedCount++;
      totalSize += file.size;
      console.log(`   ✓ Deleted: ${file.name}`);
    } catch (error) {
      failedCount++;
      console.error(`   ✗ Failed to delete ${file.name}:`, error);
    }
  }

  const totalSizeMB = Number(totalSize) / (1024 * 1024);

  console.log(`\n✅ Cleanup complete!`);
  console.log(`   Deleted: ${deletedCount} files (${totalSizeMB.toFixed(2)} MB)`);
  if (failedCount > 0) {
    console.log(`   Failed: ${failedCount} files`);
  }

  return {
    deletedCount,
    failedCount,
    totalSize: totalSizeMB,
  };
}

// Run the cleanup
cleanupExpiredFiles()
  .then((result) => {
    console.log("\n✨ Script completed successfully");
    if (result.dryRun) {
      process.exit(0);
    }
    process.exit(result.failedCount > 0 ? 1 : 0);
  })
  .catch((error) => {
    console.error("\n❌ Script failed:", error);
    process.exit(1);
  });