Author: System Documentation / Last Updated: December 19, 2025 / Version: 1.0
Table of Contents
- System Overview
- Single File Restore Operation
- Bulk File Restore Operation
- Architecture Comparison
- Error Handling & Recovery
- Performance Characteristics
- Known Issues & Limitations
- Restore vs Delete Comparison
System Overview
Core Principles
- 30-Day Recovery Window: Files can be restored within 30 days of deletion
- New File IDs: Restored files get new Shopify file IDs (can't reuse old ones)
- No Usage Restoration: Files restored as unused (original usage saved in metadata)
- Atomic Operations: Database transactions ensure consistency
- Audit Trail: All restorations logged with original context
Restoration Lifecycle
File Deleted → Backup in R2 → 30 days → Auto-purge
↓
User restores (anytime within 30 days)
↓
File uploaded back to Shopify with new ID
↓
Backup cleaned from R2
↓
File appears in Files page (not auto-linked to products)
Key Differences from Delete
| Aspect | Delete | Restore |
|---|---|---|
| Data Flow | Shopify → R2 | R2 → Shopify |
| File ID | Same ID preserved | New ID generated |
| API Calls | 2 (fetch + delete) | 3 (staged upload process) |
| Usage Links | Preserved in backup | Lost (manual relink needed) |
| Speed | Faster (direct delete) | Slower (3-step upload) |
| Complexity | Lower | Higher |
Single File Restore Operation
Use Case
- User restores 1-5 files from trash
- Files < 100MB
- Requires immediate feedback
- Good network conditions
Architecture
┌─────────────────────────────────────────────────────┐
│ HTTP REQUEST │
│ (20-minute timeout limit) │
├─────────────────────────────────────────────────────┤
│ │
│ User clicks Restore │
│ ↓ │
│ Route handler (app.trash.tsx) │
│ ↓ │
│ restoreFileFromTrash() - app/lib/delete-service.server.ts │
│ ↓ │
│ ┌──────────────────────────────────────────┐ │
│ │ Step 1: Fetch from R2 │ │
│ │ Step 2: Upload to Shopify (staged) │ │
│ │ a) Create staged upload URL │ │
│ │ b) Upload file to S3 │ │
│ │ c) Finalize file creation │ │
│ │ Step 3: Database transaction │ │
│ │ - Create mediaFile record │ │
│ │ - Delete deletedFile record │ │
│ │ - Recalculate stats │ │
│ │ Step 4: Delete from R2 (cleanup) │ │
│ │ Step 5: Log audit entry │ │
│ └──────────────────────────────────────────┘ │
│ ↓ │
│ Return success/failure + usage details to UI │
│ │
└─────────────────────────────────────────────────────┘
Implementation Details
Route Handler
// Location: app/routes/app.trash.tsx
export const action = async ({ request }: ActionFunctionArgs) => {
const { admin, session } = await authenticate.admin(request);
const shop = session.shop;
const permissions = await getPermissions(shop);
const formData = await request.formData();
const intent = formData.get("intent");
const fileId = formData.get("fileId") as string;
if (intent === "restore") {
// Check permissions (Pro plan feature)
if (!permissions.canRestoreFromTrash) {
return json({
success: false,
message: "Restore feature is available in the Pro plan."
});
}
const userInfo = getUserInfo(session);
const result = await restoreFileFromTrash(
shop,
fileId, // This is deletedFile.id, not original file ID
admin,
userInfo.email,
userInfo.name
);
return json(result);
}
};
Core Function Signature
// Location: app/lib/delete-service.server.ts
export async function restoreFileFromTrash(
shop: string,
deletedFileId: string, // Note: NOT the original file ID
admin: AdminApiContext,
userEmail?: string | null,
userName?: string | null
): Promise<{
success: boolean;
message: string;
usageDetails?: any;
}>
Step-by-Step Execution
Step 1: Fetch Metadata & Download from R2
// Query deletedFiles table
const deletedFile = await prisma.deletedFile.findUnique({
where: { id: deletedFileId, shop },
});
if (!deletedFile) {
throw new Error("Deleted file not found");
}
// Get file stream from R2
const getCommand = new GetObjectCommand({
Bucket: R2_BUCKET_NAME,
Key: deletedFile.storageKey, // e.g., "shop.myshopify.com/file-id/image.jpg"
});
const response = await r2Client.send(getCommand);
if (!response.Body) {
throw new Error("No file stream available from R2");
}
// Stream directly to Shopify (no buffering in memory)
const fileStream = response.Body;
Features:
- Streaming download (no memory buffering)
- Direct R2 → Shopify pipe
- Supports files up to 5GB
Timeout Risk: 2-300 seconds for large files
Step 2: Upload to Shopify (3-Phase Staged Upload)
Shopify doesn't allow direct file uploads, so the restore must go through Shopify's three-phase staged upload process:
Phase 2a: Request Staged Upload URL
const stagedUploadMutation = `
mutation stagedUploadsCreate($input: [StagedUploadInput!]!) {
stagedUploadsCreate(input: $input) {
stagedTargets {
resourceUrl # Final file URL after upload
url # Temporary S3 upload URL
parameters { # Form data for S3 upload
name
value
}
}
userErrors {
field
message
}
}
}
`;
const stagedResponse = await admin.graphql(stagedUploadMutation, {
variables: {
input: [{
filename: deletedFile.filename,
mimeType: deletedFile.mimeType,
resource: "FILE",
fileSize: deletedFile.size.toString(),
}],
},
});
const stagedData = await stagedResponse.json();
const stagedTarget = stagedData.data.stagedUploadsCreate.stagedTargets[0];
const { url, parameters, resourceUrl } = stagedTarget;
Timeout Risk: 500-2000ms (Shopify API call)
Phase 2b: Upload File to Shopify's S3
// Build multipart form data
const formData = new FormData();
// Add Shopify's required parameters first (order matters!)
for (const param of parameters) {
formData.append(param.name, param.value);
}
// Add file stream last
formData.append('file', fileStream, {
filename: deletedFile.filename,
contentType: deletedFile.mimeType,
});
// Upload to Shopify's S3 bucket
const uploadResponse = await fetch(url, {
method: 'POST',
body: formData,
});
if (!uploadResponse.ok) {
throw new Error(`S3 upload failed: ${uploadResponse.statusText}`);
}
Features:
- Direct upload to Shopify's CDN
- Multipart form encoding
- Streaming support
Timeout Risk: 3-600 seconds (depends on file size)
Phase 2c: Finalize File Creation
const fileCreateMutation = `
mutation fileCreate($files: [FileCreateInput!]!) {
fileCreate(files: $files) {
files {
id # New Shopify file ID (gid://...)
... on MediaImage {
image {
url # Public CDN URL
}
}
... on GenericFile {
url # Public CDN URL
}
alt
}
userErrors {
field
message
}
}
}
`;
const createResponse = await admin.graphql(fileCreateMutation, {
variables: {
files: [{
alt: deletedFile.alt,
contentType: deletedFile.shopifyContentType, // IMAGE, VIDEO, etc.
originalSource: resourceUrl, // From staged upload
}],
},
});
const createData = await createResponse.json();
const newShopifyFile = createData.data.fileCreate.files[0];
const newFileId = newShopifyFile.id; // NEW ID (different from original)
const newFileUrl = newShopifyFile.image?.url || newShopifyFile.url;
Timeout Risk: 500-2000ms (Shopify API call)
Total Step 2 Time: 4-600+ seconds (mostly upload phase)
Step 3: Atomic Database Transaction
// Get current scan result (needed for foreign key)
const scanResult = await prisma.scanResult.findUnique({
where: { shop },
});
if (!scanResult) {
throw new Error("No scan result found for shop");
}
await prisma.$transaction(async (tx) => {
// Create new file record with NEW Shopify ID
await tx.mediaFile.create({
data: {
id: newFileId, // NEW ID from Shopify
shop,
scanResultId: scanResult.id,
filename: deletedFile.filename,
url: newFileUrl, // NEW URL from Shopify
size: deletedFile.size,
mimeType: deletedFile.mimeType,
fileType: deletedFile.fileType,
shopifyContentType: deletedFile.shopifyContentType,
alt: deletedFile.alt,
customTags: deletedFile.customTags || "[]",
// Reset all usage (file starts as unused)
isUsed: false,
productIds: "[]",
collectionIds: "[]",
productTitles: "[]",
collectionTitles: "[]",
blogPostIds: "[]",
blogPostTitles: "[]",
pageIds: "[]",
pageTitles: "[]",
themeSettings: "[]",
// Preserve original creation date
createdAt: deletedFile.originalCreatedAt,
},
});
// Delete from trash
await tx.deletedFile.delete({
where: { id: deletedFileId },
});
// Recalculate shop statistics
await recalculateStats(shop, tx);
}, {
maxWait: 30000, // 30 seconds max wait
timeout: 30000, // 30 seconds execution timeout
});
Key Points:
- New file starts with isUsed: false (merchant must manually relink)
- All usage arrays reset to empty
- Original creation date preserved
- The record's update timestamp reflects the restoration time (only createdAt is backdated)
Transaction Guarantees:
- All-or-nothing execution (ACID)
- 30-second timeout limit
- Automatic rollback on failure
Step 4: R2 Cleanup (Non-Critical)
// Clean up R2 storage (outside transaction - non-critical)
if (deletedFile.storageUrl.startsWith("r2://") && isR2Configured()) {
try {
await r2Client.send(
new DeleteObjectCommand({
Bucket: R2_BUCKET_NAME,
Key: deletedFile.storageKey,
})
);
} catch (r2Error) {
// Log but continue anyway
console.error("R2 cleanup failed:", r2Error);
// Restoration still successful even if cleanup fails
}
}
Important: This happens AFTER the DB transaction succeeds. If this fails:
- File successfully restored in Shopify ✅
- File removed from trash in DB ✅
- Backup still in R2 ❌ (orphaned, minor storage cost)
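One way to eventually reclaim those orphaned backups is a periodic sweep that lists the R2 keys and deletes any key no longer referenced by a deletedFile row. A minimal sketch of the reconciliation step (the function name is hypothetical, not part of the codebase):

```typescript
// Hypothetical sketch: given the storage keys currently present in the R2
// bucket and the keys still referenced by deletedFile rows, return the
// orphaned keys that a periodic cleanup job could safely delete.
function findOrphanedBackupKeys(
  r2Keys: string[],
  referencedKeys: Set<string>,
): string[] {
  return r2Keys.filter((key) => !referencedKeys.has(key));
}
```

A sweep job would feed this from a paginated ListObjectsV2 call on one side and a `select: { storageKey: true }` query on the other, then issue DeleteObjectCommand for each orphan.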
Step 5: Audit Logging
const usageDetails = JSON.parse(deletedFile.usageDetails);
// Build usage breakdown from original deletion context
const usageBreakdown = [];
if (usageDetails.products?.length > 0)
usageBreakdown.push(`${usageDetails.products.length} product(s)`);
if (usageDetails.collections?.length > 0)
usageBreakdown.push(`${usageDetails.collections.length} collection(s)`);
// ... more usage types ...
await logAudit({
shop,
userEmail,
userName,
action: AuditAction.FILE_RESTORED,
targetName: deletedFile.filename,
details: `Restored ${deletedFile.filename} (${Number(deletedFile.size)} bytes)${
deletedFile.wasUsed && usageBreakdown.length > 0
? ` • Was previously used in: ${usageBreakdown.join(", ")}`
: ""
}`,
metadata: {
originalFileId: deletedFile.originalFileId, // OLD ID
newFileId: newFileId, // NEW ID
size: Number(deletedFile.size),
fileType: deletedFile.fileType,
wasUsed: deletedFile.wasUsed,
usageBreakdown: {
products: usageDetails.products?.length || 0,
collections: usageDetails.collections?.length || 0,
// ... more counts ...
},
},
success: true,
});
// Return usage details to UI (so merchant can relink)
return {
success: true,
message: `Restored ${deletedFile.filename}`,
usageDetails, // Shows where file WAS used before deletion
};
Timeout Scenarios
| File Size | Network | Expected Time | Timeout Risk |
|---|---|---|---|
| < 50MB | Good | 7-20 sec | Very Low (< 1%) |
| 50-100MB | Good | 20-40 sec | Low (< 5%) |
| 100-300MB | Good | 40-180 sec | Medium (~15%) |
| > 300MB | Good | 180-1200 sec | High (> 40%) |
| Any | Poor | Variable | Very High |
HTTP Timeout: 20 minutes (Fly.io grace_period)
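The timing ranges above can be approximated with a simple bandwidth model. A rough sketch (the fixed-overhead constant and any bandwidth figures are assumptions, not measurements):

```typescript
// Rough estimate of total single-restore time: upload time at the effective
// upload bandwidth, plus a fixed overhead for the staged-URL request,
// fileCreate call, DB transaction, and R2 cleanup. Assumed numbers.
const FIXED_OVERHEAD_SECONDS = 4;

function estimateRestoreSeconds(fileSizeMB: number, uploadMbps: number): number {
  const uploadSeconds = (fileSizeMB * 8) / uploadMbps; // MB → megabits
  return Math.ceil(uploadSeconds + FIXED_OVERHEAD_SECONDS);
}
```

For example, a 100MB file over a 20 Mbps uplink lands around 44 seconds, consistent with the 30-60 second row above.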
Why Restore is Slower than Delete:
- 3 Shopify API calls vs 1
- Staged upload process (S3 upload slower than delete)
- Form encoding overhead
- Upload bandwidth typically < download bandwidth
Failure Recovery
Scenario 1: Timeout during R2 download
- No changes made anywhere
- File still in trash
- Recovery: Retry operation (idempotent)
Scenario 2: Timeout during Shopify upload
- R2 file intact
- No DB changes
- File still in trash
- Recovery: Retry operation (idempotent)
Scenario 3: Database transaction timeout
- File uploaded to Shopify ❌ (orphaned in Shopify)
- Transaction rolled back
- File still in trash
- Recovery: Orphaned Shopify file (storage waste), retry operation
Scenario 4: Timeout after DB, before R2 cleanup
- File restored successfully ✅
- File removed from trash ✅
- R2 backup still exists ❌ (minor storage cost)
- Recovery: Manual R2 cleanup eventually
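Because scenarios 1-3 leave the file in trash and the database unchanged, the operation can simply be re-run. A generic retry wrapper for such idempotent restores might look like this (the function name and backoff delays are illustrative, not part of the app):

```typescript
// Retry an idempotent async operation with exponential backoff.
// Safe here because a failed restore leaves the file in trash,
// so re-running the whole operation is harmless.
async function retryIdempotent<T>(
  operation: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts) {
        // Exponential backoff: 1s, 2s, 4s, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}
```

Note this does not help with scenario 3's orphaned Shopify file; retrying would upload a second copy, so orphans still need separate cleanup.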
Bulk File Restore Operation
Use Case
- User restores 10+ files from trash
- Files of any size
- Long-running operation (minutes to hours)
- Resilient to restarts/failures
Architecture
┌─────────────────────────────────────────────────────┐
│ HTTP REQUEST (instant) │
│ Creates job, returns immediately │
└─────────────────────────────────────────────────────┘
│
├─ Creates BulkJob record (PENDING)
├─ Creates one BulkJobItem record per file (PENDING)
└─ Returns job ID to user (~200ms)
┌─────────────────────────────────────────────────────┐
│ BACKGROUND WORKER (no timeout) │
│ Separate Node.js process │
└─────────────────────────────────────────────────────┘
│
├─ Polls database every 5 seconds
├─ Picks oldest PENDING job (FIFO)
├─ Marks job as PROCESSING
│
└─ For each file:
├─ Mark item PROCESSING
├─ Check if still in trash
├─ Call restoreFileFromTrash()
├─ Mark item SUCCESS/FAILED/SKIPPED
├─ Update job progress
└─ Repeat until all done
├─ Mark job COMPLETED
└─ Move to next job
Database Schema
Same as bulk delete - uses BulkJob and BulkJobItem tables with:
- type: BulkJobType.BULK_RESTORE
- fileIds: Array of deletedFile.id values (not original file IDs)
Implementation: Job Creation
// Location: app/routes/app.trash.tsx
if (intent === "bulkRestore") {
// Check permissions (Pro plan feature)
if (!permissions.canRestoreFromTrash) {
return json({
success: false,
message: "Restore feature is available in the Pro plan."
});
}
const fileIds = JSON.parse(formData.get("fileIds") as string);
const userInfo = getUserInfo(session);
// Validate files exist in trash
const files = await prisma.deletedFile.findMany({
where: { id: { in: fileIds }, shop },
select: { id: true, filename: true },
});
if (files.length === 0) {
return json({
success: false,
message: "No valid files found in trash.",
});
}
// Warn if some files were not found
const missingCount = fileIds.length - files.length;
if (missingCount > 0) {
console.log(`⚠️ ${missingCount} file(s) not found in trash`);
}
// Create bulk job
const job = await createBulkJob({
shop,
type: BulkJobType.BULK_RESTORE,
fileIds: files.map(f => f.id),
fileNames: files.map(f => f.filename),
userEmail: userInfo.email,
userName: userInfo.name,
});
// Log audit
await logAudit({
shop,
userEmail: userInfo.email,
userName: userInfo.name,
action: AuditAction.FILE_BULK_RESTORE,
targetName: `${files.length} files`,
details: `Created bulk restore job for ${files.length} files`,
metadata: {
jobId: job.id,
fileCount: files.length,
requestedCount: fileIds.length,
missingCount,
},
success: true,
});
// Return immediately
return json({
success: true,
message: `Job created for ${files.length} files`,
jobId: job.id,
missingCount,
});
}
Implementation: Job Processing
Bulk Restore Worker
// Location: app/lib/jobs/bulk-restore-worker.server.ts
export async function processBulkRestoreJob(params: ProcessBulkRestoreJobParams) {
const { jobId, shop, admin } = params;
try {
// Mark job as processing
const job = await markJobProcessing(jobId);
if (!job) throw new Error("Job not found");
const userEmail = job.userEmail;
const userName = job.userName;
let successCount = 0;
let failCount = 0;
let processedCount = 0;
const results: JobResult[] = [];
// Process files sequentially (one at a time)
while (true) {
// Get next pending item
const pendingItems = await getPendingJobItems(jobId, 1);
if (pendingItems.length === 0) break; // All done
const item = pendingItems[0];
const itemStartTime = Date.now();
try {
await markJobItemProcessing(item.id);
// Check if file still exists in trash
const deletedFile = await prisma.deletedFile.findUnique({
where: { id: item.fileId }, // This is deletedFile.id
select: { id: true },
});
if (!deletedFile) {
// File already restored or permanently deleted - skip it
await markJobItemSkipped(item.id, "File not found in trash");
successCount++; // Count as success (not blocking)
processedCount++;
results.push({
fileId: item.fileId,
filename: item.filename,
success: true,
message: "Not in trash (already restored or deleted)",
retryCount: item.retryCount,
durationMs: Date.now() - itemStartTime,
});
} else {
// Attempt restoration
const result = await restoreFileFromTrash(
shop,
item.fileId,
admin,
userEmail,
userName
);
const duration = Date.now() - itemStartTime;
if (result.success) {
await markJobItemComplete(item.id, true, undefined, duration);
successCount++;
processedCount++;
results.push({
fileId: item.fileId,
filename: item.filename,
success: true,
message: result.message,
retryCount: item.retryCount,
durationMs: duration,
});
} else {
throw new Error(result.message);
}
}
} catch (error: any) {
const duration = Date.now() - itemStartTime;
// Retry logic
if (item.retryCount < item.maxRetries) {
// Retry this item later
const retried = await retryJobItem(item.id);
if (!retried) {
// Item disappeared, mark as failed
failCount++;
processedCount++;
}
} else {
// Final failure after 3 retries
await markJobItemComplete(item.id, false, error.message, duration);
failCount++;
processedCount++;
results.push({
fileId: item.fileId,
filename: item.filename,
success: false,
message: error.message || "Unknown error",
retryCount: item.retryCount,
durationMs: duration,
});
}
}
// Update progress after each file
await updateJobProgress(jobId, {
processedFiles: processedCount,
successCount,
failCount,
results,
});
}
// Mark job as completed
await markJobCompleted(jobId, { successCount, failCount, results });
return { success: true, successCount, failCount };
} catch (error: any) {
await markJobFailed(jobId, error.message);
return { success: false, error: error.message };
}
}
Key Features
Same as Bulk Delete:
- Sequential processing (one at a time)
- Progress saved after each file
- Automatic 3x retry per file
- Crash recovery via resetStaleJobs()
- Real-time progress updates
- No timeout limits
Restore-Specific:
- Files not in trash → Skip (mark SUCCESS)
- All restored files start as unused
- Usage details available in audit log only
Architecture Comparison
| Feature | Single Restore | Bulk Restore |
|---|---|---|
| Execution | HTTP request | Background worker |
| Timeout | 20 minutes | None |
| Max Files | 1-5 recommended | Unlimited |
| Max File Size | 300MB safe | Any size |
| Crash Recovery | None (HTTP dies) | Automatic |
| Retry Logic | None | 3 attempts per file |
| Progress Tracking | No | Yes (real-time) |
| User Experience | Immediate feedback | Async (polling required) |
| Scalability | Limited | High |
| Complexity | High (3-phase upload) | High (worker + 3-phase) |
Error Handling & Recovery
Single Restore Error Handling
Network Errors:
- R2 download failure → Aborts, no changes made
- Shopify upload failure → Aborts, file still in trash
- Shopify API timeout → Partial upload possible (orphaned)
Database Errors:
- Transaction timeout (30s) → Automatic rollback
- Connection lost → Partial state possible (Shopify has file, DB unchanged)
Shopify API Errors:
- Invalid file format → Fail immediately
- File too large → Fail immediately (Shopify limit: 5GB)
- Quota exceeded → Fail immediately
Recovery: "Restore" button in trash UI
Bulk Restore Error Handling
Item-Level Errors:
- File not in trash → Skip (mark SUCCESS)
- Network error → Retry (up to 3 times)
- Shopify API error → Retry (up to 3 times)
- Database error → Retry (up to 3 times)
- Persistent failure → Mark as FAILED, continue
Job-Level Errors:
- No session found → Fail entire job
- Unexpected exception → Fail entire job
- 5 consecutive worker errors → Worker exits
Recovery:
- Automatic: Retry failed items via "Retry Failed" button
- Manual: Files still in trash, can create new job
Stale Job Recovery
Same as bulk delete - see Delete documentation for details.
Known Issue: 10-minute threshold causes orphaned jobs on quick restarts.
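The stale-job check itself reduces to a timestamp comparison. An illustrative sketch (the real resetStaleJobs implementation lives with the delete worker; the 10-minute default mirrors the known issue above):

```typescript
// A PROCESSING job whose last progress update is older than the threshold
// is presumed dead (worker crashed or restarted) and gets reset to PENDING.
const STALE_THRESHOLD_MS = 10 * 60 * 1000; // 10 minutes

function isJobStale(
  lastUpdatedAt: Date,
  now: Date,
  thresholdMs = STALE_THRESHOLD_MS,
): boolean {
  return now.getTime() - lastUpdatedAt.getTime() > thresholdMs;
}
```

The known issue follows directly: after a quick restart, a healthy-looking job is not yet past the threshold, so nothing resets it, and the new worker skips it while it still shows PROCESSING.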
Performance Characteristics
Single Restore Performance
| File Size | Network | Time | Bottleneck |
|---|---|---|---|
| 20MB | Good | 7-15 sec | Shopify staged upload |
| 100MB | Good | 30-60 sec | Shopify S3 upload |
| 500MB | Good | 120-300 sec | Shopify S3 upload |
| 2GB | Good | 360-1200 sec | Shopify S3 upload |
Components:
- R2 download: 2-60 sec
- Shopify staged URL request: 500-1000 ms
- Shopify S3 upload: 3-600 sec (SLOWEST)
- Shopify file create: 500-1000 ms
- DB transaction: 100-500 ms
- R2 cleanup: 200-500 ms
- Audit logging: 50-100 ms
Why Slower than Delete:
- 3 Shopify API calls vs 1
- S3 upload is slow (staging bucket)
- Form multipart encoding overhead
Bulk Restore Performance
Per-file overhead: ~100-200ms (DB queries, status updates)
Throughput:
- Small files (< 10MB): ~6-8 files/minute
- Medium files (50MB): ~3-4 files/minute
- Large files (200MB): ~1-2 files/minute
50 files @ 20MB each:
- Expected time: 5-12 minutes
- With retries: 7-18 minutes
- Worst case (network issues): 20-40 minutes
Comparison to Delete:
- Restore: ~1.5-2.5x slower than delete, depending on file size
- Reason: Shopify's staged upload is slower than its delete API
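These throughput figures translate into a back-of-envelope ETA for a queued job. A sketch, treating the rates above as assumptions rather than SLAs:

```typescript
// Estimate bulk-restore duration in whole minutes from a files/minute rate.
// Rates come from the observed ranges above (e.g., ~6-8 files/min for small
// files); they are rough assumptions, not guarantees.
function estimateBulkMinutes(fileCount: number, filesPerMinute: number): number {
  if (filesPerMinute <= 0) throw new Error("rate must be positive");
  return Math.ceil(fileCount / filesPerMinute);
}
```

At 8 files/minute, 50 small files come out to ~7 minutes, matching the low end of the 5-12 minute estimate above; retries and queue wait push the real figure higher.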
Known Issues & Limitations
Issue 1: File IDs Change After Restore
Problem: Restored files get NEW Shopify file IDs, can't reuse original IDs.
Impact:
- All product/collection links broken
- Merchant must manually relink images
- No automated relinking possible (Shopify API limitation)
Workaround: Show original usage in UI, let merchant relink manually
Fix: None (Shopify API limitation)
Issue 2: Usage Information Lost
Problem: Restored files start with isUsed: false, all usage arrays empty.
Impact:
- File appears "unused" even if heavily used before
- Statistics inaccurate after restore
- No way to auto-restore usage
Workaround:
- Store original usage in deletedFile.usageDetails (JSON)
- Show in audit log and restore success message
- Merchant can manually relink
Fix: None (consequence of new file IDs)
Issue 3: Slower than Delete (2x-3x)
Problem: Restore inherently slower due to Shopify's staged upload process.
Impact:
- Single restore: ~2x slower than delete
- Bulk restore: ~2x slower than bulk delete
- Higher timeout risk for large files
Workaround: Prefer bulk operations for > 5 files
Fix: None (Shopify API limitation)
Issue 4: 30-Day Auto-Purge
Problem: Files auto-deleted from trash after 30 days (permanent loss).
Impact:
- No warning before purge
- No grace period
- Permanent data loss
Status: Not yet implemented (TODO)
Workaround: Add "expires in X days" warning in UI
Fix: Implement auto-purge job + warning system
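The proposed "expires in X days" warning reduces to date arithmetic against the 30-day window. A sketch of the assumed helper (not yet in the codebase, like the purge job itself):

```typescript
// Remaining days before a trashed file is purged, computed from its
// deletion timestamp. Clamped at 0 so already-expired files never show
// a negative value. Hypothetical helper for the proposed UI warning.
const RETENTION_DAYS = 30;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function daysUntilPurge(deletedAt: Date, now: Date): number {
  const elapsedDays = (now.getTime() - deletedAt.getTime()) / MS_PER_DAY;
  return Math.max(0, Math.ceil(RETENTION_DAYS - elapsedDays));
}
```

The trash UI could highlight rows where the result drops below, say, 7, giving merchants the grace-period warning the issue calls for.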
Issue 5: Same Global Queue Issues
Problem: Same as delete - all shops share one queue, no priority.
Impact: Same wait time issues (see Delete documentation)
Fix: See Delete documentation for proposed solutions
Restore vs Delete Comparison
Speed Comparison
| Operation | 20MB File | 100MB File | 500MB File |
|---|---|---|---|
| Delete | 5-10 sec | 15-30 sec | 60-120 sec |
| Restore | 7-15 sec | 30-60 sec | 120-300 sec |
| Ratio | 1.4x slower | 2x slower | 2-2.5x slower |
Complexity Comparison
| Aspect | Delete | Restore |
|---|---|---|
| API Calls | 2 (fetch + delete) | 3 (stage + upload + create) |
| Data Flow | Shopify → R2 | R2 → Shopify |
| File ID | Preserved | New ID |
| Usage | Preserved in backup | Lost |
| Failure Points | 4 | 5 |
| Timeout Risk | Medium | High |
Use Case Recommendations
| Scenario | Use Single | Use Bulk |
|---|---|---|
| 1-3 small files (< 50MB) | ✅ | Optional |
| 1-3 large files (> 100MB) | ⚠️ Risk | ✅ Recommended |
| 5-10 files (any size) | ❌ Don't | ✅ Required |
| 50+ files | ❌ Don't | ✅ Required |
| Time-sensitive | ✅ (if small) | ⚠️ (queue wait) |
| Large batch cleanup | ❌ | ✅ |
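The table above can be encoded as a small helper so the UI suggests a sensible default. A hypothetical sketch; the thresholds mirror the table, not hard limits enforced by the app:

```typescript
// Suggest single vs bulk restore from file count and the largest file size.
// Thresholds (5 files, 100MB) are taken from the recommendations table.
function chooseRestoreStrategy(
  fileCount: number,
  maxFileSizeMB: number,
): "single" | "bulk" {
  if (fileCount > 5) return "bulk"; // 5-10+ files: bulk required
  if (maxFileSizeMB > 100) return "bulk"; // large files: avoid HTTP timeout
  return "single"; // few small files: immediate feedback
}
```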