Storage Drivers

Torrin supports multiple storage backends through its driver architecture. This guide covers built-in drivers and how to create custom ones.

Built-in Drivers

Local Filesystem

Store uploaded files on the local filesystem:

bash
# npm / yarn / pnpm / bun
npm install @torrin-kit/storage-local
yarn add @torrin-kit/storage-local
pnpm add @torrin-kit/storage-local
bun add @torrin-kit/storage-local
typescript
import { createLocalStorageDriver } from '@torrin-kit/storage-local';

const storage = createLocalStorageDriver({
  baseDir: './uploads',        // Final destination
  tempDir: './uploads/.temp',  // Temporary chunks (optional)
  preserveFileName: false,     // Use uploadId as filename (default)
});

Configuration:

  • baseDir (required): Directory where finalized files are stored
  • tempDir (optional): Directory for temporary chunks. Defaults to ${baseDir}/.temp
  • preserveFileName (optional): If true, uses original filename. If false (default), uses uploadId as filename to avoid conflicts

Example directory structure:

uploads/
├── .temp/
│   ├── u_abc123/
│   │   ├── chunk_0
│   │   ├── chunk_1
│   │   └── chunk_2
│   └── u_def456/
│       └── chunk_0
├── u_abc123.mp4      # Finalized file
└── u_def456.pdf      # Finalized file
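
To serve uploads with this driver, hand it to a Torrin server adapter. A minimal sketch, assuming the Express adapter and the createInMemoryStore session store that appear in the usage examples later in this guide:

typescript
import { createTorrinExpressRouter } from '@torrin-kit/server-express';
import { createLocalStorageDriver } from '@torrin-kit/storage-local';

const torrinRouter = createTorrinExpressRouter({
  storage: createLocalStorageDriver({ baseDir: './uploads' }),
  store: createInMemoryStore(), // session store, as in the examples below
});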

AWS S3 and Compatible Storage

Store files in AWS S3, MinIO, Cloudflare R2, or any S3-compatible service:

bash
# npm / yarn / pnpm / bun
npm install @torrin-kit/storage-s3 @aws-sdk/client-s3
yarn add @torrin-kit/storage-s3 @aws-sdk/client-s3
pnpm add @torrin-kit/storage-s3 @aws-sdk/client-s3
bun add @torrin-kit/storage-s3 @aws-sdk/client-s3

AWS S3:

typescript
import { createS3StorageDriver } from '@torrin-kit/storage-s3';

const storage = createS3StorageDriver({
  bucket: 'my-uploads-bucket',
  region: 'us-east-1',
  
  // Optional: Credentials (uses AWS SDK default chain if omitted)
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
  
  // Optional: Custom key prefix
  keyPrefix: 'uploads/',
  
  // Optional: Custom key generation
  getObjectKey: (session) => {
    return `users/${session.metadata.userId}/${session.uploadId}/${session.fileName}`;
  },
});

MinIO (Self-hosted S3):

typescript
const storage = createS3StorageDriver({
  bucket: 'uploads',
  region: 'us-east-1',
  endpoint: 'http://localhost:9000',
  forcePathStyle: true, // Required for MinIO
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin',
  },
});

Cloudflare R2:

typescript
const storage = createS3StorageDriver({
  bucket: 'my-bucket',
  region: 'auto',
  endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

Configuration:

  • bucket (required): S3 bucket name
  • region (required): AWS region
  • endpoint (optional): Custom endpoint for S3-compatible services
  • forcePathStyle (optional): Use path-style URLs (required for MinIO)
  • credentials (optional): AWS credentials (uses default chain if omitted)
  • keyPrefix (optional): Prefix for all object keys (e.g., 'uploads/')
  • getObjectKey (optional): Custom function to generate object keys

Custom Storage Driver

Implement the TorrinStorageDriver interface for custom storage:

Interface

typescript
interface TorrinStorageDriver {
  /**
   * Initialize upload session (create temp directory, validate permissions, etc.)
   */
  initUpload(session: TorrinUploadSession): Promise<void>;

  /**
   * Write a chunk to storage
   */
  writeChunk(
    session: TorrinUploadSession,
    chunkIndex: number,
    stream: Readable,
    expectedSize: number,
    hash?: string
  ): Promise<void>;

  /**
   * Finalize upload (combine chunks, move to final location, etc.)
   */
  finalizeUpload(session: TorrinUploadSession): Promise<TorrinStorageLocation>;

  /**
   * Abort upload (clean up temp files)
   */
  abortUpload(session: TorrinUploadSession): Promise<void>;
}

interface TorrinStorageLocation {
  type: string; // 'local', 's3', 'custom', etc.
  path?: string; // For local storage
  bucket?: string; // For S3
  key?: string; // For S3
  url?: string; // Public URL if available
  [key: string]: any; // Custom fields
}
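
Before looking at a full driver, here is a minimal no-op sketch that shows the lifecycle: initUpload runs once, writeChunk runs per chunk, and exactly one of finalizeUpload or abortUpload closes the session. The console logging is illustrative only:

typescript
import type { TorrinStorageDriver, TorrinUploadSession, TorrinStorageLocation } from '@torrin-kit/server';

const noopDriver: TorrinStorageDriver = {
  async initUpload(session: TorrinUploadSession): Promise<void> {
    console.log(`init ${session.uploadId}`); // runs before the first chunk
  },
  async writeChunk(session, chunkIndex, stream, expectedSize): Promise<void> {
    stream.resume(); // drain the chunk; a real driver persists it instead
    console.log(`chunk ${chunkIndex} of ${session.uploadId} (${expectedSize} bytes)`);
  },
  async finalizeUpload(session): Promise<TorrinStorageLocation> {
    return { type: 'noop' }; // runs once, after the last chunk
  },
  async abortUpload(session): Promise<void> {
    console.log(`abort ${session.uploadId}`); // delete anything persisted so far
  },
};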

Example: Google Cloud Storage

typescript
import { Storage } from '@google-cloud/storage';
import { Readable } from 'stream';
import type { TorrinStorageDriver, TorrinUploadSession, TorrinStorageLocation } from '@torrin-kit/server';

export function createGCSStorageDriver(config: {
  bucket: string;
  projectId: string;
  keyFilename?: string;
  prefix?: string;
}) {
  const storage = new Storage({
    projectId: config.projectId,
    keyFilename: config.keyFilename,
  });
  const bucket = storage.bucket(config.bucket);

  const driver: TorrinStorageDriver = {
    async initUpload(session: TorrinUploadSession): Promise<void> {
      // No initialization needed for GCS
      // Could validate bucket access here
    },

    async writeChunk(
      session: TorrinUploadSession,
      chunkIndex: number,
      stream: Readable,
      expectedSize: number,
      hash?: string
    ): Promise<void> {
      const key = `${config.prefix || ''}temp/${session.uploadId}/chunk_${chunkIndex}`;
      const file = bucket.file(key);

      await new Promise<void>((resolve, reject) => {
        stream
          .on('error', reject) // surface read-side failures too
          .pipe(file.createWriteStream({
            resumable: false,
            metadata: {
              contentType: 'application/octet-stream',
              metadata: {
                chunkIndex: chunkIndex.toString(),
                uploadId: session.uploadId,
              },
            },
          }))
          .on('finish', () => resolve())
          .on('error', reject);
      });
    },

    async finalizeUpload(session: TorrinUploadSession): Promise<TorrinStorageLocation> {
      const finalKey = `${config.prefix || ''}${session.uploadId}/${session.fileName}`;
      const finalFile = bucket.file(finalKey);
      const tempPrefix = `${config.prefix || ''}temp/${session.uploadId}/`;

      // Get all chunks
      const [chunks] = await bucket.getFiles({ prefix: tempPrefix });
      chunks.sort((a, b) => {
        const indexA = parseInt(a.name.split('chunk_')[1]);
        const indexB = parseInt(b.name.split('chunk_')[1]);
        return indexA - indexB;
      });

      // Combine chunks into final file
      const writeStream = finalFile.createWriteStream({
        resumable: false,
        metadata: {
          contentType: session.mimeType,
          metadata: {
            originalName: session.fileName,
            uploadId: session.uploadId,
          },
        },
      });

      for (const chunk of chunks) {
        // Resolve when the source stream ends; 'finish' never fires on the
        // shared write stream while it is held open with { end: false }
        await new Promise<void>((resolve, reject) => {
          chunk.createReadStream()
            .on('end', () => resolve())
            .on('error', reject)
            .pipe(writeStream, { end: false });
        });
      }

      // Close the write stream and wait for GCS to commit the object
      await new Promise<void>((resolve, reject) => {
        writeStream.on('finish', () => resolve()).on('error', reject);
        writeStream.end();
      });

      // Clean up temp chunks
      await Promise.all(chunks.map(chunk => chunk.delete()));

      return {
        type: 'gcs',
        bucket: config.bucket,
        key: finalKey,
        url: `https://storage.googleapis.com/${config.bucket}/${finalKey}`,
      };
    },

    async abortUpload(session: TorrinUploadSession): Promise<void> {
      const tempPrefix = `${config.prefix || ''}temp/${session.uploadId}/`;
      const [chunks] = await bucket.getFiles({ prefix: tempPrefix });
      await Promise.all(chunks.map(chunk => chunk.delete()));
    },
  };

  return driver;
}

Usage:

typescript
import { createTorrinExpressRouter } from '@torrin-kit/server-express';
import { createGCSStorageDriver } from './gcs-driver';

const torrinRouter = createTorrinExpressRouter({
  storage: createGCSStorageDriver({
    bucket: 'my-uploads',
    projectId: 'my-project-id',
    keyFilename: './service-account-key.json',
    prefix: 'uploads/',
  }),
  store: createInMemoryStore(),
});

Example: Azure Blob Storage

typescript
import { BlobServiceClient } from '@azure/storage-blob';
import type { TorrinStorageDriver, TorrinStorageLocation } from '@torrin-kit/server';

export function createAzureBlobDriver(config: {
  connectionString: string;
  containerName: string;
  prefix?: string;
}): TorrinStorageDriver {
  const blobServiceClient = BlobServiceClient.fromConnectionString(config.connectionString);
  const containerClient = blobServiceClient.getContainerClient(config.containerName);

  return {
    async initUpload(session): Promise<void> {
      // Ensure container exists
      await containerClient.createIfNotExists();
    },

    async writeChunk(session, chunkIndex, stream, expectedSize, hash): Promise<void> {
      const blobName = `${config.prefix || ''}temp/${session.uploadId}/chunk_${chunkIndex}`;
      const blockBlobClient = containerClient.getBlockBlobClient(blobName);
      
      // uploadStream(stream, bufferSize?, maxConcurrency?) buffers internally;
      // the defaults are fine for typical chunk sizes
      await blockBlobClient.uploadStream(stream);
    },

    async finalizeUpload(session): Promise<TorrinStorageLocation> {
      const finalBlobName = `${config.prefix || ''}${session.uploadId}/${session.fileName}`;
      const finalBlobClient = containerClient.getBlockBlobClient(finalBlobName);
      
      // Combine chunks
      const tempPrefix = `${config.prefix || ''}temp/${session.uploadId}/`;
      const chunks: string[] = [];
      
      for await (const blob of containerClient.listBlobsFlat({ prefix: tempPrefix })) {
        chunks.push(blob.name);
      }
      
      // Numeric sort; a plain sort() would order chunk_10 before chunk_2
      chunks.sort((a, b) =>
        parseInt(a.split('chunk_')[1], 10) - parseInt(b.split('chunk_')[1], 10)
      );
      
      // One way to combine: stage each chunk as a block, then commit the
      // block list in order. downloadToBuffer holds one chunk in memory at
      // a time, which is acceptable for typical chunk sizes.
      const blockIds: string[] = [];
      for (const [i, name] of chunks.entries()) {
        const blockId = Buffer.from(String(i).padStart(6, '0')).toString('base64');
        const data = await containerClient.getBlobClient(name).downloadToBuffer();
        await finalBlobClient.stageBlock(blockId, data, data.length);
        blockIds.push(blockId);
      }
      await finalBlobClient.commitBlockList(blockIds);

      return {
        type: 'azure',
        container: config.containerName,
        blob: finalBlobName,
        url: finalBlobClient.url,
      };
    },

    async abortUpload(session): Promise<void> {
      const tempPrefix = `${config.prefix || ''}temp/${session.uploadId}/`;
      
      for await (const blob of containerClient.listBlobsFlat({ prefix: tempPrefix })) {
        await containerClient.deleteBlob(blob.name);
      }
    },
  };
}
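
Usage mirrors the GCS example; a sketch assuming the driver above lives in ./azure-driver and the connection string comes from an environment variable:

typescript
import { createTorrinExpressRouter } from '@torrin-kit/server-express';
import { createAzureBlobDriver } from './azure-driver';

const torrinRouter = createTorrinExpressRouter({
  storage: createAzureBlobDriver({
    connectionString: process.env.AZURE_STORAGE_CONNECTION_STRING!,
    containerName: 'uploads',
    prefix: 'uploads/',
  }),
  store: createInMemoryStore(),
});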

Storage Strategy Patterns

Hybrid Storage

Use different storage based on file size or type:

typescript
import type { Readable } from 'stream';
import type { TorrinStorageDriver, TorrinUploadSession, TorrinStorageLocation } from '@torrin-kit/server';

class HybridStorageDriver implements TorrinStorageDriver {
  constructor(
    private localDriver: TorrinStorageDriver,
    private s3Driver: TorrinStorageDriver,
    private threshold: number = 100 * 1024 * 1024 // 100MB
  ) {}

  private getDriver(session: TorrinUploadSession): TorrinStorageDriver {
    return session.fileSize > this.threshold ? this.s3Driver : this.localDriver;
  }

  async initUpload(session: TorrinUploadSession): Promise<void> {
    return this.getDriver(session).initUpload(session);
  }

  async writeChunk(
    session: TorrinUploadSession,
    index: number,
    stream: Readable,
    size: number,
    hash?: string
  ): Promise<void> {
    return this.getDriver(session).writeChunk(session, index, stream, size, hash);
  }

  async finalizeUpload(session: TorrinUploadSession): Promise<TorrinStorageLocation> {
    return this.getDriver(session).finalizeUpload(session);
  }

  async abortUpload(session: TorrinUploadSession): Promise<void> {
    return this.getDriver(session).abortUpload(session);
  }
}

// Usage
const storage = new HybridStorageDriver(
  createLocalStorageDriver({ baseDir: './uploads' }),
  createS3StorageDriver({ bucket: 'large-files', region: 'us-east-1' })
);
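
The class above routes purely on size; to route on file type instead (the other case mentioned at the start of this pattern), only getDriver needs to change. A sketch, assuming session.mimeType is populated as in the GCS example:

typescript
private getDriver(session: TorrinUploadSession): TorrinStorageDriver {
  // Send videos to S3, everything else to local disk
  return session.mimeType.startsWith('video/') ? this.s3Driver : this.localDriver;
}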

CDN Integration

Automatically upload to CDN after finalization:

typescript
class CDNStorageDriver implements TorrinStorageDriver {
  constructor(
    private baseDriver: TorrinStorageDriver,
    private cdnUploader: (location: TorrinStorageLocation) => Promise<string>
  ) {}

  // Delegate other methods to baseDriver...

  async finalizeUpload(session: TorrinUploadSession): Promise<TorrinStorageLocation> {
    const location = await this.baseDriver.finalizeUpload(session);
    
    // Upload to CDN
    const cdnUrl = await this.cdnUploader(location);
    
    return {
      ...location,
      cdnUrl,
    };
  }
}
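
What cdnUploader does depends entirely on your CDN. A sketch against a purely hypothetical push endpoint (the URL, request body, and response shape are illustrative, not a real API):

typescript
const storage = new CDNStorageDriver(
  createS3StorageDriver({ bucket: 'my-uploads', region: 'us-east-1' }),
  async (location) => {
    // Hypothetical CDN API; substitute your provider's real endpoint
    const res = await fetch('https://cdn.example.com/api/push', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ bucket: location.bucket, key: location.key }),
    });
    const { url } = (await res.json()) as { url: string };
    return url;
  }
);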

Best Practices

  1. Validate configuration in initUpload (check permissions, confirm the bucket exists, etc.)
  2. Use streaming in writeChunk to handle large files efficiently (see the sketch after this list)
  3. Clean up on errors - Delete temp files in abortUpload
  4. Return complete location info - Include URLs, paths, metadata
  5. Handle concurrent chunks - Chunks of one upload may arrive in parallel, so writeChunk must be safe to call concurrently
  6. Test with real data - Verify large files, network interruptions, permission errors
  7. Log operations - Track uploads for debugging
  8. Consider costs - S3 charges for API calls as well as storage
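
As an illustration of points 2 and 5, here is a sketch of a streaming chunk writer that verifies size and an optional hash while writing to disk. It assumes the hash is a SHA-256 hex digest; check what your Torrin client actually sends:

typescript
import { createHash } from 'crypto';
import { createWriteStream } from 'fs';
import { pipeline } from 'stream/promises';
import { Transform, Readable } from 'stream';

async function writeChunkToFile(
  path: string,
  stream: Readable,
  expectedSize: number,
  hash?: string
): Promise<void> {
  const digest = createHash('sha256');
  let bytes = 0;

  // Count bytes and feed the hash as data streams through, without buffering
  const tap = new Transform({
    transform(chunk, _enc, cb) {
      bytes += chunk.length;
      digest.update(chunk);
      cb(null, chunk);
    },
  });

  await pipeline(stream, tap, createWriteStream(path));

  if (bytes !== expectedSize) {
    throw new Error(`size mismatch: expected ${expectedSize}, got ${bytes}`);
  }
  if (hash && digest.digest('hex') !== hash) {
    throw new Error('chunk hash mismatch');
  }
}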

Next Steps