Cloud Storage

Store import files directly in Amazon S3.

CSVbox accepts your users’ CSV and Excel uploads, validates the data, and writes the result as CSV or JSON to any S3 bucket you control. Use it as a landing zone for downstream ETL, analytics, or data lake pipelines.

  • Validated data only
  • Column mapping included
  • SOC 2 Type II + GDPR

S3 is the natural landing zone for data pipelines. If your architecture reads from S3 — a Lambda trigger, a Glue job, an Athena query, a Redshift COPY command — routing CSVbox imports to S3 slots directly into that workflow.

Instead of building a custom file upload endpoint and piping it to S3, configure CSVbox as the upload UI and S3 as the destination. Validated files appear in your bucket; your downstream pipeline picks them up.

How It Works

  1. Connect your S3 bucket

    Add an Amazon S3 destination in the CSVbox dashboard. Provide your AWS access key ID, secret access key, bucket name, and region. CSVbox needs only the minimum IAM permissions (s3:PutObject; see the FAQ below for a policy sketch).

  2. Configure the file path

    Set a folder prefix for stored files. CSVbox generates a unique filename per import using the import ID and timestamp.

  3. Choose file format

    Select CSV (cleaned and re-encoded) or JSON (rows as a JSON array, useful for Lambda triggers or Athena).

  4. Embed and ship

    Add the CSVbox widget to your app. Users upload files; validated data lands in your S3 bucket. A quick way to verify objects are arriving is sketched after this list.
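
As referenced in step 4, you can confirm imports are arriving by listing the objects under your configured prefix. This is a minimal boto3 sketch; the bucket name, region, and prefix are placeholders for the values in your own destination config.

    import boto3

    # List recent CSVbox imports under the configured folder prefix.
    # Bucket, region, and prefix below are hypothetical placeholders.
    s3 = boto3.client("s3", region_name="us-east-1")

    resp = s3.list_objects_v2(
        Bucket="my-import-landing-zone",
        Prefix="csvbox-imports/",
    )

    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"], obj["LastModified"])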

Time to first import: ~10 minutes.

Configuration Options

  • AWS credentials: access key and secret, with s3:PutObject as the minimum permission
  • Bucket: any S3 bucket you own
  • Region: any AWS region
  • Folder prefix: path prefix for stored objects
  • File format: CSV or JSON
  • Object metadata: import ID, user ID, row count, schema ID, timestamp
  • Server-side encryption: SSE-S3 (AES-256) or SSE-KMS
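
To see the metadata on a stored object, inspect it with a HEAD request. A minimal boto3 sketch follows; the bucket, object key, and metadata key names are assumptions, so check an actual object in your bucket for the exact names CSVbox writes.

    import boto3

    s3 = boto3.client("s3")

    # Read the user metadata attached to a stored object. S3 lowercases
    # user metadata keys; the names here are assumed for illustration.
    head = s3.head_object(
        Bucket="my-import-landing-zone",       # hypothetical bucket
        Key="csvbox-imports/import-123.json",  # hypothetical object key
    )

    print(head["Metadata"])                  # e.g. import ID, row count
    print(head.get("ServerSideEncryption"))  # "AES256" or "aws:kms"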

S3 + Downstream Patterns

  • Lambda trigger: an S3 event notification fires on each CSVbox file; Lambda processes the rows (see the sketch below)
  • AWS Glue: a Glue crawler catalogs the prefix; a Glue job transforms and loads
  • Redshift COPY: a COPY command loads CSVbox JSON or CSV from S3 into Redshift tables
  • Amazon Athena: query CSVbox JSON files in S3 directly with Athena
  • Google BigQuery: transfer via GCS to BigQuery using the BigQuery Data Transfer Service
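
The Lambda pattern is the most common starting point. Below is a minimal handler sketch, assuming the destination is set to JSON format and an s3:ObjectCreated:* event notification is configured on the folder prefix; the process function is a hypothetical stand-in for your own row handling.

    import json
    import urllib.parse

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Fired by an S3 event notification on each new CSVbox file.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            # Object keys arrive URL-encoded in S3 events.
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            rows = json.loads(body)  # JSON format: an array of row objects

            for row in rows:
                process(row)

    def process(row):
        # Hypothetical stand-in: replace with your own logic.
        print(row)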

Frequently Asked Questions

What IAM permissions does CSVbox need?

The minimum is s3:PutObject on your bucket. Optionally, add s3:GetBucketLocation so CSVbox can verify the bucket’s region.
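
As a sketch, the policy below grants exactly those permissions via boto3; the bucket name and policy name are placeholders for your own values.

    import json

    import boto3

    # Minimal policy: s3:PutObject on the bucket's objects, plus the
    # optional s3:GetBucketLocation. Bucket name is a placeholder.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::my-import-landing-zone/*",
            },
            {
                "Effect": "Allow",
                "Action": "s3:GetBucketLocation",
                "Resource": "arn:aws:s3:::my-import-landing-zone",
            },
        ],
    }

    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="csvbox-put-only",  # hypothetical policy name
        PolicyDocument=json.dumps(policy_document),
    )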

Does CSVbox support S3-compatible storage like MinIO or Cloudflare R2?

Yes, for providers that expose an S3-compatible API. Provide the custom endpoint URL in the destination config.
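
The destination uses the same credentials-plus-endpoint shape you would use with any S3 client. Here is a boto3 sketch of that shape, with placeholder endpoint and keys (for Cloudflare R2 the endpoint takes the form https://<account-id>.r2.cloudflarestorage.com):

    import boto3

    # Same S3 API, different endpoint. All values are placeholders.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://minio.example.com",
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
    )

    print(s3.list_buckets()["Buckets"])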

Can I encrypt objects in S3?

Yes. Configure SSE-S3 or SSE-KMS in the destination config.
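
Independently of the destination config, you can also enforce encryption at the bucket level so every object is encrypted by default. A boto3 sketch, with placeholder bucket and key names:

    import boto3

    s3 = boto3.client("s3")

    # Set default server-side encryption for the landing bucket.
    # Use "AES256" for SSE-S3, or "aws:kms" with a key for SSE-KMS.
    s3.put_bucket_encryption(
        Bucket="my-import-landing-zone",  # hypothetical bucket
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": "alias/my-key",  # hypothetical alias
                    }
                }
            ]
        },
    )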

Is there a file size limit?

CSVbox supports files up to 500 MB. Multipart upload is handled automatically for large files.

Stop building CSV importers.

Ship ours in 15 minutes. Free forever on the Sandbox plan.

No credit card · Embed in minutes · Secure by default