# DynamoDB Export Guide
Learn how to export DynamoDB table data to various formats for backup, analysis, and migration.
## Quick Start

```bash
# Export table to CSV
devo dynamodb export my-table

# Export to JSON with compression
devo dynamodb export my-table -f json --compress gzip
```
## Common Use Cases

### Backup Tables
Create regular backups of your DynamoDB tables:

```bash
# Daily backup with timestamp
devo dynamodb export production-users \
  --output backup-$(date +%Y%m%d).json \
  -f json \
  --compress gzip
```

Add to cron for automated backups:
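For example, a crontab entry that runs the backup above nightly at 2 AM (note that `%` must be escaped as `\%` inside crontab):

```
0 2 * * * devo dynamodb export production-users --output backup-$(date +\%Y\%m\%d).json -f json --compress gzip
```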
### Data Analysis

Export data for analysis in Excel, Pandas, or other tools:

```bash
# Export to CSV for Excel
devo dynamodb export analytics-data -f csv

# Flatten nested objects for easier analysis
devo dynamodb export analytics-data -f csv --mode flatten
```
### Filtered Exports

Export only specific data:

```bash
# Export active users only
devo dynamodb export users \
  --filter "status = :s" \
  --filter-values '{":s": {"S": "active"}}'

# Export a specific user's data
devo dynamodb export orders \
  --key-condition "userId = :uid" \
  --filter-values '{":uid": {"S": "user123"}}'
```
### Data Migration

Export from one environment and import to another:

```bash
# Export from production
devo --profile prod dynamodb export users -f jsonl

# Import to staging with the AWS CLI. Note: batch-write-item expects a
# request map of PutRequest entries (max 25 items per call), so convert
# the JSONL file into that format first.
aws dynamodb batch-write-item --request-items file://users-batch.json --profile staging
```
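The AWS CLI's `batch-write-item` does not accept raw JSONL; it expects a request map of `PutRequest` entries in DynamoDB's typed attribute format, at most 25 items per call. A minimal sketch of the conversion, assuming the export contains plain (untyped) JSON objects; `to_attr` and `jsonl_to_batches` are illustrative helpers, not part of either CLI:

```python
import json

def to_attr(value):
    """Convert a plain Python value to DynamoDB's typed AttributeValue form."""
    if isinstance(value, bool):           # must check bool before int
        return {"BOOL": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}
    if isinstance(value, str):
        return {"S": value}
    if value is None:
        return {"NULL": True}
    if isinstance(value, list):
        return {"L": [to_attr(v) for v in value]}
    if isinstance(value, dict):
        return {"M": {k: to_attr(v) for k, v in value.items()}}
    raise TypeError(f"unsupported type: {type(value)}")

def jsonl_to_batches(lines, table_name, batch_size=25):
    """Group JSONL export lines into batch-write-item request maps (25 items max each)."""
    items = [json.loads(line) for line in lines if line.strip()]
    batches = []
    for i in range(0, len(items), batch_size):
        chunk = items[i:i + batch_size]
        batches.append({
            table_name: [
                {"PutRequest": {"Item": {k: to_attr(v) for k, v in item.items()}}}
                for item in chunk
            ]
        })
    return batches
```

Each returned batch can then be written to its own file and passed to `aws dynamodb batch-write-item --request-items file://...`.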
## Export Formats

### CSV (Default)

Best for Excel and spreadsheet analysis:
Output:
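Assuming the same two sample records used in the JSON example below, the CSV output would look like:

```csv
id,name,email
1,John,john@example.com
2,Jane,jane@example.com
```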
### JSON

Best for programmatic processing:

Output:

```json
[
  {"id": 1, "name": "John", "email": "john@example.com"},
  {"id": 2, "name": "Jane", "email": "jane@example.com"}
]
```
### JSONL (JSON Lines)

Best for streaming and large datasets:

Output:

```jsonl
{"id": 1, "name": "John", "email": "john@example.com"}
{"id": 2, "name": "Jane", "email": "jane@example.com"}
```
### TSV

Tab-separated format:
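Presumably selected via the same `-f` flag used for the other formats (the `tsv` value is an assumption based on the section title):

```bash
devo dynamodb export my-table -f tsv
```

The output matches the CSV examples above, with tabs instead of commas as the column separator.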
## Handling Nested Data

### Strings Mode (Default)

Serializes nested objects as JSON strings:

Output:
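For illustration, assume an item such as `{"id": 1, "address": {"city": "Boston"}}` (the sample item and exact CSV quoting are assumptions, not output copied from the tool). In strings mode, the nested map stays in a single JSON-encoded column:

```csv
id,address
1,"{""city"": ""Boston""}"
```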
### Flatten Mode

Flattens nested objects into separate columns:

Output:
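With the same assumed sample item, `{"id": 1, "address": {"city": "Boston"}}`, flatten mode promotes each nested key to its own column (the dotted `address.city` column name is an assumption about the tool's naming convention):

```csv
id,address.city
1,Boston
```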
### Normalize Mode

Expands lists into multiple rows:

Output:
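For a list-valued attribute, assume an item such as `{"id": 1, "tags": ["a", "b"]}` (a hypothetical sample, not tool output). Normalize mode emits one row per list element, repeating the scalar columns:

```csv
id,tags
1,a
1,b
```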
## Performance Optimization

### Parallel Scan

For large tables, use parallel scanning:
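For example, using the `--parallel-scan` and `--segments` flags referenced in the Troubleshooting section (the table name is illustrative):

```bash
devo dynamodb export big-table --parallel-scan --segments 8
```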
Recommended segments by table size:
- Small (<1GB): 2-4 segments
- Medium (1-10GB): 4-8 segments
- Large (>10GB): 8-16 segments
### Limit Results

Export only a subset of data:
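For example, using the `--limit` flag referenced in the Troubleshooting section:

```bash
# Export at most 10,000 items
devo dynamodb export my-table --limit 10000
```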
## Using Templates

### Save Export Configuration

Save frequently used export configurations:

```bash
devo dynamodb export my-table \
  -a "id,name,email,status" \
  --filter "status = :s" \
  --filter-values '{":s": {"S": "active"}}' \
  --save-template active-users
```
### Use Saved Template

Reuse saved configurations:
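For example, applying the `active-users` template saved above via the `--use-template` flag shown in the Automated Reports section:

```bash
devo dynamodb export my-table --use-template active-users
```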
### List Templates

View all saved templates:
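A plausible invocation (the `--list-templates` flag is an assumption, not an option documented in this guide):

```bash
# Hypothetical flag: list saved export templates
devo dynamodb export --list-templates
```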
## Filtering Data

### Simple Filter
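A single condition, using the same `--filter` and `--filter-values` flags shown elsewhere in this guide:

```bash
devo dynamodb export users \
  --filter "status = :s" \
  --filter-values '{":s": {"S": "active"}}'
```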
### Multiple Conditions

```bash
devo dynamodb export users \
  --filter "status = :s AND age > :age" \
  --filter-values '{":s": {"S": "active"}, ":age": {"N": "18"}}'
```
### Query with Key Condition

```bash
devo dynamodb export orders \
  --key-condition "userId = :uid AND orderDate > :date" \
  --filter-values '{":uid": {"S": "user123"}, ":date": {"S": "2024-01-01"}}'
```
### Using an Index

```bash
devo dynamodb export users \
  --index email-index \
  --key-condition "email = :email" \
  --filter-values '{":email": {"S": "john@example.com"}}'
```
## Customizing Output

### Select Specific Attributes
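Using the `-a` flag shown in the template and best-practices sections:

```bash
devo dynamodb export users -a "id,name,email"
```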
### Custom Delimiter
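A plausible invocation (the `--delimiter` flag is an assumption, not an option documented in this guide):

```bash
# Hypothetical flag: use a semicolon as the column separator
devo dynamodb export users --delimiter ";"
```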
### Boolean Format

```bash
# Numeric format (1/0)
devo dynamodb export users --bool-format numeric

# Uppercase (True/False)
devo dynamodb export users --bool-format uppercase
```
### Null Values
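By analogy with `--bool-format`, a hypothetical `--null-format` flag might control how null attributes are rendered (both the flag and its value are assumptions):

```bash
# Hypothetical flag: render null attributes as empty cells
devo dynamodb export users --null-format empty
```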
### Include Metadata
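A plausible invocation (the `--include-metadata` flag is an assumption, not an option documented in this guide):

```bash
# Hypothetical flag: include table and export metadata in the output
devo dynamodb export users --include-metadata
```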
## Team Workflows

### Share Export Templates

```bash
# Export DynamoDB configuration (includes templates)
devo config export -s dynamodb -o team-dynamodb-config.json

# Share team-dynamodb-config.json with your team

# Import on another machine
devo config import team-dynamodb-config.json -s dynamodb
```
### Automated Reports

Create scheduled exports:

```bash
#!/bin/bash
# daily-report.sh
DATE=$(date +%Y%m%d)
devo dynamodb export analytics \
  --use-template daily-report \
  --output reports/analytics-$DATE.csv
```
## Troubleshooting

### Access Denied

Verify IAM permissions:
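For example, with the AWS CLI (the table name is illustrative):

```bash
# Check which IAM identity your credentials resolve to
aws sts get-caller-identity

# Confirm that identity can read the table's metadata
aws dynamodb describe-table --table-name my-table
```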
Required permissions:

- `dynamodb:Scan`
- `dynamodb:Query`
- `dynamodb:DescribeTable`
### Table Not Found

Check region and table name:
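For example, with the AWS CLI (the region shown is illustrative):

```bash
# List tables visible to your credentials in the target region
aws dynamodb list-tables --region us-east-1
```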
### Export Timeout

For large tables:

- Use parallel scan: `--parallel-scan --segments 8`
- Export in chunks: `--limit 10000`
- Use filters to reduce data: `--filter`
### Memory Issues

For very large exports:

- Use JSONL format instead of JSON
- Enable compression: `--compress gzip`
- Export specific attributes: `-a "id,name"`
## Best Practices

- Use compression for large exports: `--compress gzip`
- Save frequently used configurations: `--save-template`
- Use parallel scan for large tables: `--parallel-scan`
- Filter data when possible: reduce export size with `--filter`
- Choose the appropriate format: CSV for analysis, JSONL for processing
- Test with dry-run: use `--dry-run` to preview before exporting
- Monitor costs: exports consume DynamoDB read capacity
## Next Steps
- DynamoDB Command Reference - Full command options
- AWS Setup - Configure AWS credentials
- Configuration Guide - DynamoDB settings