# Performance Optimization
Optimize your Canvelete rendering workflows for speed and efficiency.
## Design Optimization

### Reduce Element Count
Fewer elements = faster renders. Combine elements where possible:
- Merge overlapping shapes into single elements
- Use background images instead of multiple shape layers
- Group static elements that don't need individual updates
### Optimize Images
| Recommendation | Impact |
|---|---|
| Use WebP format | 25-35% smaller files |
| Resize before upload | Faster processing |
| Use CDN URLs | Faster asset loading |
| Compress images | Reduced memory usage |
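As a sketch of the first two recommendations, you can resize an image and convert it to WebP with Pillow before uploading it; the file names and target size below are placeholders:

```python
# Resize and convert to WebP before upload (requires: pip install Pillow)
from PIL import Image

img = Image.open("hero.png")
img.thumbnail((1600, 1600))  # cap the longest side at 1600px, preserving aspect ratio
img.save("hero.webp", "WEBP", quality=80)
```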
### Font Optimization
- Use web-safe fonts when possible
- Limit font variations (weights, styles)
- Subset fonts to include only needed characters
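For subsetting, the fontTools library can strip a font file down to only the characters your designs actually render; the font file and character set below are placeholders:

```python
# Subset a font to just the glyphs you need (requires: pip install fonttools)
from fontTools import subset

options = subset.Options()
font = subset.load_font("BrandFont.ttf", options)

subsetter = subset.Subsetter(options)
subsetter.populate(text="Hello, World! 0123456789")  # only these characters are kept
subsetter.subset(font)

subset.save_font(font, "BrandFont-subset.ttf", options)
```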
## API Optimization

### Batch Requests
Instead of sending one request per render, group them into a single batch:
```python
# ❌ Slow: Individual requests
for user in users:
    client.render.create(design_id="template", dynamic_data=user)

# ✅ Fast: Batch request
renders = [{"design_id": "template", "dynamic_data": user} for user in users]
client.render.batch(renders)
```

### Use Webhooks for Async
For large renders, use webhooks instead of polling:
```python
import time

# ❌ Slow: Polling
# Note: "async" is a reserved word in Python, so it is passed via dict unpacking.
result = client.render.create(design_id="template", **{"async": True})
while result.status != "completed":
    time.sleep(1)
    result = client.render.get_status(result.render_id)

# ✅ Fast: Webhook notification
result = client.render.create(
    design_id="template",
    webhook="https://your-app.com/webhook",
    **{"async": True},
)
# Processing continues when the webhook is received
```

### Cache Results
Cache rendered images when dynamic data doesn't change:
```python
import hashlib
import json

def get_cached_render(design_id, dynamic_data):
    # cache is your application's cache backend (e.g., Redis or an in-memory store)
    cache_key = hashlib.md5(
        f"{design_id}:{json.dumps(dynamic_data, sort_keys=True)}".encode()
    ).hexdigest()

    cached = cache.get(cache_key)
    if cached:
        return cached

    result = client.render.create(
        design_id=design_id,
        dynamic_data=dynamic_data
    )
    cache.set(cache_key, result, ttl=3600)
    return result
```

## Quality vs Speed
### Quality Settings
| Quality | Use Case | Render Time |
|---|---|---|
| 60-70 | Thumbnails, previews | Fastest |
| 80-85 | Web images | Balanced |
| 90-95 | Print, high-quality | Slower |
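For example, a preview can render at a lower setting than a print asset. The `quality` parameter name below is an assumption about the render API, so check the API Reference for the exact signature:

```python
# Assumption: render.create accepts a `quality` value in the 0-100 range.
preview = client.render.create(design_id="template", dynamic_data=data, quality=65)
web_image = client.render.create(design_id="template", dynamic_data=data, quality=85)
print_asset = client.render.create(design_id="template", dynamic_data=data, quality=95)
```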
### Format Selection
| Format | Best For | Speed |
|---|---|---|
| WebP | Web images | Fastest |
| JPEG | Photos | Fast |
| PNG | Graphics with transparency | Medium |
| PDF | Documents | Slower |
| SVG | Vector graphics | Variable |
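If you render for several destinations, a small helper can keep format choices consistent with the table above. The `format` parameter and the helper itself are assumptions for illustration, not part of the documented API:

```python
# Hypothetical helper mirroring the table above.
# Assumption: render.create accepts a `format` parameter.
FORMAT_BY_USE_CASE = {
    "web": "webp",
    "photo": "jpeg",
    "transparency": "png",
    "document": "pdf",
    "vector": "svg",
}

def render_for(use_case, design_id, dynamic_data):
    return client.render.create(
        design_id=design_id,
        dynamic_data=dynamic_data,
        format=FORMAT_BY_USE_CASE[use_case],
    )
```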
## Parallel Processing

### Concurrent Requests
```python
import asyncio
from canvelete import CanveleteClient

client = CanveleteClient(api_key="YOUR_API_KEY")

async def render_parallel(renders, max_concurrent=5):
    semaphore = asyncio.Semaphore(max_concurrent)

    async def render_one(render_config):
        async with semaphore:
            return await client.render.create_async(**render_config)

    return await asyncio.gather(*[
        render_one(r) for r in renders
    ])

# Process 100 renders with max 5 concurrent
results = asyncio.run(render_parallel(render_configs))
```

### Worker Pools
For high-volume processing:
```python
from concurrent.futures import ThreadPoolExecutor

def process_renders(render_configs, workers=10):
    with ThreadPoolExecutor(max_workers=workers) as executor:
        futures = [
            executor.submit(client.render.create, **config)
            for config in render_configs
        ]
        return [f.result() for f in futures]
```

## Monitoring
### Track Render Times
```python
import time

def timed_render(design_id, dynamic_data):
    start = time.time()
    result = client.render.create(
        design_id=design_id,
        dynamic_data=dynamic_data
    )
    duration = time.time() - start

    # Log metrics (metrics is your application's metrics client, e.g. StatsD or Datadog)
    metrics.record("render_time", duration, tags={
        "design_id": design_id,
        "format": result.get("format")
    })
    return result
```

### Monitor Usage
Check your usage at canvelete.com/dashboard to:
- Track API calls and credits
- Identify peak usage times
- Plan capacity needs
### Rate Limit Handling
```python
import time
from canvelete.exceptions import RateLimitError

def render_with_backoff(config, max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.render.create(**config)
        except RateLimitError as e:
            if attempt < max_retries - 1:
                wait_time = e.retry_after or (2 ** attempt)
                time.sleep(wait_time)
            else:
                raise
```

## Best Practices Summary
- Optimize designs — Fewer elements, compressed images
- Use batch API — Group multiple renders
- Implement caching — Avoid duplicate renders
- Choose appropriate quality — Match quality to use case
- Use webhooks — Async processing for large jobs
- Monitor performance — Track and optimize render times
- Handle rate limits — Implement exponential backoff
## Next Steps
- Batch Processing — Large-scale generation
- Troubleshooting — Common issues
- API Reference — Full API documentation