Guides

Best Practices

Best practices to help you build reliable, production-ready applications with the MyoSapiens SDK and API.

New to MyoSapiens? Start with the Introduction to understand key concepts, then install the SDK and follow the Retargeting Tutorial.

Security

API Key Management

✅ Do:

  • Store API keys in environment variables
  • Use secrets management systems in production (AWS Secrets Manager, HashiCorp Vault, etc.)
  • Rotate API keys regularly
  • Use different keys for development and production

❌ Don't:

  • Commit API keys to version control
  • Hardcode keys in source code
  • Share keys between team members (use separate keys per developer)

# ✅ Good
import os
from myosdk import Client
client = Client(api_key=os.getenv("MYO_API_KEY"))

# ❌ Bad
client = Client(api_key="v2m_live_abc123...")  # Never do this!

Server-Side Integration

The SDK is designed for server-side use. Recommended architecture:

Frontend → Your Backend → MyoSapiens SDK → API

This provides:

  • Better security (no client-side key exposure)
  • No CORS issues
  • Cleaner job management
  • Centralized error handling
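This split can be sketched as a thin service object on your backend. `RetargetService` and `client_factory` are illustrative names for this sketch, not part of the SDK; in a real app the factory would be something like `lambda: Client(api_key=os.getenv("MYO_API_KEY"))`:

```python
class RetargetService:
    """Backend-side wrapper: only the server holds the API key and
    talks to MyoSapiens; the frontend calls your backend endpoint."""

    def __init__(self, client_factory):
        # client_factory creates a fresh SDK client per request,
        # e.g. lambda: Client(api_key=os.getenv("MYO_API_KEY"))
        self._client_factory = client_factory

    def submit(self, tracker_path, markerset_path):
        client = self._client_factory()
        try:
            # The key never leaves the server; the frontend only sees
            # the job result (or an opaque job id you return instead).
            return client.retarget(
                tracker=tracker_path,
                markerset=markerset_path,
                timeout=300,
            )
        finally:
            client.close()
```

Your frontend then talks only to your own endpoint, which keeps keys, CORS, and error handling in one place.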

Error Handling

Always implement proper error handling for production applications:

from myosdk import Client
from myosdk.exceptions import ApiError, NotFoundError, ValidationError

try:
    asset = client.assets.get(asset_id)
except NotFoundError:
    print("Asset not found")
    # Handle missing asset
except ValidationError as e:
    print(f"Invalid request: {e.message}")
    # Handle validation errors
except ApiError as e:
    print(f"API error: {e.message}")
    # Log and handle error

See the Error Handling guide for complete patterns.

Asset Management

Validate Before Use

Always verify assets are ready before using them in jobs:

# Check asset status (when using asset IDs)
asset = client.assets.get(asset_id)
if asset.status != "completed":
    raise ValueError("Asset upload not completed")

# Verify purpose matches
if asset.purpose != "retarget":
    raise ValueError("Asset has wrong purpose for retarget job")

Clean Up Unused Assets

Periodically clean up unused assets to manage storage:

from myosdk.exceptions import ValidationError

# List candidate assets (filter by purpose if needed; confirm they are no longer in use)
assets = client.assets.list(purpose="retarget", limit=100)

for asset in assets:
    try:
        client.assets.delete(asset.id)
    except ValidationError:
        # Asset may have been referenced since listing
        pass

Job Management

Use Timeouts

Always set timeouts for long-running operations:

from myosdk.exceptions import JobTimeoutError

# Blocking call with timeout (5 minutes)
try:
    result = client.retarget(
        tracker="motion.c3d",
        markerset="markerset.xml",
        timeout=300,
    )
    result.download_all("out/")
except JobTimeoutError:
    print("Job did not complete within timeout")
    # Handle timeout appropriately

Check Job Status

Don't assume jobs succeed; rely on exceptions and result attributes:

from myosdk.exceptions import JobFailedError

try:
    result = client.retarget(
        tracker="motion.c3d",
        markerset="markerset.xml",
        timeout=600,
    )
    # Process output
    result.download_all("out/")
except JobFailedError as e:
    print(f"Job failed: {e}")
    # Handle failure

Handle Job Failures

Implement retry logic for transient failures:

import time
from myosdk.exceptions import RateLimitError, ServerError

def retry_with_backoff(func, max_retries=3):
    """Call func, retrying transient failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return func()
        except (RateLimitError, ServerError) as e:
            if attempt == max_retries - 1:
                raise
            # Honor the server's Retry-After hint when present,
            # otherwise back off exponentially (1s, 2s, 4s, ...)
            delay = getattr(e, "retry_after", None) or 2 ** attempt
            time.sleep(delay)
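The same pattern can be exercised standalone with only the standard library; `TransientError` below is a stand-in for `RateLimitError`/`ServerError`, and the small `base_delay` keeps the demo fast:

```python
import time

class TransientError(Exception):
    """Stand-in for RateLimitError / ServerError in this sketch."""

def retry_with_backoff(func, max_retries=3, base_delay=0.01):
    for attempt in range(max_retries):
        try:
            return func()
        except TransientError:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * 2 ** attempt)

attempts = {"count": 0}

def flaky_call():
    # Fails twice with a simulated transient error, then succeeds
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise TransientError("simulated 429")
    return "done"

print(retry_with_backoff(flaky_call))  # succeeds on the third attempt
```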

File Handling

Use Appropriate Purposes

Let the SDK auto-detect file purposes when passing file paths to retarget() or create_retarget_job(), or specify explicitly when uploading:

# File paths in retarget - auto-upload and auto-detect purpose
result = client.retarget(tracker="motion.c3d", markerset="markerset.xml")

# Explicit upload with purpose (if needed)
asset = client.assets.upload("motion.c3d", purpose="retarget")

Handle Large Files

Ensure your files are within your plan's size limits. The SDK uploads automatically when you pass file paths:

# File paths - SDK uploads automatically
result = client.retarget(tracker="motion.c3d", markerset="markerset.xml")
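A simple pre-flight check avoids starting an upload that is doomed to fail. The 200 MB value below is a placeholder, not a documented limit; substitute your plan's actual limit:

```python
import os

MAX_UPLOAD_BYTES = 200 * 1024 * 1024  # placeholder; check your plan's limit

def check_upload_size(path, max_bytes=MAX_UPLOAD_BYTES):
    """Raise ValueError before uploading if the file exceeds the limit."""
    size = os.path.getsize(path)
    if size > max_bytes:
        raise ValueError(
            f"{path} is {size} bytes, over the {max_bytes}-byte limit"
        )
    return size
```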

Resource Management

Close Clients

Always close clients when done, especially in long-running applications:

# Using context manager (recommended)
with Client(api_key=api_key) as client:
    # Use client
    assets = client.assets.list()

# Or explicitly close
client = Client(api_key=api_key)
try:
    # Use client
    pass
finally:
    client.close()

Use Pagination

When listing resources, use pagination for large datasets:

# Process all assets in batches
offset = 0
limit = 50

while True:
    assets = client.assets.list(limit=limit, offset=offset)

    if not assets:
        break

    # Process batch
    for asset in assets:
        process_asset(asset)

    offset += limit
    if len(assets) < limit:
        break
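The loop above can be wrapped in a generator so callers iterate lazily without managing offsets themselves. `iter_assets` is an illustrative helper, not an SDK method:

```python
def iter_assets(client, limit=50, **filters):
    """Yield assets one at a time, fetching pages of `limit` lazily.

    Usage: for asset in iter_assets(client, purpose="retarget"): ...
    """
    offset = 0
    while True:
        page = client.assets.list(limit=limit, offset=offset, **filters)
        if not page:
            return
        yield from page
        if len(page) < limit:
            return  # short page means we reached the end
        offset += limit
```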

Performance

Batch Operations

When possible, batch operations to reduce API calls:

# List jobs once, then filter in code
all_jobs = client.jobs.list(limit=100, status=["SUCCEEDED", "FAILED"])
succeeded = [j for j in all_jobs if j.status == "SUCCEEDED"]

Logging and Monitoring

Log Important Operations

Log key operations for debugging and monitoring:

import logging
from myosdk import Client
from myosdk.exceptions import ApiError

logger = logging.getLogger(__name__)

try:
    result = client.retarget(
        tracker="motion.c3d",
        markerset="markerset.xml",
        stream_status=True,
        on_status=lambda e: logger.info(f"Job status: {e.status}"),
    )
    logger.info(f"Job completed: {result.job.id}, time: {result.processing_time_seconds}s")
except ApiError as e:
    logger.error(f"Retarget failed: {e.message}", exc_info=True)
    raise

Monitor Job Progress

For long-running jobs, use stream_status or a non-blocking job handle with logging:

# Option 1: stream_status with callback
result = client.retarget(
    tracker="motion.c3d",
    markerset="markerset.xml",
    stream_status=True,
    on_status=lambda e: logger.info(f"Status: {e.status}"),
)

# Option 2: Non-blocking, manual polling
job = client.create_retarget_job(
    tracker="motion.c3d",
    markerset="markerset.xml",
    block_until_complete=False,
)
logger.info(f"Job {job.id} queued")
for event in job.stream():
    logger.info(f"Job {job.id} status: {event.status}")
result = job.wait(timeout=600)

Next Steps