Building AI-First Applications with Matter & Gas

ai-applications · workflow-orchestration · platform · rag · document-intelligence · enterprise

A Platform for Advanced Workflow Creation

For AI-first startups, the challenge isn't just accessing powerful language models — it's orchestrating them into reliable, production-ready workflows that can handle real user interactions. Matter & Gas addresses this challenge with a comprehensive platform that turns AI experimentation into enterprise-grade applications.

The Core Platform: More Than Just Model Access

At its heart, Matter & Gas operates on a graph-based workflow system where AI interactions are built as connected nodes rather than isolated API calls.

Each workflow consists of specialized nodes — from model invocation to document search to conditional routing — connected by edges that define the flow of information.

Nine core node types cover the full spectrum of AI application needs:

  • ModelInvoke → AI inference across providers with prompt resolution and token budgets
  • Router → conditional branching based on user intent or workflow state
  • VectorSearch → retrieval-augmented generation from document collections
  • VectorWrite → embedding generation and direct upserts into Pinecone
  • SlotTracker → multi-turn slot collection for structured inputs
  • ConversationMemory → context persistence across turns
  • IntentClassifier → classification-driven routing
  • Format → enforce schemas or reshape model output
  • StreamToClient → real-time streaming of partial responses with metadata
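
To make the graph model concrete, here is a minimal sketch of how a workflow, its edges, and the shared state might be shaped. The type and field names below are assumptions for illustration, not the platform's actual schema.

```typescript
// Hypothetical shapes for a Matter & Gas workflow graph (illustrative only;
// field names are assumptions, not the platform's actual schema).
type NodeType =
  | "ModelInvoke" | "Router" | "VectorSearch" | "VectorWrite"
  | "SlotTracker" | "ConversationMemory" | "IntentClassifier"
  | "Format" | "StreamToClient";

interface WorkflowNode {
  id: string;                       // unique within the workflow
  type: NodeType;
  config?: Record<string, unknown>; // node-specific settings (model, prompt, topK, ...)
}

interface WorkflowEdge {
  from: string; // source node id (virtual START/END edges are added by the runner)
  to: string;
}

interface WorkflowDefinition {
  workflowId: string;
  nodes: WorkflowNode[];
  edges: WorkflowEdge[];
}

// Shared state every node can read from and write to as the graph executes.
interface GraphState {
  inputs: Record<string, unknown>;            // caller-supplied values
  outputs: Record<string, { text?: string }>; // keyed by node id
  conversationId?: string;
  allowedDocumentIds?: string[];              // used by VectorSearch filtering
}
```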

Workflows maintain a shared GraphState, enabling complex interactions where earlier conversations inform later responses, user intents drive routing, and vector searches provide grounding.

The workflow runner enforces strict safety rules:

  • Virtual START/END edges added automatically
  • Schema and connectivity validation before execution
  • All nodes must be reachable, all router/slot targets must exist

This ensures workflows are deployment-ready before they ever run.
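
As a rough illustration of what such pre-execution checks can look like, here is a sketch of a reachability and edge-target check over the hypothetical types above. It is not the platform's actual validator.

```typescript
// Sketch of pre-execution validation: every node must be reachable, and every
// edge must point at a node that exists. Illustrative only.
function validateWorkflow(def: WorkflowDefinition): string[] {
  const errors: string[] = [];
  const ids = new Set(def.nodes.map((n) => n.id));

  for (const e of def.edges) {
    if (!ids.has(e.from)) errors.push(`Edge source "${e.from}" does not exist`);
    if (!ids.has(e.to)) errors.push(`Edge target "${e.to}" does not exist`);
  }

  // Breadth-first search from the entry nodes (those with no incoming edge,
  // i.e. the ones the runner would wire to its virtual START).
  const hasIncoming = new Set(def.edges.map((e) => e.to));
  const queue = def.nodes.filter((n) => !hasIncoming.has(n.id)).map((n) => n.id);
  const reachable = new Set(queue);
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const e of def.edges) {
      if (e.from === current && !reachable.has(e.to)) {
        reachable.add(e.to);
        queue.push(e.to);
      }
    }
  }
  for (const n of def.nodes) {
    if (!reachable.has(n.id)) errors.push(`Node "${n.id}" is unreachable`);
  }
  return errors;
}
```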

Prompt Management That Scales

A standout feature of Matter & Gas is its centralized prompt engine. Prompts are versioned artifacts, not strings embedded in code:

  • BasePromptVersion → immutable prompt content with SHA-256 hash, stored inline or in S3 (KMS-encrypted)
  • ActivePromptPointer → mutable selector with rollback history
  • CAS Update Lambda → atomic Compare-And-Set prevents race conditions
  • PromptArchiveBucket → archival S3 bucket with 1-year retention and daily cleanup of failed uploads
  • AuditLog table → every prompt deployment or rollback recorded with author, timestamp, and reason
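
The Compare-And-Set step is the piece that keeps concurrent deployments honest. A minimal sketch of how an atomic pointer flip might look as a DynamoDB conditional write follows; the table and attribute names are assumptions, not the platform's actual schema.

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, UpdateCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Sketch: move the ActivePromptPointer to a new version only if nobody else
// has moved it since we read it. Table/attribute names are assumptions.
async function setActivePrompt(
  pointerId: string,
  expectedVersion: string, // the version we read before deciding to update
  newVersion: string,
  author: string
): Promise<boolean> {
  try {
    await ddb.send(
      new UpdateCommand({
        TableName: "ActivePromptPointer",
        Key: { id: pointerId },
        UpdateExpression:
          "SET activeVersion = :new, previousVersion = :old, updatedBy = :by",
        ConditionExpression: "activeVersion = :old", // the Compare-And-Set
        ExpressionAttributeValues: {
          ":new": newVersion,
          ":old": expectedVersion,
          ":by": author,
        },
      })
    );
    return true; // pointer moved atomically
  } catch (err: any) {
    if (err.name === "ConditionalCheckFailedException") {
      return false; // someone else won the race; caller should re-read and retry
    }
    throw err;
  }
}
```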

Resolution hierarchy:

  1. Tenant + Workflow + Model
  2. Workflow + Model
  3. Tenant + Model
  4. Model (global)
  5. Emergency fallback → neutral base prompt from DEFAULT_MODEL_ID
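
In practice the hierarchy amounts to trying progressively less specific keys until one resolves. A sketch under assumed key shapes and a hypothetical lookup helper:

```typescript
// Sketch of the prompt resolution fallthrough. The key format and the
// lookupPrompt() helper are hypothetical, not the platform's actual API.
async function resolvePrompt(
  lookupPrompt: (key: string) => Promise<string | undefined>,
  tenantId: string,
  workflowId: string,
  modelId: string
): Promise<string> {
  const candidates = [
    `${tenantId}#${workflowId}#${modelId}`, // 1. Tenant + Workflow + Model
    `${workflowId}#${modelId}`,             // 2. Workflow + Model
    `${tenantId}#${modelId}`,               // 3. Tenant + Model
    `${modelId}`,                           // 4. Model (global)
  ];
  for (const key of candidates) {
    const prompt = await lookupPrompt(key);
    if (prompt !== undefined) return prompt;
  }
  // 5. Emergency fallback: neutral base prompt keyed by DEFAULT_MODEL_ID.
  const fallback = await lookupPrompt(process.env.DEFAULT_MODEL_ID ?? "");
  return fallback ?? "You are a helpful assistant."; // literal is purely illustrative
}
```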

Template Variables

Prompts can interpolate runtime state using ${…} placeholders:

  • ${inputs.user_prompt} → caller input
  • ${workflowId} / ${conversationId} → workflow context
  • ${nodeId.output.text} → prior node outputs
  • ${slotTrackerNode.userName} → collected slot values
  • ${context.sessionId} → session identifier

⚠️ Dot-notation only. No arrays, loops, or conditionals. Missing variables resolve to empty strings with warnings logged.
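
A sketch of what dot-notation interpolation with those semantics can look like (the resolver below is illustrative, not the platform's implementation):

```typescript
// Sketch: resolve ${path.to.value} placeholders against a state object using
// dot-notation only. Missing paths become empty strings and log a warning.
function interpolate(template: string, state: Record<string, unknown>): string {
  return template.replace(/\$\{([^}]+)\}/g, (_match, path: string) => {
    let value: unknown = state;
    for (const segment of path.trim().split(".")) {
      if (value !== null && typeof value === "object" && segment in (value as object)) {
        value = (value as Record<string, unknown>)[segment];
      } else {
        console.warn(`Template variable not found: ${path}`); // warn, don't fail
        return "";
      }
    }
    return String(value ?? "");
  });
}

// Example: "${inputs.user_prompt}" walks state.inputs.user_prompt.
const text = interpolate("Hello ${inputs.user_prompt}", { inputs: { user_prompt: "world" } });
```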

Multi-Provider AI Integration

Matter & Gas avoids lock-in with a centralized Model Registry. Each model entry defines:

  • Identity: provider (OpenAI, Anthropic via Bedrock, Amazon, Meta)
  • Capabilities: streaming, JSON mode, function calling, multimodal
  • Limits: context windows, reserved output tokens
  • Pricing: per 1K tokens, tracked for cost control
  • Circuit Breaker: per-model protection with DynamoDB-backed state

Note: Failover between providers (e.g. Claude → GPT-4o) is not automatic. Instead, workflows select an explicit modelId or rely on the environment's DEFAULT_MODEL_ID.
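
A registry entry might look roughly like the sketch below; the field names are assumptions based on the capabilities listed above, not the platform's actual schema.

```typescript
// Hypothetical shape of a Model Registry entry; names are assumptions.
interface ModelRegistryEntry {
  modelId: string;                    // explicit id selected by workflows, or DEFAULT_MODEL_ID
  provider: "openai" | "anthropic" | "amazon" | "meta";
  capabilities: {
    streaming: boolean;
    jsonMode: boolean;
    functionCalling: boolean;
    multimodal: boolean;
  };
  limits: {
    contextWindowTokens: number;
    reservedOutputTokens: number;
  };
  pricing: {
    inputPer1kTokens: number;  // USD, tracked for cost control
    outputPer1kTokens: number; // USD
  };
  circuitBreaker: {
    tableName: string;          // DynamoDB-backed state, e.g. "ModelCircuitBreaker"
    errorRateThreshold: number; // e.g. 0.05 over a 5-minute window
  };
}
```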

Circuit Breaker Reliability

Circuit breakers protect apps from cascading failures:

  • Trip when error rates exceed thresholds (default: 5% in 5 minutes)
  • State persisted in DynamoDB (ModelCircuitBreaker)
  • Fail-open if health-check errors occur (prefer availability)
  • Manual trip/reset supported with audit logging

Ops teams can force a model offline, then reset when the provider recovers.
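
A sketch of the kind of gate a model call might pass through before hitting a provider, including the fail-open behavior. The state shape and helper names are hypothetical.

```typescript
// Sketch of a circuit-breaker gate in front of a model call. The state record
// and readBreakerState() helper are hypothetical, not the platform's API.
interface BreakerState {
  status: "CLOSED" | "OPEN";
  errorRate: number;    // errors / requests in the rolling window
  windowEndsAt: number; // epoch ms
}

async function invokeWithBreaker<T>(
  modelId: string,
  readBreakerState: (modelId: string) => Promise<BreakerState | undefined>,
  call: () => Promise<T>
): Promise<T> {
  let state: BreakerState | undefined;
  try {
    state = await readBreakerState(modelId); // DynamoDB read in the real system
  } catch {
    state = undefined; // fail-open: if the health check errors, prefer availability
  }

  if (state?.status === "OPEN" && Date.now() < state.windowEndsAt) {
    throw new Error(`Circuit open for ${modelId}; choose another model or retry later`);
  }
  return call(); // error accounting that could re-trip the breaker is omitted here
}
```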

Document Intelligence and Vector Search

For knowledge-intensive apps, Matter & Gas provides a document processing pipeline:

  1. Upload → S3 event triggers async job
  2. OCR → Amazon Textract processes PDFs/scanned docs
  3. Chunking & Embedding → Bedrock Titan v2 generates embeddings
  4. Vector Storage → Pinecone, tenant-scoped namespaces
  5. Progress Tracking → DynamoDB updates job state
  6. DLQs → failures routed to DocumentProcessingDLQ or TextractProcessingDLQ
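
As an illustration of steps 3 and 4, a single chunk might be embedded with Titan v2 and upserted into a tenant-scoped Pinecone namespace roughly like this; the index name and metadata fields are assumptions.

```typescript
import { BedrockRuntimeClient, InvokeModelCommand } from "@aws-sdk/client-bedrock-runtime";
import { Pinecone } from "@pinecone-database/pinecone";

const bedrock = new BedrockRuntimeClient({});
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

// Sketch of steps 3-4: embed one chunk with Titan v2, then upsert it into a
// tenant-scoped namespace. Index name and metadata fields are assumptions.
async function indexChunk(tenantId: string, documentId: string, chunkId: string, text: string) {
  const response = await bedrock.send(
    new InvokeModelCommand({
      modelId: "amazon.titan-embed-text-v2:0",
      contentType: "application/json",
      accept: "application/json",
      body: JSON.stringify({ inputText: text }),
    })
  );
  const { embedding } = JSON.parse(new TextDecoder().decode(response.body));

  await pinecone
    .index("documents")   // hypothetical index name
    .namespace(tenantId)  // tenant-scoped namespace
    .upsert([{ id: chunkId, values: embedding, metadata: { documentId, text } }]);
}
```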

Documents are grouped into Collections:

  • Membership controlled via CollectionMembership model
  • VectorSearch nodes filter results against state.allowedDocumentIds
  • Cascade deletion removes vectors, S3 objects, and DynamoDB records
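
On the retrieval side, a VectorSearch node might apply that collection scoping as a metadata filter, reusing the Pinecone client from the sketch above (again illustrative, with assumed index and metadata names):

```typescript
// Sketch of a VectorSearch query: results are restricted to the documents the
// current workflow state is allowed to see (state.allowedDocumentIds).
async function searchAllowedDocuments(
  tenantId: string,
  queryEmbedding: number[],
  allowedDocumentIds: string[]
) {
  const results = await pinecone
    .index("documents")
    .namespace(tenantId)
    .query({
      vector: queryEmbedding,
      topK: 5,
      includeMetadata: true,
      filter: { documentId: { $in: allowedDocumentIds } },
    });
  return results.matches ?? [];
}
```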

This transforms static docs into queryable knowledge bases for support, research, or Q&A.

Production-Ready Infrastructure

Matter & Gas runs on AWS Amplify Gen 2 + CDK extensions, hardened for enterprise:

  • Observability: Powertools Logger/Metrics/Tracer, CloudWatch metrics, X-Ray traces
  • Error Handling: DLQs for all async stages, CAS integrity enforcement, runner timeouts
  • Security: KMS encryption, tenant-scoped S3 prefixes, Pinecone namespaces, WorkflowAccess enforcement
  • Compliance: PromptArchiveBucket + AuditLog provide immutable history
  • Secrets: OpenAI/Pinecone keys pulled from Secrets Manager, never hardcoded
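
For example, provider keys can be fetched from Secrets Manager once per container rather than baked into code or environment files. A minimal sketch, with an assumed secret name:

```typescript
import { SecretsManagerClient, GetSecretValueCommand } from "@aws-sdk/client-secrets-manager";

const secrets = new SecretsManagerClient({});
let cachedPineconeKey: string | undefined;

// Sketch: pull the Pinecone API key from Secrets Manager and cache it for the
// lifetime of the Lambda container. The secret name is an assumption.
async function getPineconeApiKey(): Promise<string> {
  if (cachedPineconeKey) return cachedPineconeKey;
  const result = await secrets.send(
    new GetSecretValueCommand({ SecretId: "matter-gas/pinecone-api-key" })
  );
  cachedPineconeKey = result.SecretString!;
  return cachedPineconeKey;
}
```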

Developer Safety Nets

Developer confidence is built in:

  • CAS protection for prompt updates
  • Workflow schema validation pre-deployment
  • Rollback to prior prompt versions anytime
  • Fallbacks everywhere: prompt resolution, circuit breakers, and streaming all degrade gracefully to maintain continuity

Teams can iterate rapidly without breaking production.

Rapid Iteration and Testing

Workflows can be defined in JSON or through the visual builder:

  • Simple chatbot → ModelInvoke → Format → StreamToClient
  • Complex assistant → IntentClassifier → SlotTracker → VectorSearch → ModelInvoke → Router

Validation ensures correctness, and metrics/logs provide fast feedback.
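
Using the hypothetical shapes from the earlier sketch, the simple chatbot above might be declared roughly like this (node ids, config keys, and the prompt reference are assumptions):

```typescript
// Illustrative definition of the simple chatbot workflow; node ids, config
// keys, and the prompt reference are assumptions, not the platform's schema.
const simpleChatbot: WorkflowDefinition = {
  workflowId: "simple-chatbot",
  nodes: [
    { id: "invoke", type: "ModelInvoke", config: { modelId: "gpt-4o", promptKey: "chatbot-base" } },
    { id: "format", type: "Format", config: { schema: { reply: "string" } } },
    { id: "stream", type: "StreamToClient" },
  ],
  edges: [
    { from: "invoke", to: "format" },
    { from: "format", to: "stream" },
  ],
};
```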

Real-World Application Patterns

Matter & Gas enables:

  • RAG: VectorSearch + ModelInvoke for doc-aware Q&A
  • Multi-Turn Conversations: SlotTracker + ConversationMemory
  • Intent-Driven Routing: IntentClassifier + Router
  • Customer Support: classify queries, search docs, generate answers, and escalate when confidence is low
  • Content Creation: combine user prompts with templates for consistency

The Path Forward

Matter & Gas combines workflow orchestration, prompt versioning, model registry, document intelligence, and production infrastructure into a single foundation.

For AI-first companies, this means:

  • Faster time-to-market
  • Reliable, compliant applications
  • Iteration without fear of regressions

In a rapidly evolving AI landscape, adaptable infrastructure is a competitive edge. Matter & Gas delivers exactly that — enabling teams to build tomorrow's AI applications with the reliability users expect today.
