Programming Neam

Chapter 31: Cloud Agentic Stack #

Neam v1.0 adds three cloud infrastructure declarations that turn your agent programs into production-grade services: an Agent Gateway for external access, a Model Router for intelligent LLM selection, and a Skill Marketplace for sharing and discovering skills.

These build on top of the deployment targets you learned in Chapter 19 and Chapter 20 -- while those chapters covered Docker, Kubernetes, Lambda, and Terraform, this chapter covers the application-layer infrastructure that sits in front of your agents.


Cloud Architecture Overview #

text
                    External Clients
                         |
                    +-----------+
                    |  Gateway  |  Auth, rate limiting, routing
                    +-----------+
                         |
                  +------+------+
                  |             |
            +-----------+ +-----------+
            |  Model    | |  Agent    |  Service discovery,
            |  Router   | |  Registry |  health checks
            +-----------+ +-----------+
                  |             |
            +-----+-----+-----+-----+
            |     |     |     |     |
          Agent Agent Agent Agent Agent
            |
      +-----------+
      |Marketplace|  Skill discovery,
      +-----------+  installation

Agent Gateway #

The gateway is the managed entry point for all external task submissions. It handles authentication, rate limiting, and request routing.

Declaration #

neam
gateway AgentAPI {
    auth: {
        method: "oauth2",
        issuer: "https://auth.example.com",
        audience: "neam-agents"
    },
    rate_limit: {
        per_api_key: 1000,
        per_ip: 100,
        burst: 50
    },
    routes: {
        tasks: { handler: "TaskRouter", methods: ["POST"] },
        agents: { handler: "AgentRegistry", methods: ["GET"] },
        health: { handler: "HealthCheck", methods: ["GET"] },
        kill: { handler: "KillSwitch", methods: ["POST"], auth: "admin" }
    },
    observability: {
        metrics: { endpoint: "/metrics", format: "prometheus" },
        tracing: { provider: "opentelemetry", sample_rate: 0.1 }
    }
}

Gateway Fields #

| Field | Required | Description |
|---|---|---|
| `auth` | Yes | Authentication config (`oauth2`, `api_key`, `jwt`, `mtls`) |
| `rate_limit` | No | Per-key, per-IP, and burst limits |
| `routes` | Yes | URL path to handler mapping with allowed HTTP methods |
| `observability` | No | Prometheus metrics endpoint and OTLP tracing |
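The `burst` field alongside a steady per-key limit suggests token-bucket semantics: a bucket refills at the sustained rate and holds up to `burst` tokens, so short spikes are absorbed without raising the long-run limit. A minimal Python sketch of that behavior (illustrative only; not the gateway's actual implementation):

```python
import time

class TokenBucket:
    """Token-bucket limiter: steady refill rate plus a burst allowance."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # sustained refill rate
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# 1000 requests/hour per API key with a burst of 50, as in the example above.
bucket = TokenBucket(rate_per_sec=1000 / 3600, burst=50)
print(all(bucket.allow() for _ in range(50)))  # the burst is served immediately
print(bucket.allow())                          # the 51st request is throttled
```

With these numbers a client can fire 50 requests back to back, after which it is held to roughly one request every 3.6 seconds until the bucket refills.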

Authentication Methods #

| Method | Use Case |
|---|---|
| `"oauth2"` | Enterprise SSO, external partners |
| `"api_key"` | Internal services, development |
| `"jwt"` | Microservice-to-microservice |
| `"mtls"` | High-security zero-trust environments |

Model Router #

The model router provides intelligent model selection based on task complexity, cost constraints, and latency requirements. Instead of hardcoding a model per agent, you define routing strategies that automatically select the best model for each request.

Declaration #

neam
model_router SmartRouter {
    strategy: "cost_optimized",
    routes: {
        simple: { provider: "anthropic", model: "claude-haiku-4-5", max_tokens: 1000 },
        reasoning: { provider: "anthropic", model: "claude-sonnet-4-6", max_tokens: 4000 },
        complex: { provider: "anthropic", model: "claude-opus-4-6", max_tokens: 8000 },
        local: { provider: "ollama", model: "llama3:8b", max_tokens: 2000 }
    },
    fallback_chain: ["anthropic", "openai", "ollama"],
    budget: { per_request_max: 0.50, daily_max: 100.00 }
}

Routing Strategies #

| Strategy | Description | Best For |
|---|---|---|
| `"cost_optimized"` | Use cheapest model that meets quality threshold | High-volume production |
| `"quality_first"` | Always use best model, fall back only on failure | Critical decisions |
| `"latency_first"` | Use fastest model (usually local/haiku) | Real-time applications |
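The `cost_optimized` strategy amounts to a constrained minimization: among routes whose quality tier meets the request's requirement and whose price fits the per-request budget, pick the cheapest. A Python sketch of that selection logic (the route names mirror the declaration above, but the prices and quality tiers are hypothetical placeholders, not Neam's actual pricing table):

```python
# Hypothetical per-request prices (USD) and quality tiers for illustration.
ROUTES = {
    "local":     {"cost": 0.0,   "quality": 1},
    "simple":    {"cost": 0.001, "quality": 2},
    "reasoning": {"cost": 0.003, "quality": 3},
    "complex":   {"cost": 0.015, "quality": 4},
}

def pick_route(required_quality: int, per_request_max: float) -> str:
    """cost_optimized: cheapest route meeting the quality threshold and budget."""
    candidates = [
        (route["cost"], name)
        for name, route in ROUTES.items()
        if route["quality"] >= required_quality and route["cost"] <= per_request_max
    ]
    if not candidates:
        raise RuntimeError("no route satisfies quality and budget constraints")
    return min(candidates)[1]

print(pick_route(required_quality=3, per_request_max=0.50))  # "reasoning"
```

`quality_first` would instead sort candidates by quality descending, and `latency_first` by measured latency ascending; the surrounding budget check stays the same.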

Fallback Chains #

When a provider fails (rate limit, outage, budget exhausted), the router automatically tries the next provider in the fallback chain. This provides resilience without manual intervention:

neam
model_router ResilientRouter {
    strategy: "quality_first",
    routes: {
        primary: { provider: "openai", model: "gpt-4o" },
        secondary: { provider: "bedrock", model: "anthropic.claude-3-5-sonnet" },
        tertiary: { provider: "ollama", model: "qwen2.5:14b" }
    },
    fallback_chain: ["openai", "bedrock", "ollama"],
    budget: { daily_max: 500.00 }
}
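The fallback behavior itself is a simple ordered retry: attempt each provider in the chain, catch provider-level failures, and surface an error only when every option is exhausted. A sketch in Python, with simulated providers standing in for real API clients:

```python
class ProviderError(Exception):
    """Raised on rate limit, outage, or budget exhaustion."""

def call_with_fallback(prompt, chain, providers):
    """Try each provider in order; return the first successful response."""
    errors = []
    for name in chain:
        try:
            return providers[name](prompt)
        except ProviderError as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Simulated providers: the first two fail, the local fallback succeeds.
def openai_call(prompt):
    raise ProviderError("rate limited")

def bedrock_call(prompt):
    raise ProviderError("outage")

providers = {
    "openai": openai_call,
    "bedrock": bedrock_call,
    "ollama": lambda prompt: f"[ollama] {prompt}",
}
print(call_with_fallback("hello", ["openai", "bedrock", "ollama"], providers))
# "[ollama] hello"
```

Keeping a local model such as Ollama at the end of the chain means the program degrades to self-hosted inference rather than failing outright.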

Skill Marketplace #

The marketplace enables sharing, discovery, and installation of skills across teams and organizations:

neam
marketplace NeamSkillStore {
    package_format: {
        manifest: "skill.neam.json",
        signature: "ed25519",
        integrity: "sha256"
    },
    publish_requires: ["signature", "tests_passing", "security_scan"],
    install_policy: {
        verify_signature: true,
        scan_for_vulnerabilities: true,
        sandbox_on_first_run: true
    }
}

Security by Default #

Every skill installed from the marketplace is:

  1. Signature-verified -- ed25519 signing ensures authenticity
  2. Integrity-checked -- SHA-256 hash verification prevents tampering
  3. Security-scanned -- Automated vulnerability scanning before installation
  4. Sandboxed -- First-run isolation prevents supply chain attacks
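The integrity check in step 2 is the easiest piece to see concretely: hash the downloaded package and compare against the digest recorded in the signed manifest. A minimal Python sketch (the manifest shape here is hypothetical; ed25519 signature verification from step 1 would additionally need a crypto library such as PyNaCl and is omitted):

```python
import hashlib

def verify_integrity(package_bytes: bytes, manifest: dict) -> bool:
    """Compare the package's SHA-256 digest to the one in the manifest."""
    expected = manifest["integrity"]["sha256"]
    actual = hashlib.sha256(package_bytes).hexdigest()
    return actual == expected

pkg = b"compiled skill bytecode"
manifest = {"integrity": {"sha256": hashlib.sha256(pkg).hexdigest()}}
print(verify_integrity(pkg, manifest))                # True
print(verify_integrity(pkg + b"tampered", manifest))  # False
```

Because the digest lives inside the signed manifest, an attacker who tampers with the package must also forge the ed25519 signature, which is what makes the two checks stronger together than either alone.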

Combining Cloud Constructs #

Here is a complete program that uses all three cloud declarations together with OWASP-aligned security constructs (goal integrity and a circuit breaker):

neam
budget B { cost: 500.00, tokens: 5000000 }

// Security layer
goal_integrity Goals { declared_objectives: ["serve customer queries securely"] }
circuit_breaker CB { failure_threshold: 5, half_open_timeout: "30s" }

// Cloud layer
gateway API {
    auth: { method: "api_key" },
    rate_limit: { per_api_key: 500, burst: 25 },
    routes: {
        query: { handler: "QueryAgent", methods: ["POST"] },
        health: { handler: "HealthCheck", methods: ["GET"] }
    },
    observability: {
        metrics: { endpoint: "/metrics", format: "prometheus" }
    }
}

model_router Router {
    strategy: "cost_optimized",
    routes: {
        simple: { provider: "ollama", model: "llama3:8b" },
        complex: { provider: "openai", model: "gpt-4o" }
    },
    fallback_chain: ["openai", "ollama"]
}

marketplace Skills {
    install_policy: { verify_signature: true, sandbox_on_first_run: true }
}

// Agent layer
agent QueryAgent {
    provider: "openai",
    model: "gpt-4o-mini",
    system: "Answer customer queries using product docs.",
    budget: B
}

print("Cloud stack initialized");
💡 Tip

For the 4-layer data intelligence architecture that these cloud constructs support in production, see The Intelligent Data Organization with Neam.


Integration with Deployment Targets #

Cloud stack declarations work alongside the existing deployment targets from Chapter 20:

bash
# Compile your agent program with cloud stack
neamc cloud_app.neam -o cloud_app.neamb

# Deploy to Kubernetes with the gateway enabled
neam deploy --target kubernetes \
    --replicas 3 \
    --cpu 500m \
    --memory 1Gi \
    --namespace production

# The gateway routes, model router, and marketplace are
# configured from the compiled declarations