Lightning.ApolloClient (Lightning v2.14.5-pre1)
HTTP client for communicating with the Apollo AI service.

This module provides a Tesla-based HTTP client for interacting with Apollo, an external AI service that powers Lightning's intelligent assistance features. Apollo offers two main AI services:

  1. Job Chat - Provides AI assistance for coding tasks, debugging, and adaptor-specific guidance within individual workflow jobs
  2. Workflow Chat - Generates complete workflow templates from natural language descriptions

Configuration

The Apollo client requires the following configuration values:

  • :endpoint - Base URL of the Apollo service
  • :ai_assistant_api_key - Authentication key for API access

Summary

Types

Context information for job-specific AI assistance.

Functions

Requests AI assistance for job-specific coding tasks and debugging.

Performs a health check on the Apollo service endpoint.

Generates or improves workflow templates using AI assistance.

Types

context()

@type context() :: %{expression: String.t(), adaptor: String.t()} | %{}

Context information for job-specific AI assistance.

Contains the job's expression code and adaptor information to help the AI provide more targeted and relevant assistance.
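A minimal sketch of building such a context map (the expression and adaptor name are illustrative; the type also permits an empty map when no job context is available):

```elixir
# Context passed to job_chat/2; both fields describe the job being edited.
context = %{
  expression: "fn(state => state)",
  adaptor: "@openfn/language-http"
}
```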

opts()

@type opts() :: keyword()

Functions

job_chat(content, opts \\ [])

@spec job_chat(String.t(), opts()) :: Tesla.Env.result()

Requests AI assistance for job-specific coding tasks and debugging.

Sends user queries along with job context (expression code and adaptor) to the Apollo job_chat service. The AI provides targeted assistance for coding tasks, error debugging, adaptor-specific guidance, and best practices.

Parameters

  • content - User's question or request for assistance
  • opts - Keyword list of options:
    • :context - Job context including expression code and adaptor info (default: %{})
    • :history - Previous conversation messages for context (default: [])
    • :meta - Additional metadata like session IDs or user preferences (default: %{})

Returns

Tesla.Env.result() with response body containing:

  • "history" - Updated conversation including AI response
  • "usage" - Token usage and cost information
  • "meta" - Updated metadata
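A hedged usage sketch, assuming the Apollo endpoint is configured and reachable (the question, expression, and adaptor are illustrative):

```elixir
context = %{
  expression: "fn(state => state)",
  adaptor: "@openfn/language-http"
}

case Lightning.ApolloClient.job_chat("Why does my job time out?",
       context: context,
       history: [],
       meta: %{}
     ) do
  {:ok, %Tesla.Env{status: 200, body: body}} ->
    # The last entry in "history" is the AI's reply to this query.
    List.last(body["history"])

  {:ok, %Tesla.Env{status: status}} ->
    {:error, {:unexpected_status, status}}

  {:error, reason} ->
    {:error, reason}
end
```

Matching on `Tesla.Env` lets callers distinguish a non-200 service reply from a transport-level failure.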

test()

@spec test() :: :ok | :error

Performs a health check on the Apollo service endpoint.

Sends a GET request to the root endpoint to verify the service is running and accessible. This should be called before attempting AI operations to ensure graceful degradation when the service is unavailable.

Returns

  • :ok - Service responded with 2xx status code
  • :error - Service unavailable, network error, or non-2xx response
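A sketch of the recommended gating pattern — checking service health before exposing AI features (the two helper functions are hypothetical placeholders for application code):

```elixir
# Degrade gracefully: only surface AI assistance when Apollo is reachable.
case Lightning.ApolloClient.test() do
  :ok -> enable_ai_assistant()        # hypothetical helper
  :error -> show_unavailable_notice() # hypothetical helper
end
```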

workflow_chat(content, opts \\ [])

@spec workflow_chat(String.t(), opts()) :: Tesla.Env.result()

Generates or improves workflow templates using AI assistance.

Sends requests to the Apollo workflow_chat service to create complete workflow YAML definitions from natural language descriptions. Can also iteratively improve existing workflows based on validation errors or user feedback.

Parameters

  • content - Natural language description of desired workflow functionality
  • opts - Keyword list of options:
    • :code - Optional existing workflow YAML to modify or improve
    • :errors - Optional validation errors from previous workflow attempts
    • :history - Previous conversation messages for context (default: [])
    • :meta - Additional metadata (default: %{})

Returns

Tesla.Env.result() with response body containing:

  • "response" - Human-readable explanation of the generated workflow
  • "response_yaml" - Complete workflow YAML definition
  • "usage" - Token usage and cost information
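A hedged sketch of an iterative-improvement call, assuming a prior attempt produced `existing_yaml` and a validation error (the description and error text are illustrative):

```elixir
opts = [
  code: existing_yaml,                        # YAML from a previous attempt
  errors: ["jobs must have unique names"],    # illustrative validation error
  history: [],
  meta: %{}
]

with {:ok, %Tesla.Env{status: 200, body: body}} <-
       Lightning.ApolloClient.workflow_chat(
         "Sync new form submissions to the reporting system daily",
         opts
       ) do
  # "response_yaml" carries the regenerated workflow definition;
  # "response" carries the human-readable explanation.
  body["response_yaml"]
end
```

Passing `:code` and `:errors` together is what drives the improvement loop: the service revises the supplied YAML rather than generating a workflow from scratch.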