Deployment

Head to the Deploy section of our docs site to get started.

See below for technical considerations and instructions.

Encryption

Lightning encrypts credentials, TOTP backup codes, and webhook trigger authentication methods at rest. When running in production, an encryption key must be provided.

The key must be 32 random bytes, Base64 encoded when set as an environment variable.

There is a mix task that can generate keys in the correct shape for use as an environment variable:

mix lightning.gen_encryption_key
0bJ9w+hn4ebQrsCaWXuA9JY49fP9kbHmywGd5K7k+/s=

Copy your key (NOT THIS ONE) and set it as PRIMARY_ENCRYPTION_KEY in your environment.
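
If you don't have a Mix environment available, an equivalent key can be generated with standard tooling; for example, with openssl (this produces the same shape: 32 random bytes, Base64 encoded):

# Generate and export a key in one step (equivalent shape to the mix task output)
export PRIMARY_ENCRYPTION_KEY="$(openssl rand -base64 32)"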

Workers

Lightning uses external worker processes to execute Runs. Three settings are required to configure worker authentication:

  • WORKER_RUNS_PRIVATE_KEY
  • WORKER_LIGHTNING_PUBLIC_KEY
  • WORKER_SECRET

You can use the mix lightning.gen_worker_keys task to generate these for convenience.

For more information see the Workers documentation.
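
As a sketch of what those values look like, you could also generate them by hand. This assumes the defaults, where the run-token keys are an RSA keypair with each PEM supplied Base64 encoded on a single line; prefer the mix task where available:

# Hypothetical manual equivalent of `mix lightning.gen_worker_keys` -
# assumes an RSA keypair, Base64 encoded, plus a random shared secret
openssl genrsa -out worker_private.pem 2048
openssl rsa -in worker_private.pem -pubout -out worker_public.pem
export WORKER_RUNS_PRIVATE_KEY="$(base64 -w 0 worker_private.pem)"    # -w 0: no line wrapping (GNU coreutils)
export WORKER_LIGHTNING_PUBLIC_KEY="$(base64 -w 0 worker_public.pem)"
export WORKER_SECRET="$(openssl rand -base64 32)"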

Environment Variables

Note that for secure deployments, it's recommended to provide environment variables via your platform's secret management, for example a combination of Kubernetes Secrets and ConfigMaps.
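
On Kubernetes, for instance, sensitive values might live in a Secret while non-sensitive settings live in a ConfigMap (a sketch; names and values are placeholders):

# Sensitive values in a Secret (placeholder names/values)
kubectl create secret generic lightning-secrets \
  --from-literal=PRIMARY_ENCRYPTION_KEY="$(openssl rand -base64 32)" \
  --from-literal=SECRET_KEY_BASE="change-me"

# Non-sensitive settings in a ConfigMap
kubectl create configmap lightning-config \
  --from-literal=LOG_LEVEL=info \
  --from-literal=URL_SCHEME=https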

Limits

  • WORKER_MAX_RUN_MEMORY_MB - the maximum memory (in MB) a single run may use.
  • RUN_GRACE_PERIOD_SECONDS - how long (in seconds) after WORKER_MAX_RUN_DURATION_SECONDS the server should wait for the worker to send back data on a run.
  • WORKER_MAX_RUN_DURATION_SECONDS - the maximum duration (in seconds) that workflows are allowed to run. (Keep this plus RUN_GRACE_PERIOD_SECONDS below your terminationGracePeriodSeconds if using Kubernetes.)
  • WORKER_CAPACITY - the number of runs a ws-worker instance will take on concurrently.
  • MAX_DATACLIP_SIZE_MB - the maximum size (in MB) of a dataclip created via the webhook trigger URL for a job. This limits the maximum request size via the JSON plug and may (in future) limit the size of dataclips that can be stored as run_results via the websocket connection from a worker.
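
For example, a modest deployment might use the following (illustrative values only; tune to your infrastructure):

# Illustrative limits - adjust to your workloads
WORKER_MAX_RUN_MEMORY_MB=500
WORKER_MAX_RUN_DURATION_SECONDS=300
RUN_GRACE_PERIOD_SECONDS=10       # 300 + 10 should stay below terminationGracePeriodSeconds on Kubernetes
WORKER_CAPACITY=4
MAX_DATACLIP_SIZE_MB=10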

GitHub

Lightning enables connection to GitHub via GitHub Apps. The GitHub App requires the following permissions:

  • Actions: Read and Write
  • Contents: Read and Write
  • Metadata: Read only
  • Secrets: Read and Write
  • Workflows: Read and Write

Ensure you set the following URLs:

  • Homepage URL: <app_url_here>
  • Callback URL for authorizing users: <app_url_here>/oauth/github/callback (Do NOT check the two checkboxes in this section requesting Device Flow and OAuth.)
  • Setup URL for Post installation: <app_url_here>/setup_vcs (Check the box for Redirect on update)

These environment variables will need to be set in order to configure the GitHub app:

  • GITHUB_APP_ID - the GitHub app ID
  • GITHUB_APP_NAME - the GitHub app name
  • GITHUB_APP_CLIENT_ID - the GitHub app Client ID
  • GITHUB_APP_CLIENT_SECRET - the GitHub app Client Secret
  • GITHUB_CERT - the GitHub app private key

You can access these from your GitHub app settings menu. The configuration also requires:

  • REPO_CONNECTION_SIGNING_SECRET - a secret used to sign access tokens, which are used to authenticate requests made from GitHub Actions. You can generate this using mix lightning.gen_encryption_key.
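
Put together, the GitHub configuration might look like the sketch below. All values are placeholders, and the assumption that GITHUB_CERT is supplied Base64 encoded should be verified against your deployment:

# Placeholder values - copy the real ones from your GitHub app settings
export GITHUB_APP_ID="123456"
export GITHUB_APP_NAME="my-lightning-app"
export GITHUB_APP_CLIENT_ID="Iv1.0123456789abcdef"
export GITHUB_APP_CLIENT_SECRET="change-me"
export GITHUB_CERT="$(base64 -w 0 my-app.private-key.pem)"          # assumption: Base64-encoded PEM
export REPO_CONNECTION_SIGNING_SECRET="$(openssl rand -base64 32)"  # same shape as mix lightning.gen_encryption_key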

Storage

Lightning can use a storage backend to store exports.

  • STORAGE_BACKEND - the storage backend to use (default: local)
  • STORAGE_PATH - the path to store files in (default: .)

Supported backends:

  • local - local file storage
  • gcs - Google Cloud Storage

Google Cloud Storage

For Google Cloud Storage, the following environment variables are required:

  • GCS_BUCKET - the name of the bucket to store files in
  • GOOGLE_APPLICATION_CREDENTIALS_JSON - a Base64 encoded JSON keyfile for the service account with access to the bucket

ℹ️ Note: GOOGLE_APPLICATION_CREDENTIALS_JSON must be Base64 encoded; Workload Identity is not currently supported.
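
For example (bucket name and keyfile path are placeholders):

# Store exports in GCS (placeholder bucket and keyfile)
export STORAGE_BACKEND="gcs"
export GCS_BUCKET="my-lightning-exports"
export GOOGLE_APPLICATION_CREDENTIALS_JSON="$(base64 -w 0 service-account.json)"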

Other config

  • ADAPTORS_PATH - where you store your locally installed adaptors
  • ALLOW_SIGNUP - set to true to enable user access to the registration page, or false to disable new user registrations and block access to the registration page. Default is true.
  • CORS_ORIGIN - a comma-separated list of acceptable hosts for browser/CORS requests
  • DISABLE_DB_SSL - in production, an SSL connection to Postgres is required by default. Setting this to "true" allows unencrypted connections to the database, which is strongly discouraged in a real production environment.
  • EMAIL_ADMIN - used as the sender email address for system emails; also displayed in the menu as the support email
  • EMAIL_SENDER_NAME - displayed in the email client as the sender name for emails sent by the application
  • IDLE_TIMEOUT - the number of seconds that must pass without data being received before the Lightning web server kills the connection
  • IS_RESETTABLE_DEMO - if set to yes, allows this instance to be reset to the initial "Lightning Demo" state. Note that this will destroy most of what you have in your database!
  • K8S_HEADLESS_SERVICE - automatically set if you're running on GKE; used to establish an Erlang node cluster. Note that if you're not using Kubernetes, the "gossip" strategy is used to establish clusters.
  • LISTEN_ADDRESS - the address the web server should bind to. Defaults to 127.0.0.1 to block access from other machines.
  • LOG_LEVEL - how noisy you want the logs to be (e.g., debug, info)
  • MIX_ENV - your Mix env, likely prod for deployment
  • NODE_ENV - Node env, likely production for deployment
  • ORIGINS - the allowed origins for web traffic to the backend
  • PORT - the port your Phoenix app runs on
  • PRIMARY_ENCRYPTION_KEY - a Base64 encoded key of 32 random bytes. See Encryption.
  • QUEUE_RESULT_RETENTION_PERIOD_SECONDS - the number of seconds to keep completed (successful) ObanJobs in the queue (not to be confused with runs and/or history)
  • SCHEMAS_PATH - path to the credential schemas that provide forms for different adaptors
  • SECRET_KEY_BASE - a secret key used as a base to generate secrets for encrypting and signing data
  • SENTRY_DSN - if using Sentry for error monitoring, your DSN
  • URL_HOST - the host used for writing URLs (e.g., demo.openfn.org)
  • URL_PORT - the port, usually 443 for production
  • URL_SCHEME - the scheme for writing URLs (e.g., https)
  • USAGE_TRACKER_HOST - the host that receives usage tracking submissions (defaults to https://impact.openfn.org)
  • USAGE_TRACKING_DAILY_BATCH_SIZE - the number of days that will be reported on with each run of UsageTracking.DayWorker. This only has a noticeable effect when there is a backlog or reports are being generated retroactively (defaults to 10).
  • USAGE_TRACKING_ENABLED - enables the submission of anonymized usage data to OpenFn (defaults to true)
  • USAGE_TRACKING_RESUBMISSION_BATCH_SIZE - the number of failed reports that will be submitted on each resubmission run (defaults to 10)
  • USAGE_TRACKING_UUIDS - whether submissions should include cleartext UUIDs. Options are cleartext or hashed_only; the default is hashed_only.
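
As a starting point, a minimal production configuration might look like this (an illustrative sketch, not an exhaustive list; every value is a placeholder):

# Minimal illustrative production settings (placeholders throughout)
MIX_ENV=prod
NODE_ENV=production
URL_SCHEME=https
URL_HOST=lightning.example.com
URL_PORT=443
PORT=4000
LISTEN_ADDRESS=0.0.0.0            # bind beyond localhost when running behind a proxy
ORIGINS=https://lightning.example.com
SECRET_KEY_BASE=change-me         # e.g. generated with `mix phx.gen.secret`
PRIMARY_ENCRYPTION_KEY=change-me  # see Encryption above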

AI Chat

🧪 Experimental

Lightning can be configured to use an AI chatbot for user interactions.

See openfn/apollo for more information on the Apollo AI service.

The following environment variables are required:

  • OPENAI_API_KEY - your OpenAI API key.
  • APOLLO_ENDPOINT - the endpoint for the OpenFn Apollo AI service.
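
For example (both values are placeholders):

# Placeholder values
export OPENAI_API_KEY="sk-..."
export APOLLO_ENDPOINT="https://apollo.example.com"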

Kafka Triggers

🧪 Experimental

Lightning workflows can be configured with a trigger that consumes messages from a Kafka cluster. This is disabled by default: you will not see the option to create a Kafka trigger in the UI, nor will the Kafka consumer groups be running.

To enable this feature, set the KAFKA_TRIGGERS_ENABLED environment variable to yes and restart Lightning. Note that if you enable this feature, create some Kafka triggers, and then disable it, you will not be able to edit any triggers created while the feature was enabled.

Performance Tuning

The number of Kafka consumers in the consumer group can be modified by setting the KAFKA_NUMBER_OF_CONSUMERS environment variable. The default value is currently 1. The optimal setting is one consumer per topic partition. NOTE: This setting will move to KafkaConfiguration as it will be trigger-specific.

The number of messages that the Kafka consumer will forward is rate-limited by the KAFKA_NUMBER_OF_MESSAGES_PER_SECOND environment variable. This can be set to a value of less than 1 (minimum 0.1); it will be converted (rounded down) to an integer number of messages over a 10-second interval (e.g. 0.15 becomes 1 message every 10 seconds). The default value is 1.

Processing concurrency within the Kafka Broadway pipeline is controlled by the KAFKA_NUMBER_OF_PROCESSORS environment variable. This sets the number of processors downstream of the Kafka consumer, so increasing it should increase throughput (subject to the rate limit set by KAFKA_NUMBER_OF_MESSAGES_PER_SECOND). The default value is 1.
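
Taken together, a tuned configuration might look like this (illustrative values; the right numbers depend on your partition count and throughput):

# Illustrative Kafka trigger tuning (placeholder values)
export KAFKA_TRIGGERS_ENABLED="yes"
export KAFKA_NUMBER_OF_CONSUMERS="4"               # ideally one per topic partition
export KAFKA_NUMBER_OF_MESSAGES_PER_SECOND="0.5"   # i.e. 5 messages per 10-second interval
export KAFKA_NUMBER_OF_PROCESSORS="2"              # processors downstream of the consumer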

Deduplication

Each Kafka trigger maintains a record of the topic, partition, and offset of each message received. This protects against ingesting duplicate messages from the cluster. These records are periodically cleaned out; the duration for which they are retained is controlled by KAFKA_DUPLICATE_TRACKING_RETENTION_SECONDS. The default value is 3600.

Disabling Kafka Triggers

After a Kafka consumer group connects to a Kafka cluster, the cluster tracks the last committed offset for that consumer group, to ensure that the consumer group receives the correct messages.

This data is retained for a finite period. If an enabled Kafka trigger is disabled for longer than the offset retention period, the consumer group's offset data will be cleared.

If the Kafka trigger is re-enabled after the offset data has been cleared, the consumer group will revert to the trigger's configured 'Initial offset reset policy'. This may result in the duplication of messages or even data loss.

It is recommended that you check the value of offsets.retention.minutes on the Kafka cluster to determine the cluster's retention period, and consider this before disabling a Kafka trigger for an extended period.

Google OAuth2

Using your Google Cloud account, provision a new OAuth 2.0 Client with the 'Web application' type.

Set the callback URL to https://<ENDPOINT DOMAIN>/authenticate/callback, replacing <ENDPOINT DOMAIN> with the host name of your instance.

Once the client has been created, get/download the OAuth client JSON and set the following environment variables:

  • GOOGLE_CLIENT_ID - the client_id from the client details
  • GOOGLE_CLIENT_SECRET - the client_secret from the client details

Salesforce OAuth2

Using your Salesforce developer account, create a new OAuth 2.0 connected application.

Set the callback URL to https://<ENDPOINT DOMAIN>/authenticate/callback, replacing <ENDPOINT DOMAIN> with the host name of your instance.

Grant permissions as desired.

Once the client has been created set the following environment variables:

  • SALESFORCE_CLIENT_ID - the Consumer Key from the "Manage Consumer Details" screen
  • SALESFORCE_CLIENT_SECRET - the Consumer Secret from the "Manage Consumer Details" screen
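
For both providers the result is a pair of environment variables, for example (placeholder values):

# Placeholder OAuth2 client credentials
export GOOGLE_CLIENT_ID="1234567890-abc123.apps.googleusercontent.com"
export GOOGLE_CLIENT_SECRET="change-me"
export SALESFORCE_CLIENT_ID="3MVG9..."
export SALESFORCE_CLIENT_SECRET="change-me"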