# Chainlink Runtime Environment (CRE)
Source: https://docs.chain.link/cre
Last Updated: 2025-11-04
## What is CRE?
**Chainlink Runtime Environment (CRE)** is the all-in-one orchestration layer unlocking institutional-grade smart contracts—data-connected, compliance-ready, privacy-preserving, and interoperable across blockchains and existing systems.
Using the **CRE SDK** (available in Go and TypeScript), you build **Workflows**. Using the CRE CLI, you compile them into binaries and deploy them to production, where CRE runs them across a Decentralized Oracle Network (DON).
- Each workflow is orchestrated by a **Workflow DON (Decentralized Oracle Network)** that monitors for triggers and coordinates execution.
- The workflow can then invoke specialized **Capability DONs**—for example, one that fetches offchain data or one that writes to a chain.
- During execution, each node in a DON performs the requested task independently.
- Their results are then cryptographically verified and aggregated via a Byzantine Fault Tolerant (BFT) consensus protocol. This guarantees a single, correct, and consistent outcome.
## What you can do today
### Build and simulate (available now)
You can start building and [simulating](/cre/guides/operations/simulating-workflows) CRE workflows immediately, without any approval:
- **Create an account** at [cre.chain.link](https://cre.chain.link) to access the platform
- **Install the CRE CLI** on your machine
- **Build workflows** using the Go or TypeScript SDKs
- **Simulate workflows** to test and debug before deployment
Simulation compiles your workflows into [WebAssembly (WASM)](https://webassembly.org/) and runs them on your machine—but makes **real calls** to live APIs and public EVM blockchains. This gives you confidence your workflow will work as expected when deployed to a DON.
### Deploy your workflows (Early Access)
Early Access to workflow deployment includes:
- **Deploy and run workflows** on a Chainlink DON
- **Workflow lifecycle management**: Deploy, activate, pause, update, and delete workflows through the CLI
- **Monitoring and debugging**: Access detailed logs, events, and performance metrics in the CRE UI
To request Early Access, please share details about your project and use case—this helps us provide better support as you build with CRE.
## How CRE runs your workflows
Now that you understand what CRE is, let's explore how it executes your workflows.
### The trigger-and-callback model
Workflows use a **trigger-and-callback model** to provide a code-first developer experience. This model is the primary architectural pattern you will use in your workflows. It consists of three simple parts:
1. **A Trigger**: An event source that starts a workflow execution (e.g., `cron.Trigger`). This is the "when" of your workflow.
2. **A Callback**: A function that contains your business logic. It is inside this function that you will use the SDK's clients to invoke capabilities. This is the "what" of your workflow.
3. **The `cre.handler()`**: The glue that connects a single trigger to a single callback.
You can define multiple trigger and callback combinations in your workflow. You can also attach the same callback to multiple triggers for reusability.
Here's what the trigger-and-callback pattern looks like:
```go
cre.Handler(
	cron.Trigger(&cron.Config{Schedule: "0 */10 * * * *"}), // Trigger fires every 10 minutes
	onCronTrigger,                                          // your Go callback
)

func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (struct{}, error) {
	// Create SDK clients and call capabilities here
	return struct{}{}, nil
}
```
### Execution lifecycle
When a trigger fires, the Workflow DON orchestrates the execution of your callback function on every node in the network. **Each execution is independent and stateless**—your callback runs, performs its work, returns a result, and completes. Inside your callback, you create SDK clients and invoke capabilities.
Each capability call is an **asynchronous operation** that returns a `Promise`—a placeholder for a future result. This allows you to pipeline multiple capability calls and run them in parallel.
Your callback typically follows this pattern:
1. Invoke multiple capabilities in parallel (each returns a `Promise` immediately)
2. Await the consensus-verified results
3. Use the trusted results in your business logic
4. Optionally perform final actions like writing back to a blockchain
For every capability you invoke, CRE handles the underlying process of having a dedicated DON execute the task, reach consensus, and return the verified result.
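This pipeline pattern can be illustrated with a self-contained toy model. The `Promise` type below is a local stand-in for the SDK's, not the real API; it shows why starting all capability calls before awaiting any of them lets the calls run in parallel:

```go
package main

import "fmt"

// Promise is an illustrative stand-in for the SDK's Promise type:
// a capability call returns a Promise immediately, and Await blocks
// until the result is available.
type Promise[T any] struct{ ch chan T }

// call starts work in the background and returns its Promise at once.
func call[T any](work func() T) Promise[T] {
	p := Promise[T]{ch: make(chan T, 1)}
	go func() { p.ch <- work() }()
	return p
}

// Await blocks until the background work completes.
func (p Promise[T]) Await() T { return <-p.ch }

func main() {
	// Step 1: invoke two "capabilities" in parallel; each returns immediately.
	a := call(func() int { return 2 })
	b := call(func() int { return 40 })

	// Step 2: await both results. Total wait is the slower of the two,
	// not the sum, because both started before either Await.
	fmt.Println(a.Await() + b.Await()) // 42
}
```

In a real workflow the calls would be SDK client invocations (and their results consensus-verified), but the ordering principle is the same: invoke first, await after.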
### Built-in consensus for every operation
One of CRE's most powerful features is that **every capability execution automatically includes consensus**. When your workflow invokes a capability (like fetching data from an API or reading from a blockchain), multiple independent nodes perform the operation. Their results are then validated and aggregated through a Byzantine Fault Tolerant (BFT) consensus protocol, ensuring a single, verified outcome.
This means your entire workflow—not just the onchain parts—benefits from the same security and reliability guarantees as blockchain transactions. Unlike traditional applications that rely on a single API provider or RPC endpoint, CRE eliminates single points of failure by having multiple nodes independently verify every operation.
Learn more about [Consensus Computing in CRE](/cre/concepts/consensus-computing).
## Glossary: Building blocks
| Concept | One-liner |
| ------------------ | ----------------------------------------------------------------- |
| **Workflow** | Compiled WebAssembly (WASM) binary. |
| **Handler** | `cre.handler(trigger, callback)` pair; the atom of execution. |
| **Trigger** | Event that starts an execution (cron, HTTP, EVM log, …). |
| **Callback** | Function that runs when its trigger fires; contains your logic. |
| **Runtime** | Object passed to a callback; used to invoke capabilities. |
| **Capability** | Decentralized microservice (chain read/write, HTTP Fetch, ...). |
| **Workflow DON** | Watches triggers and coordinates the workflow. |
| **Capability DON** | Executes a specific capability. |
| **Consensus** | BFT protocol that merges node results into one verifiable report. |
Full definitions live on **[Key Terms and Concepts](/cre/key-terms)**.
## Why build on CRE?
- **Unified cross-domain orchestration**: Seamlessly combine onchain and offchain operations in a single workflow. Read from multiple blockchains, call authenticated APIs, perform computations, and write results back onchain or offchain—all orchestrated by CRE.
- **Institutional-grade security by default**: Every operation—API calls, blockchain reads, computations—runs across multiple independent nodes with Byzantine Fault Tolerant consensus. Your workflows inherit the same security guarantees as blockchain transactions.
- **One platform, any chain**: Build your logic once and connect to any supported blockchain. No need to deploy separate infrastructure for each chain you support.
- **Code-first developer experience**: Write workflows in Go or TypeScript using familiar patterns. The SDK abstracts away the complexity of distributed systems, letting you focus on your business logic.
## Where to go next?
### New to CRE?
Start here:
1. **[Create Your Account](/cre/account/creating-account)** - Set up your CRE account (required for all CLI commands)
2. **[Install the CLI](/cre/getting-started/cli-installation)** - Download and install the `cre` command-line tool
Then choose your path:
- **Learn by building:** [Getting Started Guide](/cre/getting-started/overview) - Step-by-step guide where you build your first workflow, learning core concepts along the way
- **Quick start:** [Run the Custom Data Feed Demo](/cre/templates/running-demo-workflow) - See a production-ready workflow in action. Just follow the steps to run a complete, pre-built example
### Already familiar?
Jump to what you need:
- **[Workflow Guides](/cre/guides/workflow/using-triggers/overview)** - Learn how to use triggers, make API calls, and interact with blockchains
- **[Workflow Operations](/cre/guides/operations/simulating-workflows)** - Simulate, deploy, and manage your workflows
- **[SDK Reference](/cre/reference/sdk)** - Detailed API documentation for Go and TypeScript SDKs
---
# Key Terms and Concepts
Source: https://docs.chain.link/cre/key-terms
Last Updated: 2025-11-04
This page defines the fundamental terms and concepts for the Chainlink Runtime Environment (CRE).
## High-level concepts
### Chainlink Runtime Environment (CRE)
The all-in-one orchestration layer unlocking institutional-grade smart contracts—data-connected, compliance-ready, privacy-preserving, and interoperable across blockchains and existing systems.
### Decentralized Oracle Network (DON)
A decentralized, peer-to-peer network of independent nodes that work together to execute a specific task. In CRE, there are two primary types of DONs: **Workflow DONs**, which orchestrate workflows, and specialized **Capability DONs**, which execute specific tasks like blockchain interactions.
## Workflow architecture
### Workflow
A workflow uses the CRE SDK (Go or TypeScript) and comprises one or more [handlers](/cre/key-terms#handler), which define the logic that executes when events ([triggers](/cre/key-terms#trigger)) occur. CRE compiles the workflow to a WASM binary and runs it on a Workflow DON.
### Handler
The basic building block of a workflow, created using the `cre.Handler` function. It connects a single **Trigger** event to a single **Callback** function.
### Trigger
An event source that initiates the execution of a handler's callback function. Examples include Cron trigger, HTTP trigger, and EVM Log trigger. Learn more in the [Trigger capability page](/cre/capabilities/triggers).
### Callback
A function that contains your core logic. It is executed by the Workflow DON every time its corresponding trigger fires.
## The developer's toolkit: The CRE SDK
### `Runtime` & `NodeRuntime`
Short-lived objects passed to your callback function during a specific execution. The key difference between `Runtime` and `NodeRuntime` is who is responsible for creating a single, trusted result from the work of many nodes.
- **`Runtime`**: Think of it as the "Easy Mode". It is used for operations that are guaranteed to be Byzantine Fault Tolerant (BFT). You ask the network to execute something, and CRE handles the underlying complexity to ensure you get back one final, secure, and trustworthy result.
- **`NodeRuntime`**: Think of this as the "Manual Mode". It is used when a BFT guarantee cannot be provided automatically (e.g. calling a standard API). You tell each node to perform a task on its own. Each node returns its own individual answer, and you are responsible for telling the SDK how to combine them into a single, trusted result by providing an aggregation algorithm. This is always used inside a `cre.RunInNodeMode` block.
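As a conceptual illustration of the aggregation step, consider a median over per-node observations (a common, outlier-tolerant choice). The `aggregateMedian` helper below is hypothetical, not an SDK function; it only shows what an aggregation algorithm does with individual node answers:

```go
package main

import (
	"fmt"
	"sort"
)

// aggregateMedian is an illustrative aggregation function, not part of the
// CRE SDK. In Node Mode, each node returns its own observation and you
// supply logic like this to merge them into one trusted value.
func aggregateMedian(observations []float64) float64 {
	sorted := append([]float64(nil), observations...) // copy before sorting
	sort.Float64s(sorted)
	n := len(sorted)
	if n%2 == 1 {
		return sorted[n/2]
	}
	return (sorted[n/2-1] + sorted[n/2]) / 2
}

func main() {
	// Four nodes query the same API; one returns an outlier.
	perNode := []float64{100.2, 100.1, 100.3, 250.0}
	// The median discards the outlier's influence.
	fmt.Println(aggregateMedian(perNode)) // 100.25
}
```

A median is robust here because a minority of faulty or malicious nodes cannot drag the aggregated value arbitrarily far from the honest majority's observations.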
Learn more about [Consensus and Aggregation](/cre/reference/sdk/consensus).
### SDK Clients: `EVMClient` & `HTTPClient`
The primary SDK clients you use inside a callback to interact with capabilities. For example, you use an EVM client to read from a smart contract and an HTTP client to make offchain API requests.
**Language-specific implementations:**
- **Go SDK**: `evm.Client` and `http.Client`
- **TypeScript SDK**: `EVMClient` and `HTTPClient` classes
### `Bindings` (Go SDK only)
A Go package generated from a smart contract's ABI using the `cre generate-bindings` CLI command. Bindings create a type-safe Go interface for a specific smart contract, abstracting away the low-level complexity of ABI encoding and decoding.
Using generated bindings is the recommended best practice for Go workflows, as they provide helper methods for:
- Reading from `view`/`pure` functions.
- Encoding data structures for onchain writes.
- Creating triggers for and decoding event logs.
This makes your workflow code cleaner, safer, and easier to maintain. Learn more in the [Generating Contract Bindings](/cre/guides/workflow/using-evm-client/generating-bindings) guide.
**Note for TypeScript**: The TypeScript SDK uses [Viem](https://viem.sh/) for type-safe contract interactions with manual ABI definitions instead of generated bindings.
### Async Patterns
Asynchronous operations in the SDK (like contract reads or HTTP requests) return a placeholder for a future result:
- **Go SDK**: Operations return a `Promise`, and you must call `.Await()` to pause execution and wait for the result.
- **TypeScript SDK**: Operations return an object with a `.result()` method that you call to wait for the result.
### `Secrets`
Securely managed credentials (e.g., API keys) made available to your workflow at runtime. Secrets can be fetched within a callback using the runtime's secret retrieval method:
- **Go SDK**: `runtime.GetSecret()`
- **TypeScript SDK**: `runtime.getSecret()`
## Underlying architectural concepts
### Capability
A conceptual, decentralized "microservice" that is backed by its own DON. Capabilities are the fundamental building blocks of the CRE platform (e.g., HTTP Fetch, EVM Read). You do not interact with them directly; instead, you use the SDK's developer-facing clients (like `evm.Client`) to invoke them.
### Consensus
The mechanism by which a DON comes to a single, reliable, and tamper-proof result, even if individual nodes observe slightly different data. Consensus is what makes the outputs of capabilities secure and trustworthy.
## Where to go next?
- **[Getting Started](/cre/getting-started/overview)**: Start building your first workflow.
- **[About CRE](/cre)**: Learn more about the vision and high-level architecture of CRE.
---
# Service Quotas
Source: https://docs.chain.link/cre/service-quotas
Last Updated: 2025-11-04
This page documents the service quotas for Chainlink Runtime Environment (CRE) workflows.
## Per-owner quotas
These quotas apply to each workflow owner (user account) within an organization.
| Quota | Description | Value |
| ------------------------------- | ------------------------------------------------------------------------------------------------ | ---------------------------------- |
| Workflow Deployment Rate | Maximum rate at which an organization can deploy new workflows | Rate: 1 per minute; Burst: 1 |
| Concurrent Workflow Executions | Maximum number of workflows that can execute simultaneously | 5 |
| Workflow Trigger Rate | Maximum rate at which triggers can fire for all workflows owned by a single owner (user account) | Rate: 5 per second; Burst: 5 |
| Workflow Binary Size | Maximum size of the compiled WASM binary | 100 MB |
| Workflow Compressed Binary Size | Maximum size of the compressed WASM binary | 20 MB |
| Workflow Configuration Size | Maximum size of the workflow configuration | 1 MB |
| Secrets Size | Maximum total size of secrets accessible to a workflow | 1 MB |
## Per-workflow quotas
These quotas apply to each individual workflow.
### Trigger quotas
| Quota | Description | Value |
| ----------------------------- | ---------------------------------------------------------------------- | -------------------------------------- |
| Trigger Rate | Maximum rate at which a workflow's triggers can fire | Rate: 1 per 30 seconds; Burst: 3 |
| Maximum Triggers per Workflow | Maximum number of triggers that can be registered to a single workflow | 10 |
### Execution quotas
| Quota | Description | Value |
| ----------------------------- | --------------------------------------------------------------- | ----------------- |
| Concurrent Workflow Instances | Maximum number of concurrent executions for a specific workflow | 5; Burst: 5 |
| Workflow Timeout | Maximum total execution time for a single workflow run | 5 minutes |
| Workflow Memory Allocation | Maximum memory allocated to a workflow | 100 MB |
| Response Size | Maximum size of the data a workflow can return | 100 KB |
### Capability call quotas
| Quota | Description | Value |
| --------------------------- | ------------------------------------------------------------------------------------------------------ | --------- |
| Concurrent Capability Calls | Maximum concurrent capability calls (HTTP, EVM read/write, secrets) that can execute within a workflow | 3 |
| Capability Call Timeout | Maximum time a single capability call can take to complete | 3 minutes |
### Secrets quotas
| Quota | Description | Value |
| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | ----- |
| Secrets Size | Maximum total size of secrets accessible to a workflow | 1 MB |
| Concurrent Secrets Fetches | Maximum number of secrets that can be fetched concurrently. [Learn how to fetch multiple secrets](/cre/guides/workflow/secrets). | 5 |
### Consensus quotas
| Quota | Description | Value |
| --------------------- | ---------------------------------------------------------------- | ------ |
| Observation Size | Maximum size of data that can be passed to consensus aggregation | 100 KB |
| Total Consensus Calls | Maximum number of consensus calls per workflow execution | 2,000 |
### Logging quotas
| Quota | Description | Value |
| ------------- | --------------------------------------------------- | ----- |
| Log Line Size | Maximum size of a single log line | 1 KB |
| Log Events | Maximum number of log events per workflow execution | 1,000 |
## Trigger-specific quotas
### Cron trigger
| Quota | Description | Value |
| ------------ | --------------------------------------------- | -------------------------------------- |
| Trigger Rate | Maximum rate at which a cron trigger can fire | Rate: 1 per 30 seconds; Burst: 1 |
### HTTP trigger
| Quota | Description | Value |
| ------------ | ---------------------------------------------- | -------------------------------------- |
| Trigger Rate | Maximum rate at which an HTTP trigger can fire | Rate: 1 per 30 seconds; Burst: 3 |
### EVM log trigger
| Quota | Description | Value |
| ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------- |
| Maximum Log Triggers | Maximum number of EVM log triggers per workflow | 5 |
| Event Rate | Maximum rate at which log events can be processed | Rate: 10 per 6 seconds; Burst: 10 |
| Filter Addresses | Maximum number of contract addresses that can be monitored | 5 |
| Filter Topics per Slot | Maximum number of topic values that can be specified within a single topic position (Topics[0], Topics[1], Topics[2], or Topics[3]). [Learn about topic filtering](/cre/guides/workflow/using-triggers/evm-log-trigger). | 10 |
| Event Size | Maximum size of a single log event | 5 KB |
## Capability-specific quotas
### EVM write capability
| Quota | Description | Value |
| --------------------- | --------------------------------------------------------- | --------- |
| Target Chains | Maximum number of destination chains for write operations | 10 |
| Report Size | Maximum size of a report payload | 1 KB |
| Transaction Gas Quota | Gas quota per EVM transaction | 5,000,000 |
### EVM read capability
| Quota | Description | Value |
| ------------------------ | ---------------------------------------------------------------- | ----- |
| Read Calls per Execution | Maximum number of EVM read calls per workflow execution | 10 |
| Log Query Block Quota | Maximum number of blocks that can be queried for historical logs | 100 |
| Payload Size | Maximum size of an EVM read request payload | 5 KB |
### HTTP capability
| Quota | Description | Value |
| ------------------------ | ------------------------------------------------------ | ---------- |
| HTTP Calls per Execution | Maximum number of HTTP requests per workflow execution | 5 |
| Response Size | Maximum size of an HTTP response | 10 KB |
| Connection Timeout | Maximum time to establish an HTTP connection | 10 seconds |
| Request Size | Maximum size of an HTTP request payload | 100 KB |
| Cache Age | Maximum time HTTP responses can be cached | 10 minutes |
## Quota increases
[Contact us](/cre/support-feedback) to discuss quota increases.
---
# Support & Feedback
Source: https://docs.chain.link/cre/support-feedback
Last Updated: 2025-11-04
Need help with CRE? Have feedback, want to report a bug, or request a feature? You can submit a support request directly through the CRE UI.
## How to submit a support request
1. Go to [cre.chain.link](https://cre.chain.link)
2. Log in to your account (if you're not already logged in)
3. In the left sidebar, click **Help**
4. The support form will open as a slide-out panel
5. Select your request type from the dropdown:
- **Support Request** - Need help with an issue
- **Bug Report** - Found a problem
- **Feature Request** - Suggest an improvement
- **General Feedback** - Share your thoughts
- **Other** - Anything else
6. Describe your issue or feedback
7. Click **Request** to submit
## What to include in your request
To help us assist you faster, please include:
**For bug reports:**
- Steps to reproduce the issue
- Expected behavior vs. actual behavior
- CRE CLI version (run `cre version` to check)
- Error messages or logs (if applicable)
- Operating system
**For support requests:**
- What you're trying to accomplish
- What's not working or unclear
- Any error messages you're seeing
- Relevant code snippets or configuration files
**For feature requests:**
- Your use case
- Why this feature would help you
---
# Getting Started: Overview
Source: https://docs.chain.link/cre/getting-started/overview
Last Updated: 2025-11-04
This multi-part tutorial guides you through building a complete workflow from a blank slate.
## What you'll build
You will build a simple but powerful **"Onchain Calculator"** workflow. By the end of this tutorial, your workflow will:
1. Run on a schedule using the **Cron Trigger**.
2. Fetch a random number from a public API using the **HTTP Capability**.
3. Read a value from a smart contract using the **EVM Read Capability**.
4. Combine the two values and write the final result back to the blockchain using the **EVM Write Capability**.
This tutorial is designed to teach you the core features of the CRE SDK in a logical progression. You can follow it in either supported language, Go or TypeScript, using the language selector. By the end, you'll have a solid understanding of the end-to-end development process for building and simulating workflows that interact with both offchain and onchain data sources.
## Where to go next?
- **[Installing the CLI](/cre/getting-started/cli-installation)**: Download and install the `cre` command-line tool.
### Tutorial structure
- **[Part 1: Project Setup & Simulation](/cre/getting-started/part-1-project-setup)**: Initialize a new, blank CRE project and run your first "Hello World!" simulation.
- **[Part 2: Fetching Offchain Data](/cre/getting-started/part-2-fetching-data)**: Modify your workflow to fetch data from an external API using the `http.Client`.
- **[Part 3: Reading an Onchain Value](/cre/getting-started/part-3-reading-onchain-value)**: Generate contract bindings and use the `evm.Client` to read a value from the blockchain.
- **[Part 4: Writing Onchain](/cre/getting-started/part-4-writing-onchain)**: Complete the calculator by writing your computed result back to a smart contract on Sepolia.
- **[Conclusion](/cre/getting-started/conclusion)**: Review what you've learned and find resources to continue your journey.
---
# Installing the CRE CLI
Source: https://docs.chain.link/cre/getting-started/cli-installation
Last Updated: 2025-11-04
These guides explain how to install the Chainlink Developer Platform CLI (also referred to as the CRE CLI).
---
# Installing the CRE CLI on macOS and Linux
Source: https://docs.chain.link/cre/getting-started/cli-installation/macos-linux
Last Updated: 2025-11-04
This page explains how to install the CRE CLI on macOS or Linux. The recommended version at the time of writing is **v1.0.0**.
## Installation
Choose your installation method:
- **[Automatic installation](#automatic-installation)** - Quick setup with a single command
- **[Manual installation](#manual-installation)** - Download and install the binary yourself
### Automatic installation
The easiest way to install the CRE CLI is using the installation script:
```bash
curl -sSL https://cre.chain.link/install.sh | sh
```
This script will:
- Detect your operating system and architecture automatically
- Download the correct binary for your system
- Verify the binary's integrity
- Install it to `/usr/local/bin` (or prompt you for a custom location)
- Make the binary executable
After the script completes, verify the installation:
```bash
cre version
```
**Expected output:** `cre version v1.0.0`
### Manual installation
If you prefer to install manually or the automatic installation doesn't work for your environment, follow these steps:
The CRE CLI is publicly available on GitHub. Visit the releases page and download the appropriate binary archive for your operating system and architecture.
After downloading the correct file from the releases page, move on to the next step to verify its integrity.
#### 1. Verify file integrity
Before installing, verify the file integrity using a checksum to ensure the binary hasn't been tampered with:
**Check the SHA-256 checksum**
Run the following command in the directory where you downloaded the archive (replace the filename with your specific binary):
```bash
shasum -a 256 cre_darwin_arm64.zip
```
**Verify against official checksums**
Compare the output with the official checksums below:
| File | SHA-256 Checksum |
| ------------------------ | ---------------------------------------------------------------- |
| `cre_darwin_amd64.zip` | be7d595a87ae74ecbbde95a576d4117c88af9d6751191fa7098bd0fe1f75a226 |
| `cre_darwin_arm64.zip` | 2b1ca0992d2c7a70ece60623a1490155b74e04041722caf0bcc2f4c795686ebf |
| `cre_linux_amd64.tar.gz` | dab1e966fbbf67ec136d7f3ec1236028db93493a067cdc8a772fb105593b2773 |
| `cre_linux_arm64.tar.gz` | e1f6a51010f4b2c73825eba2f703a8164972a56643891f91bfeddbfeecc32e34 |
If the checksum doesn't match, do not proceed with installation. Contact your Chainlink point of contact for assistance.
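If you script your setup, the comparison can be automated. The helper below is a sketch; the filename and expected value in the commented example are placeholders for your downloaded archive and its matching entry in the table above:

```shell
# verify_checksum FILE EXPECTED_SHA256
# Prints "Checksum OK" and exits 0 when FILE's SHA-256 matches
# EXPECTED_SHA256; prints a warning and returns 1 otherwise.
verify_checksum() {
  file="$1"
  expected="$2"
  actual=$(shasum -a 256 "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "Checksum OK"
  else
    echo "Checksum MISMATCH: do not install this binary" >&2
    return 1
  fi
}

# Example usage (substitute your file and the checksum from the table):
# verify_checksum cre_darwin_arm64.zip 2b1ca0992d2c7a70ece60623a1490155b74e04041722caf0bcc2f4c795686ebf
```

Running the function with a wrong checksum returns a non-zero status, so you can gate the rest of an install script on it with `verify_checksum "$file" "$sum" || exit 1`.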
#### 2. Extract and install
1. **Navigate** to the directory where you downloaded the archive.
2. **Extract the archive**
For `.tar.gz` files:
```bash
tar -xzf cre_linux_arm64.tar.gz
```
For `.zip` files:
```bash
unzip cre_darwin_arm64.zip
```
3. **Rename the extracted binary to `cre`**
```bash
mv cre_v1.0.0_darwin_arm64 cre
```
4. **Make it executable**:
```bash
chmod +x cre
```
**Note (macOS Gatekeeper)**: If you see warnings about "unrecognized developer/source", remove extended attributes:
```bash
xattr -c cre
```
#### 3. Add the CLI to your PATH
Now that you have the `cre` binary, you need to make it accessible from anywhere on your system. This means you can run `cre` commands from any directory, not just where the binary is located.
**Recommended approach: Move to a standard location**
The easiest and most reliable method is to move the `cre` binary to a directory that's already in your system's PATH. For example:
```bash
sudo mv cre /usr/local/bin/
```
This command moves the `cre` binary to `/usr/local/bin/`, which is typically included in your PATH by default.
**Alternative: Add current directory to PATH**
If you prefer to keep the binary in its current location, you can add that directory to your PATH:
1. **Find your current directory:**
```bash
pwd
```
Note the full path (e.g., `/Users/yourname/Downloads/cre`)
2. **Add to your shell profile** (choose based on your shell):
For **zsh** (default on newer macOS):
```bash
echo 'export PATH="/Users/yourname/Downloads/cre:$PATH"' >> ~/.zshrc
source ~/.zshrc
```
For **bash**:
```bash
echo 'export PATH="/Users/yourname/Downloads/cre:$PATH"' >> ~/.bash_profile
source ~/.bash_profile
```
Replace `/Users/yourname/Downloads/cre` with your actual directory path from step 1.
3. **For temporary access** (this session only):
```bash
export PATH="$(pwd):$PATH"
```
#### 4. Verify the installation
**Test that `cre` is accessible:**
Open a **new terminal window** and run:
```bash
cre version
```
**Expected output:**
You should see version information: `cre version v1.0.0`.
**If it doesn't work:**
- Make sure you opened a **new terminal window** after making PATH changes
- Check the binary location: `which cre` should return `/usr/local/bin/cre` (or your custom path)
- Check that the binary has execute permissions: `ls -la /usr/local/bin/cre`
- Verify your PATH includes the correct directory: `echo $PATH`
#### 5. Confirm your PATH (troubleshooting)
If you're having issues, check what directories are in your PATH:
```bash
echo "$PATH" | tr ':' '\n'
```
You should see either:
- `/usr/local/bin` (if you moved the binary there)
- Your custom directory (if you added it to PATH)
## Next steps
Now that you have the `cre` CLI installed, you'll need to create a CRE account and authenticate before you can use it.
- **[Creating Your Account](/cre/account/creating-account)**: Create your CRE account and set up two-factor authentication
- **[Logging in with the CLI](/cre/account/cli-login)**: Authenticate the CLI with your account
Once you're authenticated, you're ready to build your first workflow:
- **[Getting Started — Part 1: Project Setup & Simulation](/cre/getting-started/part-1-project-setup)**: Initialize a new, blank CRE project and run your first "Hello World!" simulation.
---
# Installing the CRE CLI on Windows
Source: https://docs.chain.link/cre/getting-started/cli-installation/windows
Last Updated: 2025-11-04
This page explains how to install the Chainlink Developer Platform CLI (also referred to as the CRE CLI) on Windows. The recommended version at the time of writing is **v1.0.0**.
## Installation
Choose your installation method:
- **[Automatic installation](#automatic-installation)** - Quick setup with a PowerShell script
- **[Manual installation](#manual-installation)** - Download and install the binary yourself
### Automatic installation
The easiest way to install the CRE CLI is using the installation script. Open **PowerShell** and run:
```powershell
irm https://cre.chain.link/install.ps1 | iex
```
This script will:
- Download the correct binary for Windows
- Verify the binary's integrity
- Install it to a location in your PATH
- Make the binary executable
After the script completes, **open a new PowerShell window** and verify the installation:
```powershell
cre version
```
**Expected output:** `cre version v1.0.0`
### Manual installation
If you prefer to install manually or the automatic installation doesn't work for your environment, follow these steps:
The CRE CLI is publicly available on GitHub. Visit the releases page and download `cre_windows_amd64.zip`.
After downloading the file from the releases page, move on to the next step to verify its integrity.
#### 1. Verify file integrity
Before installing, verify the file integrity using a checksum to ensure the binary hasn't been tampered with.
**Check the SHA-256 checksum**
Open a PowerShell terminal and run the following command in the directory where you downloaded the archive:
```powershell
Get-FileHash cre_windows_amd64.zip -Algorithm SHA256
```
**Verify against the official checksum**
Compare the `Hash` value in the output with the official checksum below:
| File | SHA-256 Checksum |
| ----------------------- | ---------------------------------------------------------------- |
| `cre_windows_amd64.zip` | 72ca89ddc043837e13e7076c3ee3d177f5bcfadd4be83184405aa2cec7eec707 |
If the checksum doesn't match, do not proceed with installation. Contact your Chainlink point of contact for assistance.
#### 2. Extract and install
1. Navigate to the directory where you downloaded the archive.
2. Right-click the `.zip` file and select **Extract All...**.
3. Choose a permanent location for the extracted folder (e.g., `C:\Program Files\cre-cli`).
4. Inside the extracted folder, rename the file `cre_v1.0.0_windows_amd64.exe` to `cre.exe`.
#### 3. Add the CLI to your PATH
To run `cre` commands from any directory, you need to add the folder where you saved `cre.exe` to your system's PATH environment variable.
1. Open the **Start Menu** and search for "environment variables".
2. Select **Edit the system environment variables**.
3. In the System Properties window, click the **Environment Variables...** button.
4. In the **System variables** section, find and select the `Path` variable, then click **Edit...**.
5. Click **New** and add the full path to the folder where you saved `cre.exe` (e.g., `C:\Program Files\cre-cli`).
6. Click **OK** on all windows to save your changes.
#### 4. Verify the installation
Open a new **PowerShell** or **Command Prompt** window and run:
```powershell
cre version
```
You should see version information: `cre version v1.0.0`.
## Next steps
Now that you have the `cre` CLI installed, you'll need to create a CRE account and authenticate before you can use it.
- **[Creating Your Account](/cre/account/creating-account)**: Create your CRE account and set up two-factor authentication
- **[Logging in with the CLI](/cre/account/cli-login)**: Authenticate the CLI with your account
Once you're authenticated, you're ready to build your first workflow:
- **[Getting Started — Part 1: Project Setup & Simulation](/cre/getting-started/part-1-project-setup)**: Initialize a new, blank CRE project and run your first "Hello World!" simulation.
---
# Conclusion & Next Steps
Source: https://docs.chain.link/cre/getting-started/conclusion
Last Updated: 2025-11-04
You've built a complete, end-to-end CRE workflow from scratch.
You started with an empty project and progressively built a workflow that:
- Fetches data from an offchain API with consensus
- Reads values from a smart contract
- Performs calculations combining onchain and offchain data
- Writes results back to the blockchain
**This is no small achievement.** You've mastered the core pattern that powers most CRE workflows: the trigger-and-callback model with capabilities for HTTP, EVM, and consensus.
## What's next?
Now that you have a working workflow, here's your natural progression from simulation to production and beyond.
### 1. See a complete example
Ready to see all these concepts in a more complex, real-world scenario?
- **[Run the Custom Data Feed Demo](/cre/templates/running-demo-workflow)** - Explore an advanced template that combines multiple capabilities
**Why this matters:** Templates show production-ready patterns.
### 2. Deploy your Calculator workflow to Production
You've simulated your workflow locally. **The logical next step is to deploy it to the CRE production environment** so it runs across a Decentralized Oracle Network (DON).
**Follow this deployment sequence:**
1. **[Link a Wallet Key](/cre/organization/linking-keys)** - Connect your wallet address to your organization (required before deployment)
2. **[Deploy Your Workflow](/cre/guides/operations/deploying-workflows)** - Push your calculator workflow live
3. **[Monitor Your Workflow](/cre/guides/operations/monitoring-workflows)** - Watch it execute in production and debug any issues
**Why this matters:** Deploying moves your workflow from local simulation to production execution across a DON.
### 3. Explore different triggers
You used a **Cron trigger** (time-based). **Most production workflows react to real-world events.**
**Try these next:**
- **[HTTP Trigger](/cre/guides/workflow/using-triggers/http-trigger)** - Let external systems trigger your workflow via API calls
- **[EVM Log Trigger](/cre/guides/workflow/using-triggers/evm-log-trigger)** - React to onchain events (e.g., token transfers, contract events)
**Why this matters:** Event-driven workflows are more powerful than scheduled ones. They respond instantly to real-world changes.
### 4. Add secrets
Your calculator used a public API. **Real workflows often need API keys and other sensitive data.**
**Learn how to secure your secrets:**
- **[Using Secrets in Simulation](/cre/guides/workflow/secrets/using-secrets-simulation)** - Store secrets in your local environment for development
- **[Using Secrets with Deployed Workflows](/cre/guides/workflow/secrets/using-secrets-deployed)** - Store secrets in the Vault DON for production
- **[Managing Secrets with 1Password](/cre/guides/workflow/secrets/managing-secrets-1password)** - Best practice: inject secrets at runtime
**Why this matters:** Hardcoded credentials are a security risk. CRE's secrets management lets you safely use authenticated APIs and private keys.
### 5. Build your own consumer contract
You used a **pre-deployed consumer contract**. **For production workflows, you'll create custom contracts tailored to your use case.**
**Learn the secure pattern:**
- **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)** - Create contracts that safely receive CRE data
**Why this matters:** Consumer contracts enforce business logic and validation onchain, enabling trustless and verifiable execution.
## Reference: Deepen Your Understanding
Want to dive deeper into specific concepts from the Getting Started guide? Use this section as a quick reference.
**Workflow Structure & Triggers**
- **[Core SDK Reference](/cre/reference/sdk/core/)** - Fundamental building blocks (`InitWorkflow`, `Handler`, `Runtime`)
- **[Triggers Overview](/cre/guides/workflow/using-triggers/overview)** - Compare all available event sources
**HTTP & Offchain Data**
- **[API Interactions Guide](/cre/guides/workflow/using-http-client/)** - Complete patterns for HTTP requests
- **[Consensus & Aggregation](/cre/reference/sdk/consensus)** - All aggregation methods (median, mode, custom)
- **[Consensus Computing Concept](/cre/concepts/consensus-computing)** - How CRE's consensus-based execution works
**EVM & Onchain Interactions**
- **[EVM Client Overview](/cre/guides/workflow/using-evm-client/overview)** - Introduction to smart contract interactions
- **[Onchain Read Guide](/cre/guides/workflow/using-evm-client/onchain-read)** - Reading from a smart contract
- **[Onchain Write Guide](/cre/guides/workflow/using-evm-client/onchain-write)** - Complete write patterns and report generation
**Configuration & Secrets**
- **[Project Configuration](/cre/reference/project-configuration/)** - Complete guide to `project.yaml`, `workflow.yaml`, and targets
- **[Secrets Guide](/cre/guides/workflow/secrets)** - All secrets management patterns
**All Capabilities**
- **[Capabilities Overview](/cre/capabilities/)** - See the full list of CRE capabilities and how they work together
---
# Using Triggers
Source: https://docs.chain.link/cre/guides/workflow/using-triggers/overview
Last Updated: 2025-11-04
Triggers are a special type of capability that starts your workflows. They are event-driven services that watch for a specific condition to be met. When the condition occurs, the trigger fires and instructs CRE to run the callback function you have registered for that event.
A single workflow can contain multiple handlers, allowing you to react to different events with specific logic.
This section provides detailed guides for each available trigger type:
- **[Cron Trigger](/cre/guides/workflow/using-triggers/cron-trigger)**: Run workflows on a time-based schedule.
- **[HTTP Trigger](/cre/guides/workflow/using-triggers/http-trigger)**: Start workflows in response to an HTTP request from an external system.
- **[EVM Log Trigger](/cre/guides/workflow/using-triggers/evm-log-trigger)**: Initiate workflows in response to a specific event being emitted by a smart contract.
---
# Generating Contract Bindings
Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/generating-bindings
Last Updated: 2025-11-04
To interact with a smart contract from your Go workflow, you first need to create **bindings**. Bindings are type-safe Go interfaces auto-generated from your contract's ABI. They provide a bridge between your Go code and the EVM.
How they work depends on whether you are reading from or writing to the chain:
- **For onchain reads**, bindings provide Go functions that directly mirror your contract's `view` and `pure` methods.
- **For onchain writes**, bindings provide powerful helper methods to ABI-encode your data structures, preparing them to be sent in a report to a [consumer contract](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts/).
This is a **one-time code generation step** performed using the CRE CLI.
## The generation process
The CRE CLI provides an automated binding generator that reads contract ABIs and creates corresponding Go packages.
### Step 1: Add your contract ABI
Place your contract's ABI JSON file into the `contracts/evm/src/abi/` directory. For example, to generate bindings for a `PriceUpdater` contract, you would create `contracts/evm/src/abi/PriceUpdater.abi` with your ABI content.
### Step 2: Generate the bindings
From your **project root**, run the binding generator:
```bash
cre generate-bindings evm
```
This command scans all `.abi` files in `contracts/evm/src/abi/` and generates corresponding Go packages in `contracts/evm/src/generated/`. For each contract, two files are generated:
- `.go` — The main binding for interacting with the contract
- `_mock.go` — A mock implementation for testing your workflows without deploying contracts
## Using generated bindings
### For onchain reads
For `view` or `pure` functions, the generator creates a client with methods that you can call directly. These methods return a `Promise`, which you must `.Await()` to get the result after consensus.
**Example: A simple `Storage` contract**
If you have a `Storage.abi` for a contract with a `get()` view function, you can use the bindings like this:
```go
// Import the generated package for your contract, replacing "" with your project's module name
import "/contracts/evm/src/generated/storage"
import "github.com/ethereum/go-ethereum/common"
// In your workflow function...
evmClient := &evm.Client{ ChainSelector: config.ChainSelector }
contractAddress := common.HexToAddress(config.ContractAddress)
// Create a new contract instance
storageContract, err := storage.NewStorage(evmClient, contractAddress, nil)
if err != nil { /* ... */ }
// Call a read-only method; Await() returns the decoded type directly
value, err := storageContract.Get(runtime, big.NewInt(-3)).Await() // -3 = finalized block
if err != nil { /* ... */ }
// value is already a *big.Int, ready to use!
```
### For onchain writes
For onchain writes, your goal is to send an ABI-encoded report to your [consumer contract](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts/). The binding generator creates helper methods that handle the entire process: creating the report, sending it for consensus, and delivering it to the chain.
#### Signaling the generator
To generate the necessary Go types and write helpers, your ABI must include at least one **`public` or `external` function that uses the data `struct` you want to send as a parameter**.
The generated helper method is named after the **input struct type**. For example, a struct named `PriceData` will generate a `WriteReportFromPriceData` helper.
**Example: A `PriceUpdater` contract ABI**
This ABI contains a `PriceData` struct and a public `updatePrices` function. This is all the generator needs.
```solidity
// contracts/evm/src/PriceUpdater.sol
// This contract can be used purely to generate the bindings.
// The actual onchain logic can live elsewhere.
contract PriceUpdater {
struct PriceData {
uint256 ethPrice;
uint256 btcPrice;
}
// The struct type (`PriceData`) determines the generated helper name.
// The generator will create a `WriteReportFromPriceData` method.
function updatePrices(PriceData memory) public {}
}
```
#### Using write bindings in a workflow
After running `cre generate-bindings`, you can use the generated `PriceUpdater` client to send a report. The workflow code will look like this:
```go
// Import the generated package for your contract, replacing "" with your project's module name
import "/contracts/evm/src/generated/price_updater"
import "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
import "github.com/ethereum/go-ethereum/common"
import "math/big"
import "fmt"
// In your workflow function...
// The address should be your PROXY contract's address.
contractAddress := common.HexToAddress(config.ProxyAddress)
evmClient := &evm.Client{ ChainSelector: config.ChainSelector }
// 1. Create a new contract instance using the generated bindings.
// Even though it's called `price_updater`, it's configured with your proxy address.
priceUpdater, err := price_updater.NewPriceUpdater(evmClient, contractAddress, nil)
if err != nil { /* ... */ }
// 2. Instantiate the generated Go struct with your data.
reportData := price_updater.PriceData{
EthPrice: big.NewInt(4000_000000),
BtcPrice: big.NewInt(60000_000000),
}
// 3. Call the generated WriteReportFrom method on the contract instance.
// This method name is derived from the input struct of your contract's function.
writePromise := priceUpdater.WriteReportFromPriceData(runtime, reportData, nil)
// 4. Await the promise to confirm the transaction has been mined.
resp, err := writePromise.Await()
if err != nil {
return nil, fmt.Errorf("WriteReport await failed: %w", err)
}
// 5. The response contains the transaction hash.
logger := runtime.Logger()
logger.Info("Write report transaction succeeded", "txHash", common.BytesToHash(resp.TxHash).Hex())
```
### For event logs
The binding generator also creates powerful helpers for interacting with your contract's events. You can easily trigger a workflow when an event is emitted and decode the event data into a type-safe Go struct.
**Example: A contract with a `UserAdded` event**
```solidity
contract UserDirectory {
event UserAdded(address indexed userAddress, string userName);
function addUser(string calldata userName) external {
emit UserAdded(msg.sender, userName);
}
}
```
#### Triggering and Decoding Events
After generating bindings for the `UserDirectory` ABI, you can use the helpers to create a trigger and decode the logs in your handler.
```go
import (
"fmt"
"log/slog"
"/contracts/evm/src/generated/user_directory" // Replace "" with your project's module name
"github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
"github.com/smartcontractkit/cre-sdk-go/cre"
)
// In InitWorkflow, create an instance of the contract binding and use it
// to generate a trigger for the "UserAdded" event.
func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
// ...
userDirectory, err := user_directory.NewUserDirectory(evmClient, contractAddress, nil)
if err != nil { /* ... */ }
// Use the generated helper to create a trigger for the UserAdded event.
// Set confidence to evm.ConfidenceLevel_CONFIDENCE_LEVEL_FINALIZED to only trigger on finalized blocks.
// The last argument (filters) is nil to listen for all UserAdded events.
userAddedTrigger, err := userDirectory.LogTriggerUserAddedLog(chainSelector, evm.ConfidenceLevel_CONFIDENCE_LEVEL_FINALIZED, nil)
if err != nil { /* ... */ }
return cre.Workflow[*Config]{
cre.Handler(
userAddedTrigger,
onUserAdded,
),
}, nil
}
// The handler function receives the raw event log.
func onUserAdded(config *Config, runtime cre.Runtime, log *evm.Log) (string, error) {
logger := runtime.Logger()
// You must re-create the contract instance to access the decoder.
userDirectory, err := user_directory.NewUserDirectory(evmClient, contractAddress, nil)
if err != nil { /* ... */ }
// Use the generated Codec to decode the raw log into a typed Go struct.
decodedLog, err := userDirectory.Codec.DecodeUserAdded(log)
if err != nil {
return "", fmt.Errorf("failed to decode log: %w", err)
}
logger.Info("New user added!", "address", decodedLog.UserAddress, "name", decodedLog.UserName)
return "ok", nil
}
```
## What the CLI Generates
The generator creates a Go package for each ABI file.
- **For all contracts**:
- `Codec` interface for low-level encoding and decoding.
- **For onchain reads**:
- A contract **client struct** (e.g., `Storage`) to interact with.
- A **constructor function** (e.g., `NewStorage(...)`) to instantiate the client.
- **Method wrappers** for each `view`/`pure` function (e.g., `storage.Get(...)`) that return a promise.
- **For onchain writes**:
- A **Go type** for each `struct` exposed via a public function (e.g., `price_updater.PriceData`).
- A `WriteReportFrom` method on the **contract client struct** (e.g., `priceUpdater.WriteReportFromPriceData(...)`). This method handles the full process of generating and sending a report and returns a promise that resolves with the transaction details.
- **For events**:
- A **Go struct** for each `event` (e.g., `UserAdded`).
- A `Decode<Event>` method on the `Codec` (e.g., `DecodeUserAdded`) to parse raw log data into the corresponding Go struct.
- A `LogTrigger<Event>Log` method on the contract client (e.g., `LogTriggerUserAddedLog`) to easily create a workflow trigger.
- A `FilterLogs` method to query historical logs for that event.
## Using mock bindings for testing
The `_mock.go` files allow you to test your workflows without deploying or interacting with real contracts. Each mock struct provides:
- **Test-friendly constructor**: `NewMock(address, evmMockClient)` creates a mock instance
- **Mockable methods**: Set custom function implementations for each contract `view`/`pure` function
- **Type safety**: The same input/output types as the real binding
### Complete example: Testing a workflow with mocks
Let's say you have a workflow in `my-workflow/main.go` that reads from a `Storage` contract. Create a test file named `main_test.go` in the same directory.
```go
// File: my-workflow/main_test.go
package main
import (
"math/big"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
evmmock "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm/mock"
"github.com/stretchr/testify/require"
"your-project/contracts/evm/src/generated/storage"
)
// Define your config types in the test file to match your workflow's structure
// Note: Your main.go likely has //go:build wasip1 (for WASM compilation),
// which means those types aren't available when running regular Go tests.
// So you need to redefine them here in your test file.
type EvmConfig struct {
StorageAddress string `json:"storageAddress"`
ChainName string `json:"chainName"`
}
type Config struct {
Evms []EvmConfig `json:"evms"`
}
func TestStorageRead(t *testing.T) {
// 1. Set up your config
config := &Config{
Evms: []EvmConfig{
{
StorageAddress: "0xa17CF997C28FF154eDBae1422e6a50BeF23927F4",
ChainName: "ethereum-testnet-sepolia",
},
},
}
// 2. Create a mock EVM client
chainSelector := uint64(evm.EthereumTestnetSepolia)
evmMock, err := evmmock.NewClientCapability(chainSelector, t)
require.NoError(t, err)
// 3. Create a mock Storage contract and set up mock behavior
storageAddress := common.HexToAddress(config.Evms[0].StorageAddress)
storageMock := storage.NewStorageMock(storageAddress, evmMock)
// 4. Mock the Get() function to return a controlled value
storageMock.Get = func() (*big.Int, error) {
return big.NewInt(42), nil
}
// 5. Now when your workflow code creates a Storage contract with this evmMock,
// it will automatically use the mocked Get() function.
// The mock is registered with the evmMock, so any contract at this address
// will use the mock behavior you defined.
// In a real test, you would call your workflow function here and verify results.
// Example:
// result, err := onCronTrigger(config, runtime, &cron.Payload{})
// require.NoError(t, err)
// require.Equal(t, big.NewInt(42), result.StorageValue)
// For this demo, we just verify the mock was set up
require.NotNil(t, storageMock)
t.Logf("Mock set up successfully - Get() will return 42")
}
```
### Running your tests
From your project root, run:
```bash
# Test a specific workflow
go test ./my-workflow
# Test with verbose output (shows t.Logf messages)
go test -v ./my-workflow
# Test all workflows in your project
go test ./...
```
**Expected output with `-v` flag:**
```bash
=== RUN   TestStorageRead
    main_test.go:55: Mock set up successfully - Get() will return 42
--- PASS: TestStorageRead (0.00s)
PASS
ok      your-project/my-workflow    0.257s
```
The test passes, confirming your mock contract is set up correctly. In a real workflow test, you would call your workflow function and verify it produces the expected results using the mocked contract.
### Best practices for workflow testing
1. **Name test files correctly**: Use `_test.go` (e.g., `main_test.go`) and place them in your workflow directory
2. **Test function naming**: Start test functions with `Test` (e.g., `TestMyWorkflow`, `TestCronTrigger`)
3. **Mock all external dependencies**: Use mock contracts for EVM calls and mock HTTP clients for API requests
4. **Test different scenarios**: Create separate test functions for success cases, error cases, and edge cases
### Complete reference example
For a comprehensive example showing how to test workflows with multiple triggers (cron, HTTP, EVM log) and multiple mock contracts, see the Custom Data Feed demo workflow's `workflow_test.go` file.
To generate this example:
1. Run `cre init` from your project directory
2. Select **Golang** as your language
3. Choose the **"Custom data feed: Updating on-chain data periodically using offchain API data"** template
4. After initialization completes, examine the generated `workflow_test.go` file in your workflow directory
This generated test file demonstrates real-world patterns for testing complex workflows with multiple capabilities and mock contracts.
## Best practices
1. **Regenerate when needed**: Re-run the generator if you update your contract ABIs.
2. **Handle errors**: Always check for errors at each step.
3. **Organize ABIs**: Keep your ABI files clearly named in the `contracts/evm/src/abi/` directory.
4. **Use mocks in tests**: Leverage the generated mock bindings to test your workflows in isolation without needing deployed contracts.
## Where to go next
Now that you know how to generate bindings, you can use them to [read data from](/cre/guides/workflow/using-evm-client/onchain-read) or [write data to](/cre/guides/workflow/using-evm-client/onchain-write) your contracts, or [trigger workflows from events](/cre/guides/workflow/using-triggers/evm-log-trigger).
---
# Building Consumer Contracts
Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts
Last Updated: 2025-11-04
When your workflow [writes data to the blockchain](/cre/guides/workflow/using-evm-client/onchain-write), it doesn't call your contract directly. Instead, it submits a signed report to a Chainlink `KeystoneForwarder` contract, which then calls your contract.
This guide explains how to build a consumer contract that can securely receive and process data from a CRE workflow.
**In this guide:**
- [Core Concepts: The Onchain Data Flow](#1-core-concepts-the-onchain-data-flow)
- [The IReceiver Standard](#2-the-ireceiver-standard)
- [Using IReceiverTemplate](#3-using-ireceivertemplate)
- [Advanced Usage](#4-advanced-usage-optional)
- [Complete Examples](#5-complete-examples)
## 1. Core Concepts: The Onchain Data Flow
1. **Workflow Execution**: Your workflow [produces a final, signed report](/cre/guides/workflow/using-evm-client/onchain-write/writing-data-onchain).
2. **EVM Write**: The EVM capability sends this report to the Chainlink-managed `KeystoneForwarder` contract.
3. **Forwarder Validation**: The `KeystoneForwarder` validates the report's signatures.
4. **Callback to Your Contract**: If the report is valid, the forwarder calls a designated function (`onReport`) on your consumer contract to deliver the data.
## 2. The `IReceiver` Standard
To be a valid target for the `KeystoneForwarder`, your consumer contract must satisfy two main requirements:
### 2.1 Implement the `IReceiver` Interface
The `KeystoneForwarder` needs a standardized function to call. This is defined by the `IReceiver` interface, which mandates an `onReport` function.
```solidity
interface IReceiver is IERC165 {
function onReport(bytes calldata metadata, bytes calldata report) external;
}
```
- `metadata`: Contains information about the workflow (ID, name, owner).
- `report`: The raw, ABI-encoded data payload from your workflow.
### 2.2 Support ERC165 Interface Detection
[ERC165](https://eips.ethereum.org/EIPS/eip-165) is a standard that allows contracts to publish the interfaces they support. The `KeystoneForwarder` uses this to check if your contract supports the `IReceiver` interface before sending a report.
## 3. Using `IReceiverTemplate`
### 3.1 Overview
While you can implement these standards manually, we provide an abstract contract, `IReceiverTemplate.sol`, that does the heavy lifting for you. Inheriting from it is the recommended best practice.
**Key features:**
- **Optional Permission Controls**: Choose your security level—enable forwarder address checks, workflow ID validation, workflow owner verification, or any combination
- **Flexible and Updatable**: All permission settings can be configured and updated via setter functions after deployment
- **Simplified Logic**: You only need to implement `_processReport(bytes calldata report)` with your business logic
- **Built-in Access Control**: Includes OpenZeppelin's `Ownable` for secure permission management
- **ERC165 Support**: Includes the necessary `supportsInterface` function
- **Metadata Access**: Helper function to decode workflow ID, name, and owner for custom validation logic
### 3.2 Contract Source Code
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
import {IERC165} from "./IERC165.sol";
import {IReceiver} from "./IReceiver.sol";
import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol";
/// @title IReceiverTemplate - Abstract receiver with optional permission controls
/// @notice Provides flexible, updatable security checks for receiving workflow reports
/// @dev All permission fields default to zero (disabled). Use setter functions to enable checks.
abstract contract IReceiverTemplate is IReceiver, Ownable {
// Optional permission fields (all default to zero = disabled)
address public forwarderAddress; // If set, only this address can call onReport
address public expectedAuthor; // If set, only reports from this workflow owner are accepted
bytes10 public expectedWorkflowName; // If set, only reports with this workflow name are accepted
bytes32 public expectedWorkflowId; // If set, only reports from this specific workflow ID are accepted
// Custom errors
error InvalidSender(address sender, address expected);
error InvalidAuthor(address received, address expected);
error InvalidWorkflowName(bytes10 received, bytes10 expected);
error InvalidWorkflowId(bytes32 received, bytes32 expected);
/// @notice Constructor sets msg.sender as the owner
/// @dev All permission fields are initialized to zero (disabled by default)
constructor() Ownable(msg.sender) {}
/// @inheritdoc IReceiver
/// @dev Performs optional validation checks based on which permission fields are set
function onReport(bytes calldata metadata, bytes calldata report) public virtual override {
// Security Check 1: Verify caller is the trusted Chainlink Forwarder (if configured)
if (forwarderAddress != address(0) && msg.sender != forwarderAddress) {
revert InvalidSender(msg.sender, forwarderAddress);
}
// Security Checks 2-4: Verify workflow identity - ID, owner, and/or name (if any are configured)
if (expectedWorkflowId != bytes32(0) || expectedAuthor != address(0) || expectedWorkflowName != bytes10(0)) {
(bytes32 workflowId, bytes10 workflowName, address workflowOwner) = _decodeMetadata(metadata);
if (expectedWorkflowId != bytes32(0) && workflowId != expectedWorkflowId) {
revert InvalidWorkflowId(workflowId, expectedWorkflowId);
}
if (expectedAuthor != address(0) && workflowOwner != expectedAuthor) {
revert InvalidAuthor(workflowOwner, expectedAuthor);
}
if (expectedWorkflowName != bytes10(0) && workflowName != expectedWorkflowName) {
revert InvalidWorkflowName(workflowName, expectedWorkflowName);
}
}
_processReport(report);
}
/// @notice Updates the forwarder address that is allowed to call onReport
/// @param _forwarder The new forwarder address (use address(0) to disable this check)
function setForwarderAddress(address _forwarder) external onlyOwner {
forwarderAddress = _forwarder;
}
/// @notice Updates the expected workflow owner address
/// @param _author The new expected author address (use address(0) to disable this check)
function setExpectedAuthor(address _author) external onlyOwner {
expectedAuthor = _author;
}
/// @notice Updates the expected workflow name
/// @param _name The new expected workflow name (use bytes10(0) to disable this check)
function setExpectedWorkflowName(bytes10 _name) external onlyOwner {
expectedWorkflowName = _name;
}
/// @notice Updates the expected workflow ID
/// @param _id The new expected workflow ID (use bytes32(0) to disable this check)
function setExpectedWorkflowId(bytes32 _id) external onlyOwner {
expectedWorkflowId = _id;
}
/// @notice Extracts all metadata fields from the onReport metadata parameter
/// @param metadata The metadata in bytes format
/// @return workflowId The unique identifier of the workflow (bytes32)
/// @return workflowName The name of the workflow (bytes10)
/// @return workflowOwner The owner address of the workflow
function _decodeMetadata(bytes memory metadata)
internal
pure
returns (bytes32 workflowId, bytes10 workflowName, address workflowOwner)
{
// Metadata structure:
// - First 32 bytes: length of the byte array (standard for dynamic bytes)
// - Offset 32, size 32: workflow_id (bytes32)
// - Offset 64, size 10: workflow_name (bytes10)
// - Offset 74, size 20: workflow_owner (address)
assembly {
workflowId := mload(add(metadata, 32))
workflowName := mload(add(metadata, 64))
workflowOwner := shr(mul(12, 8), mload(add(metadata, 74)))
}
}
/// @notice Abstract function to process the report data
/// @param report The report calldata containing your workflow's encoded data
/// @dev Implement this function with your contract's business logic
function _processReport(bytes calldata report) internal virtual;
/// @inheritdoc IERC165
function supportsInterface(bytes4 interfaceId) public pure virtual override returns (bool) {
return interfaceId == type(IReceiver).interfaceId || interfaceId == type(IERC165).interfaceId;
}
}
```
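The offsets in `_decodeMetadata` describe a tightly packed 62-byte payload: a 32-byte workflow ID, a 10-byte name, then a 20-byte owner address (the Solidity offsets start at 32 only because a `bytes` value in memory is prefixed by its length word). To make the layout concrete, here is a stdlib-only Go sketch that parses the same raw payload (illustrative only; not part of the SDK):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// decodeMetadata splits the packed report metadata into its three fields,
// mirroring the Solidity assembly above: id (32 bytes) | name (10) | owner (20).
func decodeMetadata(metadata []byte) (id [32]byte, name [10]byte, owner [20]byte, err error) {
	if len(metadata) < 62 {
		return id, name, owner, fmt.Errorf("metadata too short: %d bytes", len(metadata))
	}
	copy(id[:], metadata[0:32])
	copy(name[:], metadata[32:42])
	copy(owner[:], metadata[42:62])
	return id, name, owner, nil
}

func main() {
	// Build a sample payload: zero ID, "my_workflo" as the name, a dummy owner.
	payload := make([]byte, 62)
	copy(payload[32:42], []byte("my_workflo"))
	payload[61] = 0x01

	_, name, owner, err := decodeMetadata(payload)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(name[:]))                // my_workflo
	fmt.Println(hex.EncodeToString(owner[19:])) // 01
}
```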
### 3.3 Quick Start
**Step 1: Inherit and implement your business logic**
The simplest way to use `IReceiverTemplate` is to inherit from it and implement the `_processReport` function:
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.26;
import { IReceiverTemplate } from "./IReceiverTemplate.sol";
contract MyConsumer is IReceiverTemplate {
uint256 public storedValue;
event ValueUpdated(uint256 newValue);
// Simple constructor - no parameters needed
constructor() IReceiverTemplate() {}
// Implement your business logic here
function _processReport(bytes calldata report) internal override {
uint256 newValue = abi.decode(report, (uint256));
storedValue = newValue;
emit ValueUpdated(newValue);
}
}
```
### 3.4 Configuring Security
**Step 2: Configure permissions (optional)**
After deploying your contract, the owner can enable any combination of security checks using the setter functions.
**Configuration examples:**
```solidity
// Example: Enable forwarder check only
myConsumer.setForwarderAddress(0xF8344CFd5c43616a4366C34E3EEE75af79a74482); // Ethereum Sepolia
// Example: Enable workflow ID check
myConsumer.setExpectedWorkflowId(0x1234...); // Your specific workflow ID
// Example: Enable workflow owner and name checks
myConsumer.setExpectedAuthor(0xYourAddress...);
myConsumer.setExpectedWorkflowName(0x6d795f776f726b666c6f); // "my_workflo" in hex (bytes10 = 10 chars max)
// Example: Disable a check later (set to zero)
myConsumer.setExpectedWorkflowName(bytes10(0));
```
**What the template handles for you:**
- Validates the caller address (if `forwarderAddress` is set)
- Validates the workflow ID (if `expectedWorkflowId` is set)
- Validates the workflow owner (if `expectedAuthor` is set)
- Validates the workflow name (if `expectedWorkflowName` is set)
- Provides ERC165 interface detection
- Provides access control via OpenZeppelin's `Ownable`
- Calls your `_processReport` function with validated data
**What you implement:**
- Your business logic in `_processReport`
- (Optional) Configure permissions after deployment using setter functions
## 4. Advanced Usage (Optional)
### 4.1 Custom Validation Logic
You can override `onReport` to add your own validation logic before or after the standard checks:
```solidity
import { IReceiverTemplate } from "./IReceiverTemplate.sol";
contract AdvancedConsumer is IReceiverTemplate {
uint256 public minReportInterval = 1 hours;
uint256 public lastReportTime;
error ReportTooFrequent(uint256 timeSinceLastReport, uint256 minInterval);
// Add custom validation before parent's checks
function onReport(bytes calldata metadata, bytes calldata report) external override {
// Custom check: Rate limiting
if (block.timestamp < lastReportTime + minReportInterval) {
revert ReportTooFrequent(block.timestamp - lastReportTime, minReportInterval);
}
// Call parent implementation for standard permission checks
super.onReport(metadata, report);
lastReportTime = block.timestamp;
}
function _processReport(bytes calldata report) internal override {
// Your business logic here
uint256 value = abi.decode(report, (uint256));
// ... store or process the value ...
}
// Allow owner to update rate limit
function setMinReportInterval(uint256 _interval) external onlyOwner {
minReportInterval = _interval;
}
}
```
### 4.2 Using Metadata Fields in Your Logic
The `_decodeMetadata` helper function lets you access workflow metadata for custom business logic. Because `_processReport` receives only the report bytes, override `onReport` to decode the metadata first, then delegate to the parent implementation for the standard checks:
```solidity
contract MetadataAwareConsumer is IReceiverTemplate {
    mapping(bytes32 => uint256) public reportCountByWorkflow;
    function onReport(bytes calldata metadata, bytes calldata report) external override {
        // Decode the metadata to get the workflow ID
        (bytes32 workflowId, , ) = _decodeMetadata(metadata);
        // Use the workflow ID in your business logic
        reportCountByWorkflow[workflowId]++;
        // Parent implementation runs the permission checks and calls _processReport
        super.onReport(metadata, report);
    }
    function _processReport(bytes calldata report) internal override {
        // Process the report data
        uint256 value = abi.decode(report, (uint256));
        // ... your logic here ...
    }
}
```
### 4.3 Working with Simulation
When you run `cre workflow simulate`, your workflow interacts with a **`MockForwarder`** contract that does not provide the `workflow_name` or `workflow_owner` metadata. This means consumer contracts with `IReceiverTemplate`'s default validation **will fail during simulation**.
**To test your consumer contract with simulation:**
Override the `onReport` function to bypass validation checks:
```solidity
function onReport(bytes calldata, bytes calldata report) external override {
_processReport(report); // Skips validation checks
}
```
**For deployed workflows:**
Deployed workflows use the real **`KeystoneForwarder`** contract, which provides full metadata. You can enable all permission checks (forwarder address, workflow ID, owner, name) for production deployments.
## 5. Complete Examples
### Example 1: Simple Consumer Contract
This example inherits from `IReceiverTemplate` to store a temperature value.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.26;
import { IReceiverTemplate } from "./IReceiverTemplate.sol";
contract TemperatureConsumer is IReceiverTemplate {
int256 public currentTemperature;
event TemperatureUpdated(int256 newTemperature);
// Simple constructor - no parameters needed
constructor() IReceiverTemplate() {}
function _processReport(bytes calldata report) internal override {
int256 newTemperature = abi.decode(report, (int256));
currentTemperature = newTemperature;
emit TemperatureUpdated(newTemperature);
}
}
```
**Configuring permissions after deployment:**
```solidity
// Enable forwarder check for production
temperatureConsumer.setForwarderAddress(0xF8344CFd5c43616a4366C34E3EEE75af79a74482); // Ethereum Sepolia
// Enable workflow ID check for highest security
temperatureConsumer.setExpectedWorkflowId(0xYourWorkflowId...);
```
### Example 2: The Proxy Pattern
For more complex scenarios, it's best to separate your Chainlink-aware code from your core business logic. The **Proxy Pattern** is a robust architecture that uses two contracts to achieve this:
- **A Logic Contract**: Holds the state and the core functions of your application. It knows nothing about the Forwarder contract or the `onReport` function.
- **A Proxy Contract**: Acts as the secure entry point. It inherits from `IReceiverTemplate` and forwards validated reports to the Logic Contract.
This separation makes your business logic more modular and reusable.
#### The Logic Contract (`ReserveManager.sol`)
This contract, our "vault", holds the state and the `updateReserves` function. For security, it only accepts calls from its trusted Proxy. It also includes an owner-only function to update the proxy address, making the system upgradeable without requiring a migration.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;
import { Ownable } from "@openzeppelin/contracts/access/Ownable.sol";
contract ReserveManager is Ownable {
struct UpdateReserves {
uint256 ethPrice;
uint256 btcPrice;
}
address public proxyAddress;
uint256 public lastEthPrice;
uint256 public lastBtcPrice;
uint256 public lastUpdateTime;
event ReservesUpdated(uint256 ethPrice, uint256 btcPrice, uint256 updateTime);
modifier onlyProxy() {
require(msg.sender == proxyAddress, "Caller is not the authorized proxy");
_;
}
constructor() Ownable(msg.sender) {}
function setProxyAddress(address _proxyAddress) external onlyOwner {
proxyAddress = _proxyAddress;
}
function updateReserves(UpdateReserves memory data) external onlyProxy {
lastEthPrice = data.ethPrice;
lastBtcPrice = data.btcPrice;
lastUpdateTime = block.timestamp;
emit ReservesUpdated(data.ethPrice, data.btcPrice, block.timestamp);
}
}
```
#### The Proxy Contract (`UpdateReservesProxy.sol`)
This contract, our "bouncer", is the only contract that interacts with the Chainlink platform. It inherits `IReceiverTemplate` to validate incoming reports and then calls the `ReserveManager`.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;
import { ReserveManager } from "./ReserveManager.sol";
import { IReceiverTemplate } from "./keystone/IReceiverTemplate.sol";
contract UpdateReservesProxy is IReceiverTemplate {
ReserveManager public s_reserveManager;
constructor(address reserveManagerAddress) {
s_reserveManager = ReserveManager(reserveManagerAddress);
}
/// @inheritdoc IReceiverTemplate
function _processReport(bytes calldata report) internal override {
ReserveManager.UpdateReserves memory updateReservesData = abi.decode(report, (ReserveManager.UpdateReserves));
s_reserveManager.updateReserves(updateReservesData);
}
}
```
**Configuring permissions after deployment:**
```solidity
// Enable forwarder check (recommended)
updateReservesProxy.setForwarderAddress(0xF8344CFd5c43616a4366C34E3EEE75af79a74482); // Ethereum Sepolia
// Enable workflow ID check for production (highest security)
updateReservesProxy.setExpectedWorkflowId(0xYourWorkflowId...);
```
#### How it Works
The deployment and configuration process involves these steps:
1. **Deploy the Logic Contract**: Deploy `ReserveManager.sol`. The wallet that deploys this contract becomes its `owner`.
2. **Deploy the Proxy Contract**: Deploy `UpdateReservesProxy.sol`, passing the address of the deployed `ReserveManager` contract to its constructor.
3. **Link the Contracts**: The `owner` of the `ReserveManager` contract must call its `setProxyAddress` function, passing in the address of the `UpdateReservesProxy` contract. This authorizes the proxy to call the logic contract.
4. **Configure Permissions** (Recommended): The `owner` of the proxy should call setter functions to enable security checks:
```solidity
updateReservesProxy.setForwarderAddress(0xF8344CFd5c43616a4366C34E3EEE75af79a74482);
updateReservesProxy.setExpectedWorkflowId(0xYourWorkflowId...);
```
5. **Configure Workflow**: In your workflow's `config.json`, use the address of the **Proxy Contract** as the receiver address.
6. **Execution Flow**: When your workflow runs:
- The Chainlink Forwarder calls `onReport` on your **Proxy**
- The Proxy validates the report (forwarder address, workflow ID, etc.)
- The Proxy's `_processReport` function calls the `updateReserves` function on your **Logic Contract**
- Because the caller is the trusted proxy, the `onlyProxy` check passes, and your state is securely updated
7. **(Optional) Upgrade**: If you later need to deploy a new proxy, the owner can:
- Deploy the new proxy contract
- Call `setProxyAddress` on the `ReserveManager` to point it to the new proxy's address
- Update the workflow configuration to use the new proxy address
## Where to go next?
Now that you know how to build a consumer contract, the next step is to call it from your workflow.
- **[Onchain Write](/cre/guides/workflow/using-evm-client/onchain-write)**: Learn how to use the `EVMClient` to send data to your new consumer contract.
---
# Using WriteReportFrom Helpers
Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-write/using-write-report-helpers
Last Updated: 2025-11-04
This guide explains how to write data to a smart contract using the `WriteReportFrom()` helper methods that are automatically generated from your contract's ABI. This is the recommended and simplest approach for most users.
**Use this approach when:**
- You're sending a **struct** to your consumer contract
- The struct appears in a `public` or `external` function's signature (as a parameter or return value); this is required for the binding generator to detect it in your contract's ABI and create the helper method
**Don't meet these requirements?** See the [Onchain Write](/cre/guides/workflow/using-evm-client/onchain-write#choosing-your-approach-which-guide-should-you-follow) page to find the right approach for your scenario.
## Prerequisites
Before you begin, ensure you have:
1. **A consumer contract** deployed that implements the `IReceiver` interface
- See [Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts) if you need to create one
2. **Generated bindings** from your consumer contract's ABI
- See [Generating Bindings](/cre/guides/workflow/using-evm-client/generating-bindings) if you haven't created them yet
## What the helper does for you
The `WriteReportFrom()` helper method automates the entire onchain write process:
1. **ABI-encodes your struct** into bytes
2. **Generates a cryptographically signed report** via `runtime.GenerateReport()`
3. **Submits the report to the blockchain** via `evm.Client.WriteReport()`
4. **Returns a promise** with the transaction details
All of this happens in a single method call, making your workflow code clean and simple.
## The write pattern
Writing to contracts using binding helpers follows this simple pattern:
1. **Create an EVM client** with your target chain selector
2. **Instantiate the contract binding** with the consumer contract's address
3. **Prepare your data** using the generated struct type
4. **Call the write helper** and await the result
Let's walk through each step with a complete example.
## Step-by-step example
Assume you have a consumer contract with a struct that looks like this:
```solidity
struct UpdateReserves {
uint256 totalMinted;
uint256 totalReserve;
}
// This function makes the struct appear in the ABI
function processReserveUpdate(UpdateReserves memory update) public {
// ... logic
}
```
### Step 1: Create an EVM client
First, create an EVM client configured for the chain where your consumer contract is deployed:
```go
import (
"github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
)
func updateReserves(config *Config, runtime cre.Runtime, evmConfig EvmConfig, totalSupply *big.Int, totalReserveScaled *big.Int) error {
logger := runtime.Logger()
// Create EVM client with your target chain
evmClient := &evm.Client{
ChainSelector: evmConfig.ChainSelector, // e.g., 16015286601757825753 for Sepolia
}
```
### Step 2: Instantiate the contract binding
Create an instance of your [generated binding](/cre/guides/workflow/using-evm-client/generating-bindings), pointing it at your consumer contract's address:
```go
import (
"contracts/evm/src/generated/reserve_manager"
"github.com/ethereum/go-ethereum/common"
)
// Convert the address string from your config to common.Address
contractAddress := common.HexToAddress(evmConfig.ConsumerAddress)
// Create the binding instance
reserveManager, err := reserve_manager.NewReserveManager(evmClient, contractAddress, nil)
if err != nil {
return fmt.Errorf("failed to create reserve manager: %w", err)
}
```
### Step 3: Prepare your data
Create an instance of the generated struct type with your data:
```go
// Use the generated struct type from your bindings
updateData := reserve_manager.UpdateReserves{
TotalMinted: totalSupply, // *big.Int
TotalReserve: totalReserveScaled, // *big.Int
}
logger.Info("Prepared data for onchain write",
"totalMinted", totalSupply.String(),
"totalReserve", totalReserveScaled.String())
```
### Step 4: Call the write helper and await
Call the generated `WriteReportFrom()` method and await the result:
```go
// Call the generated helper - it handles encoding, report generation, and submission
writePromise := reserveManager.WriteReportFromUpdateReserves(runtime, updateData, nil)
logger.Info("Waiting for write report response")
// Await the transaction result
resp, err := writePromise.Await()
if err != nil {
logger.Error("WriteReport failed", "error", err)
return fmt.Errorf("failed to write report: %w", err)
}
// Log the successful transaction
txHash := common.BytesToHash(resp.TxHash).Hex()
logger.Info("Write report transaction succeeded", "txHash", txHash)
return nil
}
```
## Understanding the response
The write helper returns an `evm.WriteReportReply` struct with comprehensive transaction details:
```go
type WriteReportReply struct {
TxStatus TxStatus // SUCCESS, REVERTED, or FATAL
ReceiverContractExecutionStatus *ReceiverContractExecutionStatus // Contract execution status
TxHash []byte // Transaction hash
TransactionFee *pb.BigInt // Fee paid in Wei
ErrorMessage *string // Error message if failed
}
```
**Key fields to check:**
- **`TxStatus`**: Indicates whether the transaction succeeded, reverted, or had a fatal error
- **`TxHash`**: The transaction hash you can use to verify on a block explorer (e.g., Etherscan)
- **`TransactionFee`**: The total gas cost paid for the transaction in Wei
- **`ReceiverContractExecutionStatus`**: Whether your consumer contract's `onReport()` function executed successfully
- **`ErrorMessage`**: If the transaction failed, this field contains details about what went wrong
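These checks can be centralized in a small helper. The sketch below is self-contained for illustration: the types mirror the fields shown above (the real definitions live in the SDK's `evm` package), and `checkWrite` is a hypothetical helper name, not an SDK function:

```go
package main

import "fmt"

// Illustrative mirrors of the SDK's reply types.
type TxStatus int

const (
	TxSuccess TxStatus = iota
	TxReverted
	TxFatal
)

type WriteReportReply struct {
	TxStatus     TxStatus
	TxHash       []byte
	ErrorMessage *string
}

// checkWrite converts a reply into a Go error so both failure modes
// (reverted and fatal) are handled in one place.
func checkWrite(resp *WriteReportReply) error {
	if resp.TxStatus == TxSuccess {
		return nil
	}
	msg := "no error message provided"
	if resp.ErrorMessage != nil {
		msg = *resp.ErrorMessage
	}
	return fmt.Errorf("write report failed (status %d): %s", resp.TxStatus, msg)
}
```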
## Complete example
Here's a complete, runnable workflow function that demonstrates the end-to-end pattern:
```go
package main
import (
"contracts/evm/src/generated/reserve_manager"
"fmt"
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
"github.com/smartcontractkit/cre-sdk-go/cre"
)
type EvmConfig struct {
ConsumerAddress string `json:"consumerAddress"`
ChainSelector uint64 `json:"chainSelector"`
}
type Config struct {
// Add other config fields from your workflow here
}
func updateReserves(config *Config, runtime cre.Runtime, evmConfig EvmConfig, totalSupply *big.Int, totalReserveScaled *big.Int) error {
logger := runtime.Logger()
logger.Info("Updating reserves", "totalSupply", totalSupply, "totalReserveScaled", totalReserveScaled)
// Create EVM client with chain selector
evmClient := &evm.Client{
ChainSelector: evmConfig.ChainSelector,
}
// Create contract binding
contractAddress := common.HexToAddress(evmConfig.ConsumerAddress)
reserveManager, err := reserve_manager.NewReserveManager(evmClient, contractAddress, nil)
if err != nil {
return fmt.Errorf("failed to create reserve manager: %w", err)
}
logger.Info("Writing report", "totalSupply", totalSupply, "totalReserveScaled", totalReserveScaled)
// Call the write method
writePromise := reserveManager.WriteReportFromUpdateReserves(runtime, reserve_manager.UpdateReserves{
TotalMinted: totalSupply,
TotalReserve: totalReserveScaled,
}, nil)
logger.Info("Waiting for write report response")
// Await the transaction
resp, err := writePromise.Await()
if err != nil {
logger.Error("WriteReport await failed", "error", err, "errorType", fmt.Sprintf("%T", err))
return fmt.Errorf("failed to write report: %w", err)
}
logger.Info("Write report transaction succeeded", "txHash", common.BytesToHash(resp.TxHash).Hex())
return nil
}
// NOTE: This is a placeholder. You would need a full workflow with InitWorkflow,
// a trigger, and a callback that calls this `updateReserves` function.
func main() {
// wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```
## Configuring gas limits
By default, the SDK automatically estimates gas limits for your transactions. However, for complex transactions or to ensure sufficient gas, you can explicitly set a gas limit:
```go
// Create a gas configuration
gasConfig := &evm.GasConfig{
GasLimit: 1000000, // Adjust based on your contract's needs
}
// Pass it as the third argument to the write helper
writePromise := reserveManager.WriteReportFromUpdateReserves(runtime, updateData, gasConfig)
```
## Best practices
1. **Always check errors**: Both the write call and the `.Await()` can fail—handle both error paths
2. **Log transaction details**: Include transaction hashes in your logs for debugging and monitoring
3. **Validate response status**: Check the `TxStatus` field to ensure the transaction succeeded
4. **Override gas limits when needed**: For complex transactions, set explicit gas limits higher than the automatic estimates to avoid "out of gas" errors
5. **Monitor contract execution**: Check `ReceiverContractExecutionStatus` to ensure your consumer contract processed the data correctly
## Troubleshooting
**Transaction failed with "out of gas"**
- Increase the `GasLimit` in your `GasConfig`
- Check if your consumer contract's logic is more complex than expected
**"WriteReport await failed" error**
- Check that your consumer contract address is correct
- Verify you're using the correct chain selector
- Ensure your account has sufficient funds for gas
**Transaction succeeded but contract didn't update**
- Check the `ReceiverContractExecutionStatus` field
- Review your consumer contract's `onReport()` logic for validation failures
- Verify the struct fields match what your contract expects
## Learn more
- **[Onchain Write Overview](/cre/guides/workflow/using-evm-client/onchain-write)**: Understand all onchain write approaches
- **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)**: Create secure consumer contracts
- **[Generating Bindings](/cre/guides/workflow/using-evm-client/generating-bindings)**: Generate type-safe contract bindings
- **[EVM Client Reference](/cre/reference/sdk/evm-client)**: Complete API documentation
---
# Generating Reports: Single Values
Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values
Last Updated: 2025-11-04
This guide shows how to manually generate a report containing a single value (like `uint256`, `address`, or `bool`). This is useful when you need to send a simple value onchain but don't have a struct or binding helper available.
**Use this approach when:**
- You're sending a **single primitive value** (like `uint256`, `address`, `bool`, `bytes32`) to your consumer contract
- You don't have (or need) binding helpers for your contract
**Don't meet these requirements?** See the [Onchain Write](/cre/guides/workflow/using-evm-client/onchain-write#choosing-your-approach-which-guide-should-you-follow) page to find the right approach for your scenario.
## Prerequisites
- Familiarity with [Working with Solidity input types](/cre/guides/workflow/using-evm-client/onchain-write#working-with-solidity-input-types)
## What this guide covers
Manually generating a report for a single value involves two main steps:
1. **ABI-encode the value** into bytes using the `go-ethereum/accounts/abi` package
2. **Generate a cryptographically signed report** using `runtime.GenerateReport()`
The resulting report can then be:
- Submitted to the blockchain via `evm.Client.WriteReport()` (see [Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain))
- Sent to an HTTP endpoint via `http.Client` (see [Submitting Reports via HTTP](/cre/guides/workflow/using-http-client/submitting-reports-http))
## Step-by-step example
### 1. Create your value
Start with a Go value that you want to send. For example, a `*big.Int` for a Solidity `uint256`:
```go
import "math/big"
myValue := big.NewInt(123456789)
logger.Info("Value to send", "value", myValue.String())
```
### 2. ABI-encode the value
Use the `ethereum/go-ethereum/accounts/abi` package to encode your value as a Solidity type:
```go
import "github.com/ethereum/go-ethereum/accounts/abi"
// Create the Solidity type definition
uint256Type, err := abi.NewType("uint256", "", nil)
if err != nil {
return fmt.Errorf("failed to create type: %w", err)
}
// Create an arguments array with your type
args := abi.Arguments{{Type: uint256Type}}
// Pack (encode) your value
encodedValue, err := args.Pack(myValue)
if err != nil {
return fmt.Errorf("failed to encode value: %w", err)
}
```
### 3. Generate the report
Use `runtime.GenerateReport()` to create a signed, consensus-verified report from the encoded bytes:
```go
reportPromise := runtime.GenerateReport(&cre.ReportRequest{
EncodedPayload: encodedValue,
EncoderName: "evm",
SigningAlgo: "ecdsa",
HashingAlgo: "keccak256",
})
report, err := reportPromise.Await()
if err != nil {
return fmt.Errorf("failed to generate report: %w", err)
}
logger.Info("Successfully generated report")
```
**Field explanations:**
- `EncodedPayload`: The ABI-encoded bytes from step 2
- `EncoderName`: Always `"evm"` for Ethereum reports
- `SigningAlgo`: Always `"ecdsa"` for Ethereum
- `HashingAlgo`: Always `"keccak256"` for Ethereum
#### Understanding the report
The `runtime.GenerateReport()` function returns a `*cre.Report` object. This report contains:
- **Your ABI-encoded data** (the payload)
- **Cryptographic signatures** from the DON nodes
- **Metadata** about the workflow (ID, name, owner)
- **Consensus proof** that the data was agreed upon by the network
This report is designed to be passed directly to either:
- `evm.Client.WriteReport()` for onchain delivery
- `http.Client` for offchain delivery
### 4. Submit the report
Now that you have a generated report, choose where to send it:
- **[Submit it to the blockchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain)** via `evm.Client.WriteReport()`
- **[Send it via HTTP](/cre/guides/workflow/using-http-client/submitting-reports-http)** via `http.Client` for offchain delivery
## Complete working example
Here's a workflow that generates a report from a single `uint256` value:
```go
//go:build wasip1

package main
import (
"fmt"
"log/slog"
"math/big"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron"
"github.com/smartcontractkit/cre-sdk-go/cre"
"github.com/smartcontractkit/cre-sdk-go/cre/wasm"
)
type Config struct {
Schedule string `json:"schedule"`
}
type MyResult struct {
OriginalValue string
EncodedHex string
}
func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
return cre.Workflow[*Config]{
cre.Handler(cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger),
}, nil
}
func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
logger := runtime.Logger()
// Step 1: Create a value
myValue := big.NewInt(123456789)
logger.Info("Generated value", "value", myValue.String())
// Step 2: ABI-encode the value as uint256
uint256Type, err := abi.NewType("uint256", "", nil)
if err != nil {
return nil, fmt.Errorf("failed to create type: %w", err)
}
args := abi.Arguments{{Type: uint256Type}}
encodedValue, err := args.Pack(myValue)
if err != nil {
return nil, fmt.Errorf("failed to encode value: %w", err)
}
logger.Info("ABI-encoded value", "hex", fmt.Sprintf("0x%x", encodedValue))
// Step 3: Generate report
reportPromise := runtime.GenerateReport(&cre.ReportRequest{
EncodedPayload: encodedValue,
EncoderName: "evm",
SigningAlgo: "ecdsa",
HashingAlgo: "keccak256",
})
report, err := reportPromise.Await()
if err != nil {
return nil, fmt.Errorf("failed to generate report: %w", err)
}
logger.Info("Report generated successfully")
// At this point, you would typically submit the report:
// - To the blockchain: see "Submitting Reports Onchain" guide
// - Via HTTP: see "Submitting Reports via HTTP" guide
// For this example, we'll just return the encoded data for verification
_ = report // Report is ready to use
// Return results
return &MyResult{
OriginalValue: myValue.String(),
EncodedHex: fmt.Sprintf("0x%x", encodedValue),
}, nil
}
func main() {
wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```
## Best practices
1. **Always check errors**: Both encoding and report generation can fail—handle both error paths
2. **Use the correct Solidity type string**: Type mismatches will cause ABI encoding failures. Verify your type strings match your contract exactly
3. **Log the encoded data**: For debugging, log the hex-encoded bytes to verify your data is encoded correctly:
```go
logger.Info("ABI-encoded value", "hex", fmt.Sprintf("0x%x", encodedValue))
```
4. **Refer to go-ethereum documentation**: For complex types, consult the [go-ethereum ABI package documentation](https://pkg.go.dev/github.com/ethereum/go-ethereum/accounts/abi)
## Troubleshooting
**"failed to create type" error**
- Verify the type string exactly matches Solidity syntax.
- For arrays, use `uint256[]` for dynamic arrays or `uint256[3]` for fixed-size arrays.
- Check the [go-ethereum type documentation](https://pkg.go.dev/github.com/ethereum/go-ethereum/accounts/abi) for supported types.
**"failed to encode value" error**
- Ensure your Go value matches the Solidity type (e.g., `*big.Int` for `uint256`, `common.Address` for `address`). Find a list of mappings [here](/cre/guides/workflow/using-evm-client/onchain-read#solidity-to-go-type-mappings).
- For integers, use `big.NewInt()` for values that fit in `int64`, or `new(big.Int).SetString()` for larger values.
- Verify you're packing the value with `args.Pack(myValue)`, not passing it directly.
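The second bullet above can be illustrated with the standard library alone; `big.NewInt` accepts only `int64`, so larger values must be parsed from a decimal string:

```go
package main

import "math/big"

// big.NewInt only accepts int64, so it works for small values; anything
// larger (e.g. token amounts scaled by 1e18) must be parsed from a
// decimal string with SetString.
var (
	small    = big.NewInt(123456789)
	large, _ = new(big.Int).SetString("1000000000000000000000000", 10) // 1,000,000 tokens at 18 decimals
)
```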
**Report generation succeeds but onchain submission fails**
- This guide only covers report generation. See [Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain) for troubleshooting submission issues.
## Learn more
- **[Onchain Write Overview](/cre/guides/workflow/using-evm-client/onchain-write)**: Understand all onchain write approaches
- **[Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain)**: Submit your generated report to the blockchain
- **[Generating Reports: Structs](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-structs)**: Manually encode and generate reports for struct data
- **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)**: Create contracts that can receive your reports
- **[EVM Client Reference](/cre/reference/sdk/evm-client)**: Complete API documentation
---
# Generating Reports: Structs
Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-structs
Last Updated: 2025-11-04
This guide shows how to generate a report containing a struct with multiple fields. There are two approaches depending on whether you have generated bindings for your contract.
## Choosing your approach
Use this table to determine which method applies to your situation:
| Situation | Binding Helper Available? | Recommended Approach | Section |
| ------------------------------------------------------------------------------------------------ | ---------------------------------------------- | --------------------------------- | ----------------------------------------------- |
| Struct appears in a `public` or `external` function's signature (as a parameter or return value) | Yes, `Codec.EncodeStruct()` exists | Use the binding's encoding helper | [Using Binding Helpers](#using-binding-helpers) |
| Struct is NOT in the ABI | No helper available | Manual tuple encoding | [Manual Encoding](#manual-encoding) |
**Don't meet these requirements?** If you're sending a single value instead of a struct, see [Generating Reports: Single Values](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values). For other approaches, see the [Onchain Write](/cre/guides/workflow/using-evm-client/onchain-write#choosing-your-approach-which-guide-should-you-follow) hub page.
## Using binding helpers
If you have generated bindings for a contract that includes your struct in its ABI, the binding generator creates an `EncodeStruct()` method on the `Codec`. This is the simplest and recommended approach.
### When this applies
This method is available when your struct appears in a **public or external function's signature** (as a parameter or return value). These function types appear in the contract's ABI, which allows the binding generator to detect the struct and automatically create the encoding helper.
**Example contract:**
```solidity
contract MyContract {
struct PaymentData {
address recipient;
uint256 amount;
uint256 nonce;
}
// Because this public function uses PaymentData,
// the binding generator creates an encoding helper
function processPayment(PaymentData memory data) public {}
}
```
### Step 1: Identify the helper method
After running `cre generate-bindings`, your binding will include:
```go
type MyContractCodec interface {
// ... other methods
EncodePaymentDataStruct(in PaymentData) ([]byte, error)
// ...
}
```
### Step 2: Use the helper
```go
import "my-project/contracts/evm/src/generated/my_contract"
// Create your struct
paymentData := my_contract.PaymentData{
Recipient: common.HexToAddress("0x742d35Cc6634C0532925a3b844Bc454e4438f44e"),
Amount: big.NewInt(1000000000000000000),
Nonce: big.NewInt(42),
}
// Create contract instance to access the Codec
contract, err := my_contract.NewMyContract(evmClient, contractAddress, nil)
if err != nil {
return err
}
// Use the encoding helper
encodedStruct, err := contract.Codec.EncodePaymentDataStruct(paymentData)
if err != nil {
return fmt.Errorf("failed to encode struct: %w", err)
}
```
### Step 3: Generate the report
```go
reportPromise := runtime.GenerateReport(&cre.ReportRequest{
    EncodedPayload: encodedStruct,
    EncoderName:    "evm",
    SigningAlgo:    "ecdsa",
    HashingAlgo:    "keccak256",
})
report, err := reportPromise.Await()
if err != nil {
    return fmt.Errorf("failed to generate report: %w", err)
}
```
#### Understanding the report
The `runtime.GenerateReport()` function returns a `*cre.Report` object. This report contains:
- **Your ABI-encoded struct data** (the payload)
- **Cryptographic signatures** from the DON nodes
- **Metadata** about the workflow (ID, name, owner)
- **Consensus proof** that the data was agreed upon by the network
This report is designed to be passed directly to either:
- `evm.Client.WriteReport()` for onchain delivery
- `http.Client` for offchain delivery
The report can now be [submitted onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain) or [sent via HTTP](/cre/guides/workflow/using-http-client/submitting-reports-http).
## Manual encoding
If your struct is **not** in the contract's ABI, you won't have a binding helper and must manually create the tuple type and encode it.
### When to use this approach
- You're working with a custom struct that doesn't appear in any `public` or `external` function's signature
- You're encoding data for a third-party contract without bindings
- You need full control over the encoding process
### Step-by-step example
Let's manually encode a `PaymentData` struct:
```solidity
struct PaymentData {
address recipient;
uint256 amount;
uint256 nonce;
}
```
### 1. Define the Go struct
Create a Go struct that matches your Solidity struct:
```go
import (
    "math/big"

    "github.com/ethereum/go-ethereum/common"
)

type PaymentData struct {
    Recipient common.Address
    Amount    *big.Int
    Nonce     *big.Int
}
```
### 2. Create your struct instance
```go
paymentData := PaymentData{
    Recipient: common.HexToAddress("0x742d35Cc6634C0532925a3b844Bc9e7595f0bEb"),
    Amount:    big.NewInt(1000000000000000000), // 1 ETH in wei
    Nonce:     big.NewInt(42),
}
```
### 3. Create the tuple type
Define the struct's fields as a tuple using `abi.NewType()`:
```go
import "github.com/ethereum/go-ethereum/accounts/abi"

tupleType, err := abi.NewType(
    "tuple", "",
    []abi.ArgumentMarshaling{
        {Name: "recipient", Type: "address"},
        {Name: "amount", Type: "uint256"},
        {Name: "nonce", Type: "uint256"},
    },
)
if err != nil {
    return fmt.Errorf("failed to create tuple type: %w", err)
}
```
**Important:** The field names and types must match your Solidity struct exactly.
### 4. ABI-encode the struct
```go
args := abi.Arguments{
    {Name: "paymentData", Type: tupleType},
}
encodedStruct, err := args.Pack(paymentData)
if err != nil {
    return fmt.Errorf("failed to encode struct: %w", err)
}
```
### 5. Generate the report
```go
reportPromise := runtime.GenerateReport(&cre.ReportRequest{
    EncodedPayload: encodedStruct,
    EncoderName:    "evm",
    SigningAlgo:    "ecdsa",
    HashingAlgo:    "keccak256",
})
report, err := reportPromise.Await()
if err != nil {
    return fmt.Errorf("failed to generate report: %w", err)
}
logger.Info("Report generated successfully")
```
### Complete working example
Here's a full workflow that generates a report from a struct:
```go
//go:build wasip1

package main

import (
    "fmt"
    "log/slog"
    "math/big"

    "github.com/ethereum/go-ethereum/accounts/abi"
    "github.com/ethereum/go-ethereum/common"
    "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron"
    "github.com/smartcontractkit/cre-sdk-go/cre"
    "github.com/smartcontractkit/cre-sdk-go/cre/wasm"
)

type Config struct {
    Schedule string `json:"schedule"`
}

// Go struct matching Solidity struct
type PaymentData struct {
    Recipient common.Address
    Amount    *big.Int
    Nonce     *big.Int
}

type MyResult struct {
    EncodedHex string
}

func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
    return cre.Workflow[*Config]{
        cre.Handler(cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger),
    }, nil
}

func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
    logger := runtime.Logger()

    // Step 1: Create struct instance
    paymentData := PaymentData{
        Recipient: common.HexToAddress("0x742d35Cc6634C0532925a3b844Bc9e7595f0bEb"),
        Amount:    big.NewInt(1000000000000000000), // 1 ETH
        Nonce:     big.NewInt(42),
    }
    logger.Info("Created payment data", "recipient", paymentData.Recipient.Hex(), "amount", paymentData.Amount.String())

    // Step 2: Create tuple type matching Solidity struct
    tupleType, err := abi.NewType(
        "tuple", "",
        []abi.ArgumentMarshaling{
            {Name: "recipient", Type: "address"},
            {Name: "amount", Type: "uint256"},
            {Name: "nonce", Type: "uint256"},
        },
    )
    if err != nil {
        return nil, fmt.Errorf("failed to create tuple type: %w", err)
    }

    // Step 3: Encode the struct
    args := abi.Arguments{{Name: "paymentData", Type: tupleType}}
    encodedStruct, err := args.Pack(paymentData)
    if err != nil {
        return nil, fmt.Errorf("failed to encode struct: %w", err)
    }
    logger.Info("Encoded struct", "hex", fmt.Sprintf("0x%x", encodedStruct))

    // Step 4: Generate report
    reportPromise := runtime.GenerateReport(&cre.ReportRequest{
        EncodedPayload: encodedStruct,
        EncoderName:    "evm",
        SigningAlgo:    "ecdsa",
        HashingAlgo:    "keccak256",
    })
    report, err := reportPromise.Await()
    if err != nil {
        return nil, fmt.Errorf("failed to generate report: %w", err)
    }
    logger.Info("Report generated successfully")

    // At this point, you would typically submit the report:
    // - To the blockchain: see "Submitting Reports Onchain" guide
    // - Via HTTP: see "Submitting Reports via HTTP" guide
    // For this example, we'll just return the encoded data for verification
    _ = report // Report is ready to use

    return &MyResult{
        EncodedHex: fmt.Sprintf("0x%x", encodedStruct),
    }, nil
}

func main() {
    wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```
## Best practices
1. **Always check errors**: Both encoding and report generation can fail—handle both error paths
2. **Use binding helpers when available**: The `Codec.EncodeStruct()` helper is simpler and less error-prone than manual encoding
3. **Match Solidity types exactly**: For manual encoding, ensure your tuple definition matches your Solidity struct field-by-field, including order and types
4. **Log the encoded data**: For debugging, log the hex-encoded bytes to verify your struct is encoded correctly:
```go
logger.Info("ABI-encoded struct", "hex", fmt.Sprintf("0x%x", encodedStruct))
```
## Troubleshooting
**"failed to create tuple type" error**
- Verify the field types in your `ArgumentMarshaling` match Solidity exactly (e.g., `uint256`, not `uint` or `int`)
- Ensure field names match
- Check that nested types are properly defined if you have complex structs
**"failed to encode struct" error**
- Verify your Go struct fields match the Solidity struct in order and type
- Ensure you're using the correct Go types (e.g., `*big.Int` for `uint256`, `common.Address` for `address`). A list of mappings can be found [here](/cre/guides/workflow/using-evm-client/onchain-read#solidity-to-go-type-mappings).
- Check that all fields are populated (Go's zero values might not match what you expect)
**Binding helper not found**
- Confirm your struct is used in a `public` or `external` function parameter in your contract
- Verify you've run `cre generate-bindings` after updating your contract
- Check the generated binding file—the method should be named `Encode<StructName>Struct()` (e.g., `EncodePaymentDataStruct()`)
**Report generation succeeds but onchain submission fails**
- This guide only covers report generation. See [Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain) for troubleshooting submission issues
## Learn more
- **[Onchain Write Overview](/cre/guides/workflow/using-evm-client/onchain-write)**: Understand all onchain write approaches
- **[Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain)**: Submit your generated report to the blockchain
- **[Generating Reports: Single Values](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values)**: Generate reports for single primitive values
- **[Using WriteReportFrom Helpers](/cre/guides/workflow/using-evm-client/onchain-write/using-write-report-helpers)**: Use the all-in-one helper that handles encoding, generation, and submission
- **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)**: Create contracts that can receive your reports
- **[EVM Client Reference](/cre/reference/sdk/evm-client)**: Complete API documentation
---
# Submitting Reports Onchain
Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain
Last Updated: 2025-11-04
This guide shows how to manually submit a generated report to the blockchain using the low-level `evm.Client.WriteReport()` method.
**Use this approach when:**
- You've already generated a report using `runtime.GenerateReport()` (from [single value](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values) or [struct](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-structs) generation)
- You need fine-grained control over the submission process
- You don't have (or can't use) the `WriteReportFrom()` binding helper
## Prerequisites
You must have:
- A generated report ready to submit (from [single value](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values) or [struct](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-structs) generation)
- A [consumer contract](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts) address that implements the `IReceiver` interface
## Step-by-step example
### Step 1: Create an EVM client
```go
import "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"

evmClient := &evm.Client{
    ChainSelector: config.ChainSelector, // e.g., 16015286601757825753 for Sepolia
}
```
### Step 2: Prepare submission parameters
```go
import "github.com/ethereum/go-ethereum/common"

// Receiver contract address (must implement IReceiver interface)
receiverAddress := common.HexToAddress(config.ReceiverAddress)

// Optional gas configuration
gasConfig := &evm.GasConfig{
    GasLimit: config.GasLimit, // e.g., 1000000
}
```
### Step 3: Submit the report
```go
writePromise := evmClient.WriteReport(runtime, &evm.WriteCreReportRequest{
    Receiver:  receiverAddress.Bytes(),
    Report:    report, // The report from runtime.GenerateReport()
    GasConfig: gasConfig,
})
resp, err := writePromise.Await()
if err != nil {
    return fmt.Errorf("failed to write report: %w", err)
}

// Extract transaction hash
txHash := fmt.Sprintf("0x%x", resp.TxHash)
logger.Info("Report submitted successfully", "txHash", txHash)
```
### Understanding the response
The `WriteReportReply` struct provides comprehensive transaction details:
```go
type WriteReportReply struct {
    TxStatus                        TxStatus                         // SUCCESS, REVERTED, or FATAL
    ReceiverContractExecutionStatus *ReceiverContractExecutionStatus // Contract execution status
    TxHash                          []byte                           // Transaction hash
    TransactionFee                  *pb.BigInt                       // Fee paid in Wei
    ErrorMessage                    *string                          // Error message if failed
}
```
**Key fields to check:**
- **`TxStatus`**: Indicates whether the transaction succeeded, reverted, or had a fatal error
- **`TxHash`**: The transaction hash you can use to verify on a block explorer (e.g., Etherscan)
- **`TransactionFee`**: The total gas cost paid for the transaction in Wei
- **`ReceiverContractExecutionStatus`**: Whether your consumer contract's `onReport()` function executed successfully
- **`ErrorMessage`**: If the transaction failed, this field contains details about what went wrong
## Best practices
When submitting reports onchain, follow these practices to ensure reliability and observability:
1. **Log transaction details**: Always log the transaction hash for debugging and monitoring. This allows you to track your submission on block explorers and troubleshoot issues.
```go
txHash := fmt.Sprintf("0x%x", resp.TxHash)
logger.Info("Report submitted successfully", "txHash", txHash, "status", resp.TxStatus)
```
2. **Handle gas configuration**: Provide explicit gas limits for complex transactions to avoid out-of-gas errors. Adjust based on your contract's complexity and the data size.
```go
gasConfig := &evm.GasConfig{
    GasLimit: 500000, // Adjust based on your needs
}
```
3. **Monitor transaction status**: Always check the `TxStatus` field in the response to ensure your transaction was successful. Handle `REVERTED` and `FATAL` statuses appropriately.
```go
if resp.TxStatus != evm.TxStatusSuccess {
    return fmt.Errorf("transaction failed with status: %v, error: %s", resp.TxStatus, *resp.ErrorMessage)
}
```
## Complete example
Here's a full workflow that generates a report from a single value and submits it onchain:
```go
//go:build wasip1

package main

import (
    "fmt"
    "log/slog"
    "math/big"

    "github.com/ethereum/go-ethereum/accounts/abi"
    "github.com/ethereum/go-ethereum/common"
    "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
    "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron"
    "github.com/smartcontractkit/cre-sdk-go/cre"
    "github.com/smartcontractkit/cre-sdk-go/cre/wasm"
)

type Config struct {
    Schedule        string `json:"schedule"`
    ReceiverAddress string `json:"receiverAddress"`
    ChainSelector   uint64 `json:"chainSelector"`
    GasLimit        uint64 `json:"gasLimit"`
}

type MyResult struct {
    TxHash string
}

func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
    return cre.Workflow[*Config]{
        cre.Handler(cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger),
    }, nil
}

func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
    logger := runtime.Logger()

    // Step 1: Create and encode a value
    myValue := big.NewInt(123456789)
    logger.Info("Created value to encode", "value", myValue.String())

    uint256Type, err := abi.NewType("uint256", "", nil)
    if err != nil {
        return nil, fmt.Errorf("failed to create type: %w", err)
    }
    args := abi.Arguments{{Type: uint256Type}}
    encodedValue, err := args.Pack(myValue)
    if err != nil {
        return nil, fmt.Errorf("failed to encode value: %w", err)
    }
    logger.Info("Encoded value", "hex", fmt.Sprintf("0x%x", encodedValue))

    // Step 2: Generate report
    reportPromise := runtime.GenerateReport(&cre.ReportRequest{
        EncodedPayload: encodedValue,
        EncoderName:    "evm",
        SigningAlgo:    "ecdsa",
        HashingAlgo:    "keccak256",
    })
    report, err := reportPromise.Await()
    if err != nil {
        return nil, fmt.Errorf("failed to generate report: %w", err)
    }
    logger.Info("Report generated successfully")

    // Step 3: Create EVM client
    evmClient := &evm.Client{
        ChainSelector: config.ChainSelector,
    }

    // Step 4: Submit report onchain
    receiverAddress := common.HexToAddress(config.ReceiverAddress)
    gasConfig := &evm.GasConfig{GasLimit: config.GasLimit}

    writePromise := evmClient.WriteReport(runtime, &evm.WriteCreReportRequest{
        Receiver:  receiverAddress.Bytes(),
        Report:    report,
        GasConfig: gasConfig,
    })

    logger.Info("Submitting report onchain...")
    resp, err := writePromise.Await()
    if err != nil {
        return nil, fmt.Errorf("failed to submit report: %w", err)
    }

    // Check transaction status
    if resp.TxStatus != evm.TxStatusSuccess {
        errorMsg := "unknown error"
        if resp.ErrorMessage != nil {
            errorMsg = *resp.ErrorMessage
        }
        return nil, fmt.Errorf("transaction failed with status %v: %s", resp.TxStatus, errorMsg)
    }

    txHash := fmt.Sprintf("0x%x", resp.TxHash)
    logger.Info("Report submitted successfully", "txHash", txHash, "fee", resp.TransactionFee)

    return &MyResult{TxHash: txHash}, nil
}

func main() {
    wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```
**Configuration file** (`config.json`):
```json
{
  "schedule": "0 */1 * * * *",
  "receiverAddress": "0xYourReceiverContractAddress",
  "chainSelector": 16015286601757825753,
  "gasLimit": 1000000
}
```
## Broadcasting transactions
By default, `cre workflow simulate` performs a dry run without broadcasting transactions to the network. To execute real onchain transactions, use the `--broadcast` flag:
```bash
cre workflow simulate my-workflow --broadcast --target staging-settings
```
See the [CLI Reference](/cre/reference/cli#cre-workflow-simulate) for more details.
## Troubleshooting
**"failed to submit report" or transaction fails to broadcast**
- Verify your consumer contract address is correct and deployed on the target chain
- Check that you're using the correct chain selector for your target blockchain
- Verify network connectivity and RPC endpoint availability
**Transaction succeeds but `TxStatus` is `REVERTED`**
- Check the `ErrorMessage` field for details about why the transaction reverted
- Verify your consumer contract implements the `IReceiver` interface correctly (see [Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts))
- Review your consumer contract's `onReport()` validation logic—it may be rejecting the report
- Ensure the report data format matches what your consumer contract expects
**"out of gas" error or transaction runs out of gas**
- Increase the `GasLimit` in your `GasConfig`
- Check if your consumer contract's `onReport()` function has unexpectedly complex logic
- Review the transaction on a block explorer to see the actual gas used
**`ReceiverContractExecutionStatus` indicates failure**
- Your consumer contract's `onReport()` function executed but encountered an error
- Review the contract's event logs and error messages on a block explorer
- Check that your contract's validation logic (e.g., forwarder checks, workflow ID checks) is correctly configured
- Verify the decoded data in your contract matches the expected struct/value format
**"invalid receiver address" or address-related errors**
- Confirm the receiver address is a valid Ethereum address format
- Verify the contract is deployed at that address on the target chain
- Use `common.HexToAddress()` to properly convert address strings
## Learn more
- **[Onchain Write Overview](/cre/guides/workflow/using-evm-client/onchain-write)**: Understand all onchain write approaches
- **[Using WriteReportFrom Helpers](/cre/guides/workflow/using-evm-client/onchain-write/using-write-report-helpers)**: Use the simpler all-in-one helper for struct submission
- **[Generating Reports: Single Values](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values)**: Generate reports for single primitive values
- **[Generating Reports: Structs](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-structs)**: Generate reports for struct data
- **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)**: Create contracts that can receive reports
- **[EVM Client Reference](/cre/reference/sdk/evm-client)**: Complete API documentation
---
# API Interactions
Source: https://docs.chain.link/cre/guides/workflow/using-http-client
Last Updated: 2025-11-04
The CRE SDK provides an HTTP client that allows your workflows to interact with external APIs. Use it to fetch offchain data, send results to other systems, or trigger external events.
These guides will walk you through the common use cases for the HTTP client.
## Guides
- **[Making GET Requests](/cre/guides/workflow/using-http-client/get-request)**: Learn how to fetch data from a public API using a `GET` request.
- **[Making POST Requests](/cre/guides/workflow/using-http-client/post-request)**: Learn how to send data to an external endpoint using a `POST` request.
- **[Submitting Reports via HTTP](/cre/guides/workflow/using-http-client/submitting-reports-http)**: Learn how to submit cryptographically signed reports to an external HTTP endpoint.
---
# Managing Secrets
Source: https://docs.chain.link/cre/guides/workflow/secrets
Last Updated: 2025-11-04
Secrets are sensitive values like API keys, private keys, database URLs, and authentication tokens that your workflow needs to access at runtime. CRE provides different approaches for managing secrets depending on whether you're developing locally or running workflows in production.
This guide helps you choose the right approach for your use case.
## Which guide do I need?
Your workflow environment determines how you manage secrets:
### 1. Local development and simulation
**When to use:** You're testing and debugging workflows on your local machine using `cre workflow simulate`.
**How it works:**
- Secrets declared in `secrets.yaml`
- Values provided via `.env` file or environment variables
- Secrets injected locally by the CLI
- **No Vault DON required**
**→ Follow this guide:** [Using Secrets in Simulation](/cre/guides/workflow/secrets/using-secrets-simulation)
### 2. Deployed workflows
**When to use:** Your workflow is deployed to the Workflow DON.
**How it works:**
- Secrets stored in the **Vault DON** (decentralized secret storage)
- Managed via `cre secrets` CLI commands (`create`, `update`, `delete`, `list`)
- Your workflow retrieves secrets from the Vault at runtime
- **Vault DON required**
**→ Follow this guide:** [Using Secrets with Deployed Workflows](/cre/guides/workflow/secrets/using-secrets-deployed)
### 3. Secure secret management (Best practice)
**When to use:** Any environment where you want to avoid storing secrets in plaintext `.env` files.
**How it works:**
- Use **1Password CLI** to store and inject secrets
- Secrets never stored in plaintext on your filesystem
- Works for both simulation and production
**→ Follow this guide:** [Managing Secrets with 1Password CLI](/cre/guides/workflow/secrets/managing-secrets-1password)
## Quick comparison
| Aspect | Local Simulation | Deployed Workflows |
| ------------------ | ------------------------------------ | ---------------------------------- |
| **Environment** | Your local machine | Workflow DON |
| **Secret storage** | `.env` file or environment variables | Vault DON |
| **CLI commands** | None (automatic via simulation) | `cre secrets create/update/delete` |
| **Workflow code** | `runtime.GetSecret()` | `runtime.GetSecret()` (same API) |
| **Authentication** | Not required | `cre login` required |
| **Use case** | Development and testing | Deployed workflows |
## How secrets work in your workflow
Regardless of where secrets are stored (locally or in the Vault), your workflow code uses the same API to access them.
The CRE runtime automatically handles retrieving the secret from the appropriate source based on your environment.
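In Go, the call looks roughly like the sketch below. This is illustrative rather than definitive: the request type and field names shown here are assumptions, so check the SDK reference for the exact signatures.

```go
// Inside a workflow handler -- the same call works in simulation and when
// deployed; only the backing store (env vars vs. Vault DON) differs.
// NOTE: the request/response shapes here are illustrative.
secret, err := runtime.GetSecret(&cre.SecretRequest{Id: "API_KEY"}).Await()
if err != nil {
    return nil, fmt.Errorf("failed to fetch secret: %w", err)
}
apiKey := secret.Value // use the value; never log it
```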
## Getting started
1. **For local development:** Start with [Using Secrets in Simulation](/cre/guides/workflow/secrets/using-secrets-simulation) to learn the basics
2. **For deployed workflows:** Once your workflow is ready to deploy, follow [Using Secrets with Deployed Workflows](/cre/guides/workflow/secrets/using-secrets-deployed)
3. **For enhanced security:** Implement [1Password CLI integration](/cre/guides/workflow/secrets/managing-secrets-1password) to eliminate plaintext secrets
## Reference
For detailed CLI command documentation, see:
- [Secrets Management CLI Reference](/cre/reference/cli/secrets) — Complete documentation for `cre secrets` commands
---
# Using Secrets with Deployed Workflows
Source: https://docs.chain.link/cre/guides/workflow/secrets/using-secrets-deployed
Last Updated: 2025-11-04
When your workflow is deployed, it cannot access your local `.env` file or environment variables. Instead, secrets must be stored in the **Vault DON**—a decentralized, secure secret storage system that your deployed workflows can access at runtime.
This guide explains how to manage secrets for deployed workflows using the `cre secrets` CLI commands.
## Prerequisites
Before managing secrets for deployed workflows, ensure you have:
1. **CRE CLI installed**: See the [Installation Guide](/cre/getting-started/cli-installation/macos-linux)
2. **Authentication**: You must be logged in with `cre login`
3. **Owner address configured**: Your `workflow-owner-address` must be set in your project configuration
## How secrets work with deployed workflows
The process is similar to local development, but with a critical difference in where secrets are stored:
1. **Declare**: Define secret identifiers in a YAML file
2. **Store**: Push secrets to the Vault DON using `cre secrets create`
3. **Use**: Your deployed workflow accesses secrets from the Vault using `runtime.GetSecret()`
**Key difference from simulation:**
- **Local simulation**: Secrets read from your environment variables or `.env` file on your machine
- **Deployed workflows**: Secrets retrieved from Vault DON by the workflow
## Step-by-step guide
### Step 1: Create a secrets YAML file
Create a YAML file at the root of your project that declares the secrets you want to store.
**Example `production-secrets.yaml`:**
```yaml
secretsNames:
  API_KEY:
    - API_KEY_VALUE
  DATABASE_URL:
    - DATABASE_URL_VALUE
```
**Structure:**
- `secretsNames` — Top-level key containing all secrets
- Each secret has:
- **Key** (e.g., `API_KEY`) — The identifier your workflow code will use
- **Value** — An array containing the environment variable name that holds the actual value
### Step 2: Provide secret values as environment variables
Set the actual secret values as environment variables. These can be provided in two ways:
**Option A: Export in your shell**
```bash
export API_KEY_VALUE="your-actual-api-key"
export DATABASE_URL_VALUE="postgresql://user:pass@host:5432/db"
```
**Option B: Use a `.env` file**
Create a `.env` file (or add to your existing one):
```bash
# .env
API_KEY_VALUE=your-actual-api-key
DATABASE_URL_VALUE=postgresql://user:pass@host:5432/db
```
The `cre` CLI will automatically load variables from `.env` when you run the commands.
### Step 3: Upload secrets to the Vault DON
Use the `cre secrets create` command to upload your secrets to the Vault:
```bash
cre secrets create production-secrets.yaml --target production-settings
```
**What happens:**
1. The CLI reads your YAML file and environment variables
2. It registers the request onchain (for authorization)
3. It submits the secrets to the Vault DON
4. The secrets are stored securely and associated with your owner address
**Example output:**
```bash
{"level":"info","owner":"","digest":"041eb7a8...","time":"2025-10-22T00:14:56+02:00","message":"IsRequestAllowlisted query succeeded"}
{"level":"info","digest":"041eb7a8...","deadline":"2025-10-23T22:14:56Z","time":"2025-10-22T00:14:59+02:00","message":"AllowlistRequest submitted"}
Digest allowlisted; proceeding to gateway POST
Secret created: secret_id=API_KEY, owner=, namespace=main
Secret created: secret_id=DATABASE_URL, owner=, namespace=main
```
### Step 4: Use secrets in your workflow code
Your workflow code uses the same API to access secrets, whether running in local simulation or deployed to a workflow DON. The CRE runtime automatically retrieves secrets from the appropriate source.
**Important:**
- The secret identifier (`"API_KEY"`) must match what you declared in your YAML file
- Secrets are fetched at runtime from the Vault DON
- The namespace parameter is optional—defaults to `"main"` if omitted
- The same code works for both simulation (reads from `.env`) and production (reads from Vault)
### Step 5: Verify secrets are stored
You can list all secrets stored in the Vault for your owner address:
```bash
cre secrets list --target production-settings
```
**Example output:**
```
{"level":"info","owner":"","digest":"225d8b6f...","time":"2025-10-22T19:10:12-05:00","message":"IsRequestAllowlisted query succeeded"}
{"level":"info","digest":"225d8b6f...","deadline":"2025-10-25T00:10:12Z","time":"2025-10-22T19:10:16-05:00","message":"AllowlistRequest submitted"}
Digest allowlisted; proceeding to gateway POST: owner=, requestID=f9148fcb-3e4e-45bf-bbde-2124ddd577e4, digest=0x225d8b6f...
Secret identifier: secret_id=API_KEY, owner=, namespace=main
Secret identifier: secret_id=DATABASE_URL, owner=, namespace=main
```
## Managing secrets lifecycle
### Updating secrets
To update existing secrets, use the `cre secrets update` command:
```bash
# Update your environment variable with the new value
export API_KEY_VALUE="new-api-key-value"
# Update the secret in the Vault
cre secrets update production-secrets.yaml --target production-settings
```
**Example output:**
```
{"level":"info","owner":"","digest":"10854ac2...","time":"2025-10-22T19:12:32-05:00","message":"IsRequestAllowlisted query succeeded"}
{"level":"info","digest":"10854ac2...","deadline":"2025-10-25T00:12:32Z","time":"2025-10-22T19:12:40-05:00","message":"AllowlistRequest submitted"}
Digest allowlisted; proceeding to gateway POST: owner=, requestID=7433514f-4008-46dd-822a-633732b64ec9, digest=0x10854ac2...
Secret updated: secret_id=API_KEY, owner=, namespace=main
Secret updated: secret_id=DATABASE_URL, owner=, namespace=main
```
### Deleting secrets
To remove secrets from the Vault:
**Step 1: Create a deletion YAML file** (`secrets-to-delete.yaml`):
```yaml
secretsNames:
  - API_KEY
  - DATABASE_URL
```
**Step 2: Run the delete command:**
```bash
cre secrets delete secrets-to-delete.yaml --target production-settings
```
## About namespaces
When you look at CLI outputs, you'll notice secrets are organized by **namespaces**. A namespace is simply a way to group related secrets together.
## Using with multi-sig wallets
All `cre secrets` commands support the `--unsigned` flag for multi-sig wallet operations. This generates raw transaction data instead of sending transactions directly.
For complete multi-sig setup and usage, see [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets).
## Troubleshooting
### "Secret not found" error in deployed workflow
**Problem:** Your workflow throws a "secret not found" error when calling `runtime.GetSecret()`.
**Solution:**
1. Verify the secret exists: `cre secrets list --target production-settings`
2. Check that the secret ID in your code matches exactly
3. Recreate the secret if necessary: `cre secrets create ...`
### "Timeout expired" error
**Problem:** The CLI returns a timeout error when creating/updating secrets.
**Solution:**
The onchain authorization has expired. Re-run the command to create a new authorization.
### Different secrets for simulation vs. production
**Problem:** You want different secret values when simulating vs. running in production.
**Solution:**
- For simulation: Store values in your local `.env` file
- For production: Use `cre secrets create` with different values
- The secret IDs stay the same—only the values differ
## Learn more
- **[Secrets CLI Reference](/cre/reference/cli/secrets)** — Complete CLI command documentation
- **[Using Secrets in Simulation](/cre/guides/workflow/secrets/using-secrets-simulation)** — For local development
- **[Managing Secrets with 1Password](/cre/guides/workflow/secrets/managing-secrets-1password)** — Best practice for secure secret management
- **[Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets)** — For multi-sig secret operations
---
# Managing Secrets with 1Password CLI
Source: https://docs.chain.link/cre/guides/workflow/secrets/managing-secrets-1password
Last Updated: 2025-11-04
While using a `.env` file or exporting environment variables is convenient for initial testing, the recommended best practice for managing sensitive data like private keys and API tokens is to use a dedicated secrets manager.
This guide explains how to use **1Password CLI** to securely inject secrets into your workflow's environment at runtime, ensuring your secrets are never stored in plaintext on your filesystem.
## Prerequisites
Before you begin, ensure you have:
1. **Installed 1Password CLI:** Follow the [1Password CLI installation guide](https://developer.1password.com/docs/cli/get-started/).
2. **Stored Your Secret in 1Password:** Save the secret you need (e.g., your `CRE_ETH_PRIVATE_KEY`) in a vault that your 1Password CLI is configured to access.
## Step 1: Get the secret reference
A secret reference is a unique URI that points to a specific field in an item in your 1Password vault.
1. Open the 1Password desktop app.
2. Find the item containing your secret.
3. Right-click on the specific field (e.g., the `private key` field).
4. Select **Copy Secret Reference**.
Your clipboard will now contain a reference, which is a safe, non-secret string in the form `op://<vault>/<item>/<field>`.
## Step 2: Use the secret reference in your `.env` file
Open your project's `.env` file and replace the plaintext secret with the secret reference you just copied.
**Before:**
```bash
# .env
CRE_ETH_PRIVATE_KEY=0x123...abc
```
**After:**
```bash
# .env
CRE_ETH_PRIVATE_KEY="op://Private/Sepolia-Dev-Key/private key"
```
## Step 3: Run commands with `op run`
The `op run` command loads the secrets referenced in your `.env` file into the environment and then executes your command, so the secret values exist in memory only for the duration of the process.
### For local simulation
To run your workflow simulation, prefix your command with `op run --env-file ../.env --`:
```bash
op run --env-file ../.env -- cre workflow simulate my-workflow --target staging-settings
```
### For deployed workflows
To upload secrets to the Vault DON, use the same pattern:
```bash
op run --env-file .env -- cre secrets create production-secrets.yaml --target production-settings
```
**What's happening here?**
- `op run` scans the `.env` file for any `op://` references.
- It securely authenticates with 1Password to fetch the real secret values.
- It injects these values as environment variables into a new, temporary sub-shell.
- It then executes your `cre` command within that secure sub-shell.
- When the command finishes, the sub-shell is destroyed, and the secrets vanish from the environment.
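The sub-shell scoping is the same behavior you get from the standard `env` utility (a rough analogy; `op run` additionally resolves the `op://` references first):

```bash
# A variable set via `env` exists only inside the child process...
env MY_SECRET=resolved-value sh -c 'echo "inside: $MY_SECRET"'   # prints: inside: resolved-value
# ...and is already gone back in your own shell (assuming MY_SECRET was not set before):
echo "outside: ${MY_SECRET:-(unset)}"                            # prints: outside: (unset)
```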
By following this pattern, you can manage your secrets securely without ever exposing them in plaintext. For more advanced use cases, see the official [1Password CLI documentation](https://developer.1password.com/docs/cli/secret-references).
---
# Simulating Workflows
Source: https://docs.chain.link/cre/guides/operations/simulating-workflows
Last Updated: 2025-11-04
Workflow simulation is a local execution environment that **compiles your workflow to WebAssembly (WASM)** and runs it on **your machine**. It allows you to test and debug your workflow logic before deploying it. The simulator makes real calls to public testnets and live HTTP endpoints, giving you high confidence that your code will work as expected when deployed.
## When to use simulation
Use workflow simulation to:
- **Test workflow logic during development**: Validate that your code behaves correctly before deploying.
- **Debug errors in a controlled environment**: Catch and fix issues locally without deploying to a live network.
- **Test different trigger types**: Manually select and test how your workflow responds to [cron](/cre/guides/workflow/using-triggers/cron-trigger), [HTTP](/cre/guides/workflow/using-triggers/http-trigger), or [EVM log](/cre/guides/workflow/using-triggers/evm-log-trigger) triggers.
- **Verify onchain interactions**: Test [read](/cre/guides/workflow/using-evm-client/onchain-read) and [write](/cre/guides/workflow/using-evm-client/onchain-write/overview) operations against real testnets.
## Basic usage
The `cre workflow simulate` command compiles your workflow and executes it locally.
**Basic syntax:**
```bash
cre workflow simulate <workflow-folder-path> [flags]
```
**Example:**
```bash
cre workflow simulate my-workflow --target staging-settings
```
### What happens during simulation
1. **Compilation**: The CLI compiles your workflow to WebAssembly (WASM).
2. **Trigger selection**: You're prompted to select which trigger to test (cron, HTTP, or EVM log).
3. **Execution**: The workflow runs locally, making real calls to configured RPCs and HTTP endpoints.
4. **Output**: The simulator displays logs from your workflow and the final execution result.
### Prerequisites
Before running a simulation:
- **CRE account & authentication**: You must have a CRE account and be logged in with the CLI. See [Create your account](/cre/account/creating-account) and [Log in with the CLI](/cre/account/cli-login) for instructions.
- **CRE CLI installed**: You must have the CRE CLI installed on your machine. See [CLI Installation](/cre/getting-started/cli-installation) for instructions.
- **Project configuration**: You must run the command from your project root directory.
- **Valid `workflow.yaml`**: Your workflow directory must contain a `workflow.yaml` file with correct paths to your workflow code, config, and secrets (optional).
- **RPC URLs configured**: If your workflow interacts with blockchains, configure RPC endpoints in your `project.yaml` for the target you're using. Without this, the simulator cannot register the EVM capability and your workflow will fail. See [Project Configuration](/cre/reference/project-configuration) for setup instructions.
- **Private key**: Set `CRE_ETH_PRIVATE_KEY` in your `.env` file if your workflow performs onchain writes.
## Interactive vs non-interactive modes
### Interactive mode (default)
In interactive mode, the simulator prompts you to select a trigger and provide necessary inputs.
**Example:**
```bash
cre workflow simulate my-workflow --target staging-settings
```
**What you'll see:**
```
Workflow compiled
🚀 Workflow simulation ready. Please select a trigger:
1. cron-trigger@1.0.0 Trigger
2. http-trigger@1.0.0-alpha Trigger
3. evm:ChainSelector:16015286601757825753@1.0.0 LogTrigger
Enter your choice (1-3):
```
Select a trigger by entering its number, and follow any additional prompts for trigger-specific inputs.
### Non-interactive mode
Non-interactive mode allows you to run simulations without prompts, making it ideal for CI/CD pipelines or automated testing.
**Requirements:**
- Use the `--non-interactive` flag
- Specify `--trigger-index` (0-based index of the trigger to run)
- Provide trigger-specific flags as needed (see [Trigger-specific configuration](#trigger-specific-configuration))
**Example:**
```bash
cre workflow simulate my-workflow --non-interactive --trigger-index 0 --target staging-settings
```
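In CI, the same command can run as a pipeline step. A hypothetical GitHub Actions step (the repository secret name is illustrative):

```yaml
- name: Simulate CRE workflow
  run: cre workflow simulate my-workflow --non-interactive --trigger-index 0 --target staging-settings
  env:
    CRE_ETH_PRIVATE_KEY: ${{ secrets.CRE_ETH_PRIVATE_KEY }}
```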
## The `--broadcast` flag
By default, the simulator performs a **dry run** for onchain write operations. It prepares the transaction but does not broadcast it to the blockchain.
To actually broadcast transactions during simulation, use the `--broadcast` flag:
```bash
cre workflow simulate my-workflow --broadcast --target staging-settings
```
**Use case:** Use `--broadcast` when you want to test the complete end-to-end flow, including actual onchain state changes, on a testnet.
## Trigger-specific configuration
Different trigger types require different inputs for simulation.
### Cron trigger
[Cron triggers](/cre/guides/workflow/using-triggers/cron-trigger) do not require additional configuration. When selected, they execute immediately.
**Interactive example:**
```bash
cre workflow simulate my-workflow --target staging-settings
```
Select the cron trigger when prompted (if multiple triggers are defined).
**Non-interactive example:**
```bash
# Assuming the cron trigger is the first trigger defined in your workflow (index 0)
cre workflow simulate my-workflow --non-interactive --trigger-index 0 --target staging-settings
```
### HTTP trigger
[HTTP triggers](/cre/guides/workflow/using-triggers/http-trigger) require a JSON payload.
**Interactive mode:**
When you select an HTTP trigger, the simulator prompts you to provide JSON input. You can:
- Enter the JSON directly
- Provide a file path (e.g., `./payload.json`)
**Non-interactive mode:**
Use the `--http-payload` flag with:
- A JSON string: `--http-payload '{"key":"value"}'`
- A file path: `--http-payload @./payload.json` (with or without `@` prefix)
**Example:**
```bash
cre workflow simulate my-workflow --non-interactive --trigger-index 1 --http-payload @./http_trigger_payload.json --target staging-settings
```
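The payload file is plain JSON whose shape is defined by your workflow's own handler; a hypothetical `http_trigger_payload.json` might look like:

```json
{
  "orderId": "abc-123",
  "amount": 250.75
}
```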
### EVM log trigger
[EVM log triggers](/cre/guides/workflow/using-triggers/evm-log-trigger) require a transaction hash and event index to fetch a specific log event from the blockchain.
**Interactive mode:**
When you select an EVM log trigger, the simulator prompts you for:
1. **Transaction hash** (e.g., `0x420721d7d00130a03c5b525b2dbfd42550906ddb3075e8377f9bb5d1a5992f8e`)
2. **Event index** (0-based index of the log in the transaction, e.g., `0`)
The simulator fetches the log from the configured RPC and passes it to your workflow.
**Non-interactive mode:**
Use the `--evm-tx-hash` and `--evm-event-index` flags:
```bash
cre workflow simulate my-workflow \
--non-interactive \
--trigger-index 2 \
--evm-tx-hash 0x420721d7d00130a03c5b525b2dbfd42550906ddb3075e8377f9bb5d1a5992f8e \
--evm-event-index 0 \
--target staging-settings
```
## Additional flags
### `--engine-logs` (`-g`)
Enables detailed engine logging for debugging purposes. This shows internal logs from the workflow execution engine.
```bash
cre workflow simulate my-workflow --engine-logs --target staging-settings
```
### `--target` (`-T`)
Specifies which target environment to use from your configuration files. This determines which RPC URLs, settings, and secrets are loaded.
```bash
cre workflow simulate my-workflow --target staging-settings
```
### `--verbose` (`-v`)
Enables debug-level logging for the CLI itself (not the workflow). Useful for troubleshooting CLI issues.
```bash
cre workflow simulate my-workflow --verbose --target staging-settings
```
## Understanding the output
When you run a simulation, you'll see the following output:
### 1. Compilation confirmation
```
Workflow compiled
```
This indicates your workflow was successfully compiled to WASM.
### 2. Trigger selection menu (interactive mode only)
If your workflow has multiple triggers, you'll see a menu:
```
🚀 Workflow simulation ready. Please select a trigger:
1. cron-trigger@1.0.0 Trigger
2. http-trigger@1.0.0-alpha Trigger
3. evm:ChainSelector:16015286601757825753@1.0.0 LogTrigger
Enter your choice (1-3):
```
If your workflow has only one trigger, it will run automatically without this prompt.
### 3. User logs
Logs from your workflow code (e.g., `logger.Info()` calls) appear with timestamps:
```
2025-10-24T19:07:27Z [USER LOG] Running CronTrigger
2025-10-24T19:07:27Z [USER LOG] fetching por url https://api.example.com
2025-10-24T19:07:27Z [USER LOG] ReserveInfo { "totalReserve": 494515082.75 }
```
### 4. Final execution result
The simulator displays the value returned by your workflow:
```
Workflow Simulation Result:
{
"result": 47
}
```
### 5. Transaction details (if your workflow writes onchain)
If your workflow performs onchain writes, the simulator will show transaction information:
**Without `--broadcast` (dry run):**
The transaction is prepared but not sent. You'll see an all-zero value (`0x0000...`) in place of the transaction hash:
```
2025-10-24T23:01:50Z [USER LOG] Write report transaction succeeded: 0x0000000000000000000000000000000000000000000000000000000000000000
```
**With `--broadcast`:**
The transaction is actually sent to the blockchain. You'll see a real transaction hash:
```
2025-10-24T17:55:48Z [USER LOG] Write report transaction succeeded: 0x1013abc0b6f345fad15b19a56cabbbaab2a2aa94f81eb3a709058adf18a4f23f
```
## Limitations
While simulation provides high confidence in your workflow's behavior, it has some limitations:
- **Single-node execution**: Simulation runs on a single node (your local machine) rather than across a DON. There is no actual consensus or quorum; consensus is simulated.
- **Manual trigger execution**: Time-based triggers (cron) execute immediately when selected, not on a schedule. You must manually initiate each simulation run.
- **Simplified environment**: The simulation environment mimics production but is not identical. Some edge cases or network conditions may only appear in a deployed environment.
Despite these limitations, simulation is an essential tool for catching bugs, validating logic, and testing integrations before deploying to production.
## Next steps
- **Deploy your workflow**: Once you're confident your workflow works correctly, see [Deploying Workflows](/cre/guides/operations/deploying-workflows).
- **CLI reference**: For a complete list of flags and options, see the [CLI Workflow Commands reference](/cre/reference/cli/workflow/).
---
# Deploying Workflows
Source: https://docs.chain.link/cre/guides/operations/deploying-workflows
Last Updated: 2025-11-04
When you deploy a workflow, you take your locally tested code and register it with the onchain Workflow Registry contract. This makes your workflow "live" so it can activate and respond to triggers across a [Decentralized Oracle Network (DON)](/cre/key-terms#decentralized-oracle-network-don).
## Prerequisites
Before you can deploy a workflow, you must have:
- **Early Access approval**: Workflow deployment is currently in Early Access. You must request and be granted access before you can deploy.
- **[Logged in](/cre/reference/cli/authentication#cre-login)**: Authenticated with the platform by running `cre login`. To check if you are logged in, run `cre whoami`.
- **[Linked your key](/cre/reference/cli/account#cre-account-link-key)**: Linked your EOA or multi-sig wallet to your account by running `cre account link-key`.
- **A funded wallet**: The account you are deploying from must be funded with ETH on Ethereum Mainnet to pay the gas fees for the onchain registration transaction to the Workflow Registry contract.
## The deployment process
The `cre workflow deploy` command handles the entire end-to-end process for you:
1. **Compiles** your workflow to a WASM binary.
2. **Uploads** the compiled binary and any associated configuration files (like your config file or `secrets.yaml`) to the CRE Storage Service.
3. **Registers** the workflow onchain by submitting a transaction to the Workflow Registry contract. This transaction contains the metadata for your workflow, including its name, owner, and the URL of its artifacts in the storage service.
### Step 1: Ensure your configuration is correct
Before deploying, ensure your `workflow.yaml` file is correctly configured. The `workflow-name` is required under the `user-workflow` section for your target environment.
If you are deploying from a multi-sig wallet, specify your multi-sig address in the `workflow-owner-address` field. If you are deploying from a standard EOA, you can leave this field unchanged—the owner will be automatically derived from the `CRE_ETH_PRIVATE_KEY` in your `.env` file.
For more details on configuration, see the [Project Configuration](/cre/reference/project-configuration) reference.
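For illustration, a minimal target section of `workflow.yaml` might look like this (values are placeholders; see the Project Configuration reference for the authoritative schema):

```yaml
production-settings:
  user-workflow:
    workflow-name: "my-workflow"
    # workflow-owner-address: "<your-multisig-address>"  # only needed when deploying from a multi-sig
```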
### Step 2: Run the deploy command
**From your project root directory**, run the `deploy` command with the path to your workflow folder.
```bash
cre workflow deploy <workflow-folder-path> [flags]
```
Example command to target the `production-settings` environment:
```bash
cre workflow deploy my-workflow --target production-settings
```
**Available flags:**
| Flag | Description |
| ---------------- | --------------------------------------------------------------------------------------- |
| `--target` | Sets the target environment from your configuration files (e.g., `production-settings`) |
| `--auto-start` | Activate the workflow immediately after deployment (default: `true`) |
| `--output` | The output file for the compiled WASM binary (default: `"./binary.wasm.br.b64"`) |
| `--unsigned` | Return the raw transaction instead of broadcasting it to the network |
| `--yes` | Skip confirmation prompts and proceed with the operation |
| `--project-root` | Path to the project root directory |
| `--env` | Path to your `.env` file (default: `".env"`) |
| `--verbose` | Enable verbose logging to print `DEBUG` level logs |
### Step 3: Monitor the output
The CLI will provide detailed logs of the deployment process, including the compilation, upload to the CRE Storage Service, and the final onchain transaction.
```bash
> cre workflow deploy my-workflow --target production-settings
Deploying Workflow : my-workflow
Target : production-settings
Owner Address :
Compiling workflow...
Workflow compiled successfully
Verifying ownership...
Workflow owner link status: owner=, linked=true
Key ownership verified
Uploading files...
✔ Loaded binary from: ./binary.wasm.br.b64
✔ Uploaded binary to: https://storage.cre.example.com/artifacts//binary.wasm
✔ Loaded config from: ./config.json
✔ Uploaded config to: https://storage.cre.example.com/artifacts//config
Preparing deployment transaction...
Preparing transaction for workflowID:
Transaction details:
Chain Name: ethereum-testnet-sepolia
To: 0xF3f93fc4dc177748E7557568b5354cB009e3818a
Function: UpsertWorkflow
Inputs:
[0]: my-workflow
[1]: my-workflow
[2]:
[3]: 0
[4]: zone-a
[5]: https://storage.cre.example.com/artifacts//binary.wasm
[6]: https://storage.cre.example.com/artifacts//config
[7]: 0x
[8]: false
Data: b377bfc50000000000000000000000000000000000...
Estimated Cost:
Gas Price: 0.00100001 gwei
Total Cost: 0.00000079 ETH
? Do you want to execute this transaction?:
▸ Yes
No
Transaction confirmed
View on explorer: https://sepolia.etherscan.io/tx/0x58599f6...d916b
[OK] Workflow deployed successfully
Details:
Contract address: 0xF3f93fc4dc177748E7557568b5354cB009e3818a
Transaction hash: 0x58599f6...d916b
Workflow Name: my-workflow
Workflow ID:
Binary URL: https://storage.cre.example.com/artifacts//binary.wasm
Config URL: https://storage.cre.example.com/artifacts//config
```
## Verifying your deployment
After a successful deployment, you can verify that your workflow was registered correctly by checking the Workflow Registry contract on a block explorer. The CLI output will provide the transaction hash for the registration.
The `WorkflowRegistry` contract for the `production-settings` environment is deployed on **Ethereum Sepolia** at the address `0xF3f93fc4dc177748E7557568b5354cB009e3818a`.
## Using multi-sig wallets
The `deploy` command supports multi-sig wallets through the `--unsigned` flag. When using this flag, the CLI generates raw transaction data that you can submit through your multi-sig wallet interface instead of sending the transaction directly.
For complete setup instructions, configuration requirements, and step-by-step guidance, see [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets).
## Next steps
- [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows): Learn how to control workflow execution
- [Monitoring Workflows](/cre/guides/operations/monitoring-workflows): Track your workflow's execution and performance
- [Updating Deployed Workflows](/cre/guides/operations/updating-deployed-workflows): Deploy new versions of your workflow
---
# Activating & Pausing Workflows
Source: https://docs.chain.link/cre/guides/operations/activating-pausing-workflows
Last Updated: 2025-11-04
After deploying a workflow, you can control its operational state using the `cre workflow activate` and `cre workflow pause` commands. These commands modify the workflow's status in the Workflow Registry contract, determining whether it can respond to triggers.
**Workflow states:**
- **Active** — The workflow can respond to its configured triggers and execute
- **Paused** — The workflow cannot respond to triggers and will not execute
## Prerequisites
Before activating or pausing workflows, ensure you have:
- **A [deployed workflow](/cre/guides/operations/deploying-workflows)**: You must have a workflow that has been successfully deployed to the Workflow Registry.
- **Workflow ownership**: You must be the owner of the workflow (the account that originally deployed it). Only the workflow owner can activate or pause it.
- **Local workflow folder**: You must run these commands from your project directory. The CLI reads the workflow name and configuration from your `workflow.yaml` file to identify which workflow to activate or pause.
- **[Logged in](/cre/reference/cli/authentication#cre-login)**: Authenticated with the platform by running `cre login`. To check your authentication status, run `cre whoami`.
- **A funded wallet**: The account you are using must be funded with ETH on Ethereum Mainnet to pay the gas fees for the onchain transaction to the Workflow Registry contract.
## Activating a workflow
The `cre workflow activate` command changes a paused workflow's status to active, allowing its triggers to fire and the workflow to execute.
### When to activate
You typically use `activate` in these scenarios:
- **After pausing a workflow**: To resume execution after maintenance or debugging
- **Manual deployment**: When you deployed with `--auto-start=false`
### Usage
Run the command from your project root:
```bash
cre workflow activate my-workflow --target production-settings
```
The CLI identifies which workflow to activate based on:
- `workflow-name` from your `workflow.yaml` file
- `workflow-owner-address` (either from `workflow.yaml` or derived from your private key in `.env`)
### What happens during activation
1. The CLI fetches the workflow matching your workflow name and owner address
2. It validates that the workflow is currently paused
3. If valid, it sends an onchain transaction to change the status to active
### Example output
```bash
> cre workflow activate my-workflow --target production-settings
Activating Workflow : my-workflow
Target : production-settings
Owner Address :
Activating workflow: Name=my-workflow, Owner=, WorkflowID=
Transaction details:
Chain Name: ethereum-testnet-sepolia
To: 0xF3f93fc4dc177748E7557568b5354cB009e3818a
Function: ActivateWorkflow
Inputs:
[0]:
[1]: zone-a
Data: 530979d6000000000000000000000000...
Estimated Cost:
Gas Price: 0.00100000 gwei
Total Cost: 0.00000038 ETH
? Do you want to execute this transaction?:
▸ Yes
No
Transaction confirmed: 0xd5b94bd...87498b
View on explorer: https://sepolia.etherscan.io/tx/0xd5b94bd...87498b
[OK] Workflow activated successfully
Contract address: 0xF3f93fc4dc177748E7557568b5354cB009e3818a
Transaction hash: 0xd5b94bd...87498b
Workflow Name: my-workflow
Workflow ID:
```
## Pausing a workflow
The `cre workflow pause` command changes an active workflow's status to paused, preventing its triggers from firing and stopping execution.
### When to pause
Pause workflows when you need to:
- **Perform maintenance**: Temporarily stop execution while updating dependencies or configuration
- **Debug issues**: Halt execution to investigate errors or unexpected behavior
- **Temporarily halt operations**: Stop workflow execution without permanently deleting it
### Usage
Run the command from your project root:
```bash
cre workflow pause my-workflow --target production-settings
```
### What happens during pausing
1. The CLI fetches the workflow matching your workflow name and owner address
2. It validates that the workflow is currently active
3. If valid, it sends an onchain transaction to change the status to paused
### Example output
```bash
> cre workflow pause my-workflow --target production-settings
Pausing Workflow : my-workflow
Target : production-settings
Owner Address :
Fetching workflows to pause... Name=my-workflow, Owner=
Processing batch pause... count=1
Transaction details:
Chain Name: ethereum-testnet-sepolia
To: 0xF3f93fc4dc177748E7557568b5354cB009e3818a
Function: BatchPauseWorkflows
Inputs:
[0]: []
Data: d8b80738000000000000000000000000...
Estimated Cost:
Gas Price: 0.00100000 gwei
Total Cost: 0.00000021 ETH
? Do you want to execute this transaction?:
▸ Yes
No
Transaction confirmed
View on explorer: https://sepolia.etherscan.io/tx/0x2e09a66...db66e
[OK] Workflows paused successfully
Details:
Contract address: 0xF3f93fc4dc177748E7557568b5354cB009e3818a
Transaction hash: 0x2e09a66...db66e
Workflow Name: my-workflow
Workflow ID:
```
## Using multi-sig wallets
Both `activate` and `pause` commands support multi-sig wallets through the `--unsigned` flag. When using this flag, the CLI generates raw transaction data that you can submit through your multi-sig wallet interface instead of sending the transaction directly.
For complete setup instructions, configuration requirements, and step-by-step guidance, see [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets).
## Learn more
- [Deploying Workflows](/cre/guides/operations/deploying-workflows) — Learn how to deploy workflows to the registry
- [Updating Deployed Workflows](/cre/guides/operations/updating-deployed-workflows) — Update your workflow code and configuration
- [Deleting Workflows](/cre/guides/operations/deleting-workflows) — Permanently remove workflows from the registry
- [CLI Reference: Workflow Commands](/cre/reference/cli/workflow) — Complete command reference with all flags and options
---
# Updating Deployed Workflows
Source: https://docs.chain.link/cre/guides/operations/updating-deployed-workflows
Last Updated: 2025-11-04
When you update a deployed workflow, you redeploy it with the same workflow name. The new deployment replaces the previous version in the Workflow Registry contract. Currently, CRE does not maintain version history—each deployment overwrites the previous one.
## Prerequisites
Before updating a deployed workflow, ensure you have:
- **A [deployed workflow](/cre/guides/operations/deploying-workflows)**: The workflow must already exist in the Workflow Registry.
- **Workflow ownership**: You must be the owner of the workflow (the account that originally deployed it). Only the workflow owner can update it.
- **Local workflow folder**: You must run this command from your project directory. The CLI reads the workflow name and configuration from your `workflow.yaml` file to identify which workflow to update.
- **[Logged in](/cre/reference/cli/authentication#cre-login)**: Authenticated with the platform by running `cre login`. To check if you are logged in, run `cre whoami`.
- **A funded wallet**: The account must be funded with ETH on Ethereum Mainnet to pay the gas fees for the onchain transaction to the Workflow Registry contract.
## Updating a workflow
To update a workflow, simply redeploy it using the same workflow name:
```bash
cre workflow deploy my-workflow --target production-settings
```
### What happens during an update
1. **Compilation**: Your updated workflow code is compiled to WASM
2. **Upload**: The new binary and configuration files are uploaded to the CRE Storage Service
3. **Registration**: A new registration transaction is sent to the Workflow Registry contract
4. **Replacement**: The previous version is replaced with the new deployment
### Auto-start behavior
By default, `cre workflow deploy` uses `--auto-start=true`, which means the updated workflow is automatically activated after deployment. If your workflow was previously paused and you want it to remain paused after the update, use `--auto-start=false`:
```bash
cre workflow deploy my-workflow --auto-start=false --target production-settings
```
## Best practices for updates
1. **Test locally first**: Always test your changes using `cre workflow simulate` before deploying to production
2. **Pause before updating** (optional): If you want to ensure no triggers fire during the update, pause the workflow first using `cre workflow pause`
3. **Monitor after deployment**: Check that the updated workflow executes correctly after deployment
4. **Keep track of changes**: Maintain your own version control (e.g., Git tags) to track workflow versions
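Point 4 can be as lightweight as tagging each deployed revision in Git (the tag naming scheme here is illustrative):

```bash
# Tag the commit that was just deployed
git tag -a my-workflow/v3 -m "Deployed to production-settings on 2025-11-04"
# List all recorded deployments of this workflow
git tag -l 'my-workflow/*'
# Share the tag with your team: git push origin my-workflow/v3
```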
## Using multi-sig wallets
The `deploy` command supports multi-sig wallets through the `--unsigned` flag. When using this flag, the CLI generates raw transaction data that you can submit through your multi-sig wallet interface instead of sending the transaction directly.
For complete setup instructions, configuration requirements, and step-by-step guidance, see [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets).
## Learn more
- [Deploying Workflows](/cre/guides/operations/deploying-workflows) — Learn about the initial deployment process
- [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows) — Control workflow execution state
- [Deleting Workflows](/cre/guides/operations/deleting-workflows) — Remove workflows from the registry
---
# Deleting Workflows
Source: https://docs.chain.link/cre/guides/operations/deleting-workflows
Last Updated: 2025-11-04
Deleting a workflow permanently removes it from the Workflow Registry contract. This action cannot be undone, and the workflow will no longer be able to respond to triggers.
## Prerequisites
Before deleting a workflow, ensure you have:
- **A [deployed workflow](/cre/guides/operations/deploying-workflows)**: The workflow must exist in the Workflow Registry.
- **Workflow ownership**: You must be the owner of the workflow (the account that originally deployed it). Only the workflow owner can delete it.
- **Local workflow folder**: You must run this command from your project directory. The CLI reads the workflow name and configuration from your `workflow.yaml` file to identify which workflow to delete.
- **[Logged in](/cre/reference/cli/authentication#cre-login)**: Authenticated with the platform by running `cre login`. To check if you are logged in, run `cre whoami`.
- **A funded wallet**: The account must be funded with ETH on Ethereum Mainnet to pay the gas fees for the onchain transaction to the Workflow Registry contract.
## Deleting a workflow
To delete a workflow, run the `cre workflow delete` command from your project root:
```bash
cre workflow delete my-workflow --target production-settings
```
The CLI identifies which workflow to delete based on:
- `workflow-name` from your `workflow.yaml` file
- `workflow-owner-address` (either from `workflow.yaml` or derived from your private key in `.env`)
### What happens during deletion
1. The CLI fetches all workflows matching your workflow name and owner address
2. It displays details about the workflow(s) to be deleted
3. It prompts you to confirm by typing the workflow name
4. Once confirmed, it sends an onchain transaction to delete the workflow from the Workflow Registry
### Example output
```bash
> cre workflow delete my-workflow --target production-settings
Deleting Workflow : my-workflow
Target : production-settings
Owner Address :
Found 1 workflow(s) to delete for name: my-workflow
1. Workflow
ID: 00f0379a2df46ad2c5af070f5871da89f589f8bff8af76ff6a44bb59bec88bf4
Owner:
DON Family: zone-a
Tag: my-workflow
Binary URL: https://storage.cre.example.com/artifacts/00f0379a.../binary.wasm
Workflow Status: PAUSED
Are you sure you want to delete the workflow 'my-workflow'?
This action cannot be undone.
To confirm, type the workflow name: my-workflow: my-workflow
Deleting 1 workflow(s)...
Transaction details:
Chain Name: ethereum-testnet-sepolia
To: 0xF3f93fc4dc177748E7557568b5354cB009e3818a
Function: DeleteWorkflow
Inputs:
[0]: 0x00f0379a2df46ad2c5af070f5871da89f589f8bff8af76ff6a44bb59bec88bf4
Data: 695e134000f0379a2df46ad2c5af070f5871da89f589f8bff8af76ff6a44bb59bec88bf4
Estimated Cost:
Gas Price: 0.00100001 gwei
Total Cost: 0.00000015 ETH
? Do you want to execute this transaction?:
▸ Yes
No
Transaction confirmed
View on explorer: https://sepolia.etherscan.io/tx/0xf059c32...fec7d
[OK] Deleted workflow ID: 00f0379a2df46ad2c5af070f5871da89f589f8bff8af76ff6a44bb59bec88bf4
Workflows deleted successfully.
```
### Skipping the confirmation prompt
If you want to skip the interactive confirmation prompt (e.g., in automated scripts), use the `--yes` flag:
```bash
cre workflow delete my-workflow --yes --target production-settings
```
## Using multi-sig wallets
The `delete` command supports multi-sig wallets through the `--unsigned` flag. When using this flag, the CLI generates raw transaction data that you can submit through your multi-sig wallet interface instead of sending the transaction directly.
For complete setup instructions, configuration requirements, and step-by-step guidance, see [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets).
## Learn more
- [Deploying Workflows](/cre/guides/operations/deploying-workflows) — Deploy new workflows to the registry
- [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows) — Control workflow execution state
- [Updating Deployed Workflows](/cre/guides/operations/updating-deployed-workflows) — Update existing workflows
---
# Using Multi-sig Wallets
Source: https://docs.chain.link/cre/guides/operations/using-multisig-wallets
Last Updated: 2025-11-04
This guide explains how to use multi-sig wallets with CRE CLI commands for deploying, activating, pausing, updating, and deleting workflows.
## How multi-sig works with CRE CLI
When managing workflows with a multi-sig wallet, the CRE CLI does not sign and send transactions itself. Instead, it generates raw transaction data that you sign and submit through your multi-sig wallet interface.
**The workflow:**
1. Run a CRE CLI command with the `--unsigned` flag
2. The CLI generates the raw transaction data
3. You submit this data to your multi-sig wallet interface
4. Signers approve the transaction
5. Once enough signatures are collected, execute the transaction onchain
## Prerequisites
Before using multi-sig wallets with CRE CLI commands, ensure you have:
### 1. Authenticate with the CLI
You must be logged in to use any CRE CLI commands. Run `cre whoami` in your terminal to verify you're logged in, or run `cre login` to authenticate.
See [Logging in with the CLI](/cre/account/cli-login) for detailed instructions.
### 2. Configure your multi-sig address
Add your multi-sig wallet address to your `project.yaml` or `workflow.yaml` under the target you're using:
```yaml
production-settings:
  user-workflow:
    workflow-owner-address: "" # Your multi-sig wallet address
    workflow-name: "my-workflow"
```
### 3. Keep your private key in `.env`
Even when using `--unsigned`, the CLI still requires `CRE_ETH_PRIVATE_KEY` in your `.env` file:
```bash
CRE_ETH_PRIVATE_KEY=your-private-key-here
```
## Using the `--unsigned` flag
Add the `--unsigned` flag to any workflow management command:
- **Deploy**:
```bash
cre workflow deploy my-workflow --unsigned --target production-settings
```
- **Activate**:
```bash
cre workflow activate my-workflow --unsigned --target production-settings
```
- **Pause**:
```bash
cre workflow pause my-workflow --unsigned --target production-settings
```
- **Delete**:
```bash
cre workflow delete my-workflow --unsigned --target production-settings
```
## Example output
When you run a command with `--unsigned`, the CLI generates transaction data instead of sending the transaction:
```bash
> cre workflow activate my-workflow --unsigned --target production-settings
Activating Workflow : my-workflow
Target : production-settings
Owner Address :
Activating workflow: Name=my-workflow, Owner=, WorkflowID=
--unsigned flag detected: transaction not sent on-chain.
Generating call data for offline signing and submission in your preferred tool:
MSIG workflow activation transaction prepared!
To Activate my-workflow with workflowID:
Next steps:
1. Submit the following transaction on the target chain:
Chain: ethereum-testnet-sepolia
Contract Address: 0xF3f93fc4dc177748E7557568b5354cB009e3818a
2. Use the following transaction data:
530979d600f0379a2df46ad2c5af070f5871da89f589f8bff8af76ff6a44bb59bec88bf4000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000067a6f6e652d610000000000000000000000000000000000000000000000000000
```
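Before pasting calldata into a multi-sig, it can be worth decoding it to confirm it targets the workflow you expect. The helper below is not part of the CRE CLI; it is a hypothetical sketch that assumes the calldata layout shown in the example above (a 4-byte function selector, a `bytes32` workflow ID, then an ABI-encoded string for the DON family):

```typescript
// Hypothetical decoder for the activation calldata printed with --unsigned.
// Assumes: 4-byte function selector, one bytes32 argument (workflow ID),
// and one ABI-encoded dynamic string (the DON family, e.g. "zone-a").
function decodeActivateCalldata(data: string): {
  selector: string;
  workflowId: string;
  donFamily: string;
} {
  const hex = data.startsWith("0x") ? data.slice(2) : data;
  const selector = hex.slice(0, 8);
  const args = hex.slice(8); // ABI-encoded arguments, 32-byte words

  const workflowId = args.slice(0, 64);             // word 0: bytes32 workflow ID
  const offset = parseInt(args.slice(64, 128), 16); // word 1: byte offset of the string
  const length = parseInt(args.slice(offset * 2, offset * 2 + 64), 16);
  const strHex = args.slice(offset * 2 + 64, offset * 2 + 64 + length * 2);

  let donFamily = "";
  for (let i = 0; i < strHex.length; i += 2) {
    donFamily += String.fromCharCode(parseInt(strHex.slice(i, i + 2), 16));
  }
  return { selector, workflowId, donFamily };
}

// Decoding the example calldata from the output above:
const decoded = decodeActivateCalldata(
  "530979d600f0379a2df46ad2c5af070f5871da89f589f8bff8af76ff6a44bb59bec88bf4" +
    "0000000000000000000000000000000000000000000000000000000000000040" +
    "0000000000000000000000000000000000000000000000000000000000000006" +
    "7a6f6e652d610000000000000000000000000000000000000000000000000000"
);
console.log(decoded.donFamily); // → zone-a
```

If the decoded workflow ID or DON family does not match what you intended, stop and regenerate the calldata rather than submitting it for signatures.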
## Submitting the transaction to your multi-sig wallet
Once you have the transaction data, follow these steps:
### 1. Open your multi-sig wallet interface
Access your multi-sig wallet (e.g., Gnosis Safe) and navigate to the transaction creation page.
### 2. Create a new transaction
Enter the transaction details from the CLI output:
- **To address**: Use the contract address from the output (e.g., `0xF3f93fc4dc177748E7557568b5354cB009e3818a`)
- **Value**: `0` (no ETH is being sent)
- **Data**: Paste the full transaction data from the CLI output
### 3. Submit for signatures
Submit the transaction for approval. The transaction will require the configured number of signatures from your multi-sig signers.
### 4. Execute the transaction
Once enough signatures are collected, execute the transaction onchain. The multi-sig wallet will broadcast the signed transaction to the blockchain.
## Troubleshooting
### Error: "WorkflowOwner must be a valid Ethereum address"
This error occurs when:
- The `workflow-owner-address` is not set in your configuration
- The `workflow-owner-address` contains a placeholder value like `"(optional) Multi-signature contract address"`
**Solution:** Update your `project.yaml` or `workflow.yaml` with your actual multi-sig address:
```yaml
production-settings:
  user-workflow:
    workflow-owner-address: "0x123..." # Your actual multi-sig address
```
### Error: "failed to read keys: invalid length, need 256 bits"
This error occurs when `CRE_ETH_PRIVATE_KEY` is missing or empty in your `.env` file.
**Solution:** Ensure your `.env` file contains a valid private key:
```bash
CRE_ETH_PRIVATE_KEY=0x1234567890abcdef...
```
Remember, this key is only used for blockchain client initialization, not for signing the multi-sig transaction.
## Learn more
- [Deploying Workflows](/cre/guides/operations/deploying-workflows) — Deploy workflows to the registry
- [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows) — Control workflow execution state
- [Updating Deployed Workflows](/cre/guides/operations/updating-deployed-workflows) — Update workflow code and configuration
- [Deleting Workflows](/cre/guides/operations/deleting-workflows) — Remove workflows from the registry
---
# Monitoring & Debugging Workflows
Source: https://docs.chain.link/cre/guides/operations/monitoring-workflows
Last Updated: 2025-11-04
After deploying a workflow, you can monitor its execution history, performance metrics, and logs through the CRE web interface. This guide walks you through the monitoring dashboard and debugging tools available for your deployed workflows.
## Prerequisites
- **Deployed workflow**: You must have at least one workflow deployed to your organization. See [Deploying Workflows](/cre/guides/operations/deploying-workflows) for instructions.
## Accessing the workflows dashboard
1. **Log in to the CRE UI** at cre.chain.link
2. **Navigate to the Workflows section** by clicking **Workflows** in the left sidebar, or visit cre.chain.link/workflows directly
The Workflows dashboard displays three main sections:
- **Recent executions**: Shows the most recent workflow runs across all your workflows
- **Activity chart**: Visualizes successful and unsuccessful executions over the selected time period
- **All workflows table**: Lists all deployed workflows with their status and execution history
## Understanding the workflows dashboard
### Recent executions
The top section displays the most recent workflow executions across your organization:
- **Execution ID**: A unique identifier for each workflow run (shortened for display)
- **Workflow name**: The name of the workflow that executed
- **Status**: Success or Failure indicator
- **Timestamp**: When the execution occurred
Click on any execution ID to view detailed logs and events for that specific run.
### Activity chart
The performance chart shows execution trends over the selected time period:
- **Green bars**: Successful executions
- **Red bars**: Unsuccessful executions
- **Height**: Total number of executions for that day
Use the dropdown menu to adjust the time range.
### All workflows table
The main table displays all workflows in your organization:
| Column | Description |
| -------------------------------- | --------------------------------------------------------------------------------------------------- |
| **Name** | The workflow name as defined in your `workflow.yaml` |
| **Status**                       | Current workflow state, such as `Pending` (active and ready to execute) or `Paused` |
| **Last Execution** | Timestamp of the most recent execution, or `N/A` if not yet triggered |
| **Execution Results (last 24h)** | Visual bar showing successful (green) vs. failed (red) executions in the past 24 hours, with counts |
Click on any workflow row to view its detailed performance and execution history.
## Viewing workflow details
When you click on a workflow, you'll see a dedicated page with comprehensive monitoring data.
### Performance section
The Performance chart shows execution trends for this specific workflow over the selected time period. This helps you identify patterns, track reliability, and spot anomalies.
### Overview section
The Overview panel displays key workflow metadata:
| Field | Description |
| ------------------------ | ----------------------------------------------------------------------------------------------- |
| **Status** | Current workflow state (e.g., `PENDING`, `PAUSED`) |
| **Workflow ID** | Unique identifier in the Workflow Registry (truncated, click to copy full ID) |
| **Workflow Owner** | The wallet address that deployed and owns this workflow (truncated, click to copy full address) |
| **Total Runs** | Cumulative number of executions since deployment |
| **Registered On** | Timestamp when the workflow was first deployed |
| **Total Workflow Spend** | Cumulative credits consumed by all executions |
### Execution history
The **Execution** tab shows a detailed table of all workflow runs:
| Column | Description |
| ------------------ | ----------------------------------------------------------------------------------- |
| **Execution ID** | Unique identifier for this specific run (click to view details) |
| **Workflow ID** | The workflow that executed (useful if viewing executions across multiple workflows) |
| **Status** | Execution result: Success, Failure, or In Progress |
| **Time Triggered** | When the workflow execution started |
| **Credits Used** | Cost of this execution in CRE credits |
Use the **ALL** dropdown filter to view all executions, only successful ones, or only failures.
### Deployments tab
The **Deployments** tab shows the history of workflow deployments and updates.
## Debugging individual executions
Click on any **Execution ID** to view detailed debugging information for that specific run.
### Events tab
The **Events** tab displays the events triggered by this execution in sequential order:
- **Event**: The type of event (e.g., `trigger`, `evm:ChainSelector`, `consensus`)
- **Status**: Whether the event succeeded, failed, or is in progress (`Success`, `Failure`, `In progress`)
- **Time Triggered**: When the event occurred
**What you'll see:**
- Trigger activation
- Capability execution steps (HTTP requests, EVM calls, etc.)
- Consensus operations
### Logs tab
The **Logs** tab shows only user-emitted logs from your workflow code:
- Each log entry displays a timestamp, log level (e.g., `INFO`), and your custom message
- This is where you'll see output from `runtime.log()` (TypeScript) or `logger.Info()` (Go)
- System messages and internal events are excluded for clarity
- Logs appear in chronological order, showing the complete execution flow of your workflow
**Example log output:**
```
time=2025-10-26T13:19:57.055Z level=INFO msg="Successfully fetched offchain value" result=4
time=2025-10-26T13:19:57.055Z level=INFO msg="Successfully read onchain value" result=22
time=2025-10-26T13:19:57.055Z level=INFO msg="Final calculated result" result=26
time=2025-10-26T13:19:57.055Z level=INFO msg="Updating calculator result" consumerAddress=0x00307d6d1f88...
time=2025-10-26T13:19:57.055Z level=INFO msg="Writing report to consumer contract" offchainValue=4 onchainValue=22
time=2025-10-26T13:19:57.055Z level=INFO msg="Waiting for write report response"
```
## Monitoring best practices
1. **Check the dashboard regularly**: Review the Activity chart to spot trends in failures or performance degradation
2. **Investigate failures immediately**: When you see red bars in the chart or failed executions, click through to view logs and identify the root cause
3. **Use descriptive log messages**: Include context in your log statements to make debugging easier:
```typescript
// TypeScript
runtime.log(`Successfully fetched offchain value: ${offchainValue}`)
runtime.log(`Final calculated result: ${finalResult}`)
```
```go
// Go
logger.Info("Successfully fetched offchain value", "result", offchainValue)
logger.Info("Final calculated result", "result", finalResult)
```
4. **Monitor execution frequency**: If a cron-triggered workflow shows fewer executions than expected, verify your cron schedule and workflow status
5. **Track credit usage**: Monitor the Total Workflow Spend to understand your workflow's cost over time
## Common debugging scenarios
### Workflow not executing
**Symptoms:** No recent executions in the dashboard
**Possible causes:**
- Workflow is paused (check the Status field)
- Trigger is not firing (e.g., invalid cron schedule, no matching onchain events)
- Workflow was recently deployed and hasn't been triggered yet
**Resolution:**
- Verify the workflow Status is `PENDING` (active)
- Check your trigger configuration in the code
- Use [Workflow Simulation](/cre/guides/operations/simulating-workflows) to test locally
***
### Execution failures
**Symptoms:** Red bars in Activity chart, failed executions in table
**Possible causes:**
- API request failures (HTTP errors, timeouts)
- Onchain reverts (contract calls failing)
- Invalid configuration or missing secrets
- Logic errors in workflow code
**Resolution:**
1. Click on the failed Execution ID
2. Check the **Events** tab to see which step failed
3. Review the **Logs** tab for error messages from your code
4. Fix the issue in your code and [update the workflow](/cre/guides/operations/updating-deployed-workflows)
## Related guides
- **[Deploying Workflows](/cre/guides/operations/deploying-workflows)** - Deploy your first workflow
- **[Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows)** - Control workflow execution
- **[Updating Deployed Workflows](/cre/guides/operations/updating-deployed-workflows)** - Fix issues and deploy updates
- **[Simulating Workflows](/cre/guides/operations/simulating-workflows)** - Test workflows locally before deploying
---
# Account
Source: https://docs.chain.link/cre/account
Last Updated: 2025-11-04
Your CRE account is required to use the CRE CLI. You must be logged in to run any CLI commands, including simulating workflows, deploying workflows, and managing deployed workflows. This section covers everything you need to know about creating and managing your account.
## What you'll need
To use CRE, you need:
1. **A CRE account** - Created through the web interface at cre.chain.link (to create a new organization) or via an invitation link from an organization Owner (to join an existing organization)
2. **Two-factor authentication** - Set up during account creation for security
3. **CLI authentication** - Connect your CLI to your account using `cre login`
## Account guides
- **[Creating Your Account](/cre/account/creating-account)** - Step-by-step guide to creating a new CRE account through the CRE UI
- **[Logging in with the CLI](/cre/account/cli-login)** - Authenticate your CLI with your CRE account to run commands
- **[Managing Authentication](/cre/account/managing-auth)** - Check your login status, handle session expiration, and log out
## Security features
Your CRE account includes several security features:
- **Two-factor authentication (2FA)** - Required during login for an additional layer of security
- **Recovery codes** - Provided during account setup to regain access if you lose your authenticator device
- **Session management** - CLI sessions automatically expire after a period of inactivity
## Related guides
Once you have your account set up and authenticated:
- **[Understanding Organizations](/cre/organization/understanding-organizations)** - Learn about CRE organizations, roles, and permissions
- **[Linking Wallet Keys](/cre/organization/linking-keys)** - Connect your wallet to deploy workflows
---
# Creating Your Account
Source: https://docs.chain.link/cre/account/creating-account
Last Updated: 2025-11-04
Before you can use CRE, you need to create an account. An account is required to log in with the CRE CLI and run any CLI commands, including [simulating](/cre/guides/operations/simulating-workflows) workflows.
There are two ways to create an account:
1. **Create a new organization**: Sign up directly on the CRE UI. You'll become the *Owner* of a new organization.
2. **Join an existing organization**: Accept an invitation from an existing organization Owner. You'll become a *Member* of that organization automatically after account creation.
This guide walks you through the account creation process for both scenarios.
## Prerequisites
- A valid email address
- Access to your email inbox to receive verification codes (and invitation email, if joining an existing organization)
## Step 1: Navigate to the CRE UI
There are two ways to begin the account creation process:
### Option A: Create a new organization
In this option, you'll create a new organization and become the *Owner* of that organization.
Go to cre.chain.link and click the **"Create an account"** button.
### Option B: Join an existing organization
In this option, you'll join an existing organization and become a *Member* of that organization.
If you've received an invitation email from an organization Owner, click the **"Accept Invitation"** button in the email. This will redirect you to the account creation page.
After choosing either option, continue with the following steps to complete your account creation.
## Step 2: Enter your information
Fill in the required information:
1. **Email address**: Enter a valid email address (if not already pre-filled)
2. **Country**: Select your country from the dropdown
3. **Terms and policies**: Review and accept the Terms of Service and Privacy Policy
Click **"Continue"** to proceed.
## Step 3: Verify your email
Check your email inbox for a message from Chainlink containing a 6-digit verification code. Enter this code in the verification screen and click **"Continue"**.
## Step 4: Set your password
Create a secure password for your account. Your password must meet the security requirements displayed on the screen.
## Step 5: Set up two-factor authentication (2FA)
To secure your account, you'll need to set up two-factor authentication. You'll be presented with two authentication method options:
1. **Fingerprint or Face Recognition** - Use biometric authentication on your device
2. **Google Authenticator or similar** - Use an authenticator app
### Using an authenticator app
If you choose the authenticator app option:
1. Click **"Google Authenticator or similar"**
2. Open your preferred authenticator app (such as Google Authenticator, Authy, or 1Password)
3. Scan the QR code displayed on the screen
4. Enter the 6-digit one-time code generated by your authenticator app
5. Click **"Continue"**
## Step 6: Save your recovery code
Your recovery code is essential for regaining access to your account if you lose access to your authenticator device.
1. Copy the recovery code displayed on the screen
2. Store it securely in a password manager or offline location
3. Check the box "I have safely recorded this code"
4. Click **"Continue"** to complete account creation
After completing these steps, you'll be redirected to your CRE dashboard.
## What's next?
Once your account is created:
1. **[Log in to the CRE CLI](/cre/account/cli-login)** - Authenticate your CLI session
2. **If you created a new organization (Owner)**: [Invite team members](/cre/organization/inviting-members) to collaborate on workflows
---
# Logging in with the CLI
Source: https://docs.chain.link/cre/account/cli-login
Last Updated: 2025-11-04
To deploy and manage workflows with the CRE CLI, you need to authenticate your CLI session with your CRE account. This guide walks you through the login process.
## Prerequisites
- [CRE CLI installed](/cre/getting-started/cli-installation) on your machine
- [CRE account created](/cre/account/creating-account)
- Access to your authenticator app for two-factor authentication
## Login process
### Step 1: Initiate login from the terminal
Open your terminal and run the login command:
```bash
cre login
```
This command will automatically open your default web browser to begin the authentication process.
### Step 2: Enter your email address
In the browser window that opens, enter the email address associated with your CRE account and click **"Continue"**.
### Step 3: Enter your password
Enter your account password and click **"Continue"**.
### Step 4: Complete two-factor authentication
You'll be prompted to complete two-factor authentication based on the method you configured during account creation:
- **If you set up an authenticator app**: Open your authenticator app and enter the 6-digit one-time code it generates for your CRE account.
- **If you set up biometric authentication**: Use your fingerprint or face recognition as prompted by your device.
Example of entering a one-time code from an authenticator app:
### Step 5: Confirm successful login
Once authenticated, you'll see a confirmation message in your browser:
You can now close the browser window and return to your terminal. Your terminal will display:
```bash
Login completed successfully
```
Your CLI session is authenticated and ready to use.
---
# Managing Authentication
Source: https://docs.chain.link/cre/account/managing-auth
Last Updated: 2025-11-04
This guide covers how to manage your CLI authentication session, including logging in, checking your status, handling session expiration, and logging out.
## Logging in
To authenticate your CLI with your CRE account, use the `cre login` command. This opens a browser window where you'll enter your credentials and complete two-factor authentication.
For detailed login instructions, see the [Logging in with the CLI](/cre/account/cli-login) guide.
## Session expiration
Your CLI session remains authenticated until you explicitly log out or the session expires, at which point you'll need to log in again.
If you attempt to run a command with an expired session, you'll see an error:
```bash
Error: failed to attach credentials: failed to load credentials: you are not logged in, try running cre login
```
To resolve this, simply run `cre login` again to re-authenticate.
## Checking authentication status
To verify that you're logged in and view your account details, use the `cre whoami` command:
```bash
cre whoami
```
This command displays your account information:
```bash
Account details retrieved:
Email: email@domain.com
Organization ID: org_mEMRknbVURM9DWsB
```
If you're not logged in, you'll receive an error message prompting you to run `cre login`.
## Logging out
To explicitly end your CLI session and remove your stored credentials, use the `cre logout` command:
```bash
cre logout
```
After logging out, you'll need to run `cre login` again to authenticate future CLI commands.
---
# Organization
Source: https://docs.chain.link/cre/organization
Last Updated: 2025-11-04
CRE organizations enable teams to collaborate on workflow development and deployment. An organization provides a shared workspace where multiple members can deploy workflows, link wallet addresses, and monitor execution activity together.
## Organization guides
- **[Understanding Organizations](/cre/organization/understanding-organizations)** - Learn how organizations work, including roles, permissions, and shared resources
- **[Linking Wallet Keys](/cre/organization/linking-keys)** - Connect your wallet address to deploy and manage workflows
- **[Inviting Team Members](/cre/organization/inviting-members)** - Add colleagues to your organization (Owner only)
---
# Understanding Organizations
Source: https://docs.chain.link/cre/organization/understanding-organizations
Last Updated: 2025-11-04
## What is an organization?
A CRE organization is a collaborative workspace that allows multiple team members to deploy, manage, and monitor workflows together. When you create a CRE account, you either start a new organization (becoming the *Owner*) or join an existing one (becoming a *Member*).
An organization serves as a container for:
- **Multiple team members** with different roles (Owner and Members)
- **Linked wallet addresses** from all team members
- **Deployed workflows** visible to all organization members
- **Shared monitoring** of workflow executions and activity
Organizations enable teams to collaborate on CRE workflows while maintaining individual control over their own wallet addresses and secrets.
## Organization structure
### Single Owner model
Every organization has exactly **one Owner**—the person who first created the organization account. The Owner role:
- Cannot be transferred or changed
- Has full administrative control
- Can invite new Members to join
- Can link wallet addresses and deploy workflows
### Multiple Members
Members are users invited by the Owner to join the organization. Each member:
- Joins automatically after accepting the invitation and creating their account
- Can link their own wallet addresses to the organization
- Can deploy and manage their own workflows
- Views all organization workflows in the shared dashboard
### Shared visibility
All organization members can:
1. **View linked wallet keys** - Use `cre account list-key` to see all addresses linked to the organization by any member
2. **Monitor all workflows** - Access cre.chain.link/workflows to view:
- All deployed workflows across the organization
- Recent execution history
- Workflow status (Active, Paused, Pending)
- Execution success/failure metrics
- Activity graphs and trends
## Organization roles
CRE uses a simple role-based access control system with two roles:
| Role       | Description                                           | Permissions | How to obtain |
| ---------- | ----------------------------------------------------- | ----------- | ------------- |
| **Owner**  | The person who first created the organization account | Full access to all organization resources and workflows<br />Invite new members to the organization<br />Manage organization settings<br />Deploy, activate, pause, update, and delete workflows | Automatically assigned when you create a new organization by signing up directly |
| **Member** | Users invited to join an existing organization        | View all organization workflows<br />Link wallet addresses to the organization<br />Deploy and manage workflows under their linked addresses<br />Access workflow execution data | Invited by the Owner through the [invitation process](/cre/organization/inviting-members) |
**Key points:**
- There is only one Owner per organization
- Owner permissions cannot be transferred
- Only Owners can invite new Members
- Members automatically join after accepting the invitation and creating an account
## Learn more
- **[Inviting Team Members](/cre/organization/inviting-members)** - How Owners can add Members to the organization
- **[Linking Wallet Keys](/cre/organization/linking-keys)** - How to link your wallet address to deploy workflows
- **[Creating Your Account](/cre/account/creating-account)** - How to create an account and join or create an organization
- **[Deploying Workflows](/cre/guides/operations/deploying-workflows)** - Deploy your first workflow after linking a key
---
# Linking Wallet Keys
Source: https://docs.chain.link/cre/organization/linking-keys
Last Updated: 2025-11-04
Before you can deploy workflows, you must link a public key address to your CRE organization. This process registers your wallet address onchain in the Workflow Registry contract—the smart contract on Ethereum Mainnet that stores and manages all CRE workflows—associating it with your organization and allowing you to deploy and manage workflows.
## What is key linking?
Key linking is the process of connecting a blockchain wallet address to your CRE organization. Once linked, this address becomes a **workflow owner address** that can deploy, update, and delete workflows in the Workflow Registry.
**Key benefits:**
- Multiple team members can link their own addresses to the same organization
- Each linked address can independently deploy and manage workflows
- Addresses are labeled for easy identification (e.g., "Production Wallet", "Dev Wallet")
- All linked addresses are visible to organization members via `cre account list-key`
**Important constraint:**
- **One organization per address**: Each wallet address can only be linked to one CRE organization at a time. If you need to use the same address with a different organization, you must first [unlink it](#unlinking-a-key) from the current organization.
However, an organization can have multiple wallet addresses linked to it, allowing team members to use their own addresses or enabling separation between development, staging, and production environments.
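The linking rules above form a simple one-to-many relationship. The toy sketch below is illustrative only (it is not how the Workflow Registry contract is implemented): each address maps to at most one organization, while an organization can hold many addresses.

```typescript
// Toy model of the linking constraint (illustrative, not the real registry):
// an address links to at most one organization; an organization can hold many.
class LinkRegistry {
  private orgOf = new Map<string, string>(); // address -> organization ID

  link(address: string, orgId: string): void {
    const current = this.orgOf.get(address);
    if (current !== undefined && current !== orgId) {
      throw new Error(`address already linked to ${current}; unlink it first`);
    }
    this.orgOf.set(address, orgId);
  }

  unlink(address: string): void {
    this.orgOf.delete(address);
  }

  addressesOf(orgId: string): string[] {
    return Array.from(this.orgOf)
      .filter(([, org]) => org === orgId)
      .map(([address]) => address);
  }
}
```

In this model, linking a second address to the same organization succeeds, but re-linking an already-linked address to a different organization fails until it is unlinked, mirroring the constraint described above.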
## Prerequisites
Before linking a key, ensure you have:
- **CRE CLI installed and authenticated**: See [CLI Installation](/cre/getting-started/cli-installation) and [Logging in with the CLI](/cre/account/cli-login)
- **A CRE project directory**: You must run the command from a project directory that contains a `project.yaml` file
- **Private key in `.env`**: Set `CRE_ETH_PRIVATE_KEY=` (without `0x` prefix) in your `.env` file
- **Funded wallet**: Your wallet must have ETH on Ethereum Mainnet to pay for gas fees (the Workflow Registry contract is deployed on Ethereum Mainnet)
- **Unlinked address**: The wallet address must not already be linked to another CRE organization. Each address can only be associated with one organization at a time.
## Linking your first key
The easiest way to link a key is to let the deployment process handle it automatically. When you first try to [deploy a workflow](/cre/guides/operations/deploying-workflows), the CLI will detect that your address isn't linked and prompt you to link it.
### Automatic linking during deployment
1. Navigate to your project directory (where your `.env` file is located)
2. Attempt to deploy a workflow:
```bash
cre workflow deploy my-workflow --target production-settings
```
3. The CLI will detect that your address isn't linked and prompt you:
```bash
Verifying ownership...
Workflow owner link status: owner=, linked=false
Owner not linked. Attempting auto-link: owner=
Linking web3 key to your CRE organization
Target : production-settings
✔ Using Address :
✔ Provide a label for your owner address: █
```
4. Enter a descriptive label for your address
5. Review the transaction details and confirm
The CLI will submit the transaction and continue with the deployment once the key is linked.
### Manual linking
You can also link a key manually before attempting to deploy:
```bash
cre account link-key --target production-settings
```
**Interactive flow:**
1. The CLI derives your public address from the private key in `.env`
2. You're prompted to provide a label
3. The CLI checks if the address is already linked
4. Transaction details are displayed (chain, contract address, estimated gas cost)
5. You confirm to execute the transaction
6. The transaction is submitted and you receive a block explorer link
**Example output:**
```bash
Linking web3 key to your CRE organization
Target : production-settings
✔ Using Address :
Provide a label for your owner address:
Checking existing registrations...
✓ No existing link found for this address
Starting linking: owner=, label=
Contract address validation passed
Transaction details:
Chain Name: ethereum-mainnet
To: 0x4Ac54353FA4Fa961AfcC5ec4B118596d3305E7e5 # Workflow Registry contract address
Function: LinkOwner
...
Estimated Cost:
Gas Price: 0.12450327 gwei
Total Cost: 0.00001606 ETH
? Do you want to execute this transaction?:
▸ Yes
No
```
After confirming, you'll see:
```bash
Transaction confirmed
View on explorer: https://etherscan.io/tx/
[OK] web3 address linked to your CRE organization successfully
→ You can now deploy workflows using this address
```
## Viewing linked keys
To see all addresses linked to your organization:
```bash
cre account list-key
```
**Example output:**
```bash
Workflow owners retrieved successfully:
Linked Owners:
1. JohnProd
Owner Address:
Status: VERIFICATION_STATUS_SUCCESSFULL
Verified At: 2025-10-21T17:22:24.394249Z
Chain Selector: 5009297550715157269 # Chain selector for Ethereum Mainnet
Contract Address: 0x4Ac54353FA4Fa961AfcC5ec4B118596d3305E7e5 # Workflow Registry contract address
2. JaneProd
Owner Address:
Status: VERIFICATION_STATUS_SUCCESSFULL
Verified At: 2025-10-21T17:22:24.394249Z
Chain Selector: 5009297550715157269 # Chain selector for Ethereum Mainnet
Contract Address: 0x4Ac54353FA4Fa961AfcC5ec4B118596d3305E7e5 # Workflow Registry contract address
```
**Understanding the output:**
- **Label**: The friendly name you provided (e.g., "JohnProd", "JaneProd")
- **Owner Address**: The public address linked to your organization
- **Status**: `VERIFICATION_STATUS_SUCCESSFULL` (linked and verified)
- **Verified At**: Timestamp when the link was confirmed onchain
- **Chain Selector**: The chain identifier where the Workflow Registry contract is deployed
- **Contract Address**: The Workflow Registry contract address
## Linking multiple addresses
Your organization can have multiple wallet addresses linked to it simultaneously, but remember that each individual address can only be linked to one organization at a time.
This is useful for:
- **Separation of concerns**: Different addresses for development, staging, and production
- **Team collaboration**: Each team member uses their own address
- **Multi-sig wallets**: Link a multi-sig address alongside individual addresses
To link another address:
1. Update your `.env` file with the new private key
2. Run `cre account link-key --target ` again
3. Provide a unique label to distinguish this address
## Unlinking a key
If you need to remove a linked address from your organization, you can use the `cre account unlink-key` command. This is useful when:
- Rotating addresses for security reasons
- Removing addresses that are no longer in use
- Cleaning up test or development addresses
To unlink a key:
1. Ensure your `.env` file contains the private key of the address you want to unlink
2. Run the unlink command:
```bash
cre account unlink-key --target production-settings
```
3. Confirm the operation when prompted
The CLI will submit an onchain transaction to remove the address from the Workflow Registry. After the transaction is confirmed, the address and all its associated workflows will be deleted.
## Non-interactive mode
For automation or CI/CD pipelines, use the `--yes` flag to skip confirmation prompts:
```bash
cre account link-key --owner-label "CI Pipeline Wallet" --yes --target production-settings
```
## Using multi-sig wallets
If you're using a multi-sig wallet, you'll need to use the `--unsigned` flag to generate raw transaction data that you can then submit through your multi-sig interface (such as Safe).
### Prerequisites for multi-sig
1. Configure your multi-sig address in `project.yaml` under the `account` section:
```yaml
production-settings:
account:
workflow-owner-address: ""
# ... other settings
```
2. Ensure your `.env` file contains the private key of any signer from the multi-sig wallet (used only for signature generation, not for sending transactions)
### Linking a multi-sig address
Run the `link-key` command with the `--unsigned` flag:
```bash
cre account link-key --owner-label "SafeWallet" --target production-settings --unsigned
```
**Example output:**
```bash
Linking web3 key to your CRE organization
Target : production-settings
✔ Using Address :
Checking existing registrations...
✓ No existing link found for this address
Starting linking: owner=, label=SafeWallet
Contract address validation passed
--unsigned flag detected: transaction not sent on-chain.
Generating call data for offline signing and submission in your preferred tool:
Ownership linking initialized successfully!
Next steps:
1. Submit the following transaction on the target chain:
Chain: ethereum-mainnet
Contract Address: 0x4Ac54353FA4Fa961AfcC5ec4B118596d3305E7e5
2. Use the following transaction data:
dc1019690000000000000000000000000000000000000000000000000000000068fd2f9465259a804e880ee30de0fcc2b81ee25d598ee1601e13ace2c2ec10202869706800000000000000000000000000000000000000000000000000000000000000600000000000000000000000000000000000000000000000000000000000000041bd0f40824a1fdce10ee1091703833fb3d4497b3f681f6edee6b159d217326185407ce16eb1c668c90786421b053d4d25401f422aa90d156c35659d7c3e2e13221b00000000000000000000000000000000000000000000000000000000000000
Linked successfully
```
### Submitting through your multi-sig interface
1. **Copy the transaction data** provided in the CLI output
2. **Open your multi-sig interface** (e.g., Safe app at [https://app.safe.global](https://app.safe.global))
3. **Create a new transaction** with:
- **To address**: 0x4Ac54353FA4Fa961AfcC5ec4B118596d3305E7e5 (Workflow Registry contract)
- **Value**: 0 (no ETH transfer)
- **Data**: Paste the transaction data from the CLI output (add 0x prefix if required by your multi-sig interface)
4. **Submit and collect signatures** from the required number of signers
5. **Execute the transaction** once you have enough signatures
**Note**: If your multi-sig interface requires the Workflow Registry contract ABI, you can copy it from Etherscan.
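The `0x`-prefix handling in step 3 can be scripted if you prepare transactions programmatically. The helper below is a hypothetical illustration (it is not part of the CRE CLI): it checks that a string is valid hex calldata and normalizes it to the `0x`-prefixed form some multi-sig interfaces expect.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// normalizeCalldata validates that the CLI's transaction data is hex
// and returns it with a 0x prefix. Hypothetical helper for this guide.
func normalizeCalldata(data string) (string, error) {
	trimmed := strings.TrimPrefix(data, "0x")
	if _, err := hex.DecodeString(trimmed); err != nil {
		return "", fmt.Errorf("not valid hex calldata: %w", err)
	}
	return "0x" + trimmed, nil
}

func main() {
	// "dc101969" is the function selector shown in the CLI output above.
	out, err := normalizeCalldata("dc101969")
	fmt.Println(out, err) // prints "0xdc101969 <nil>"
}
```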
### Verifying the multi-sig link
After the multi-sig transaction is executed onchain, you can verify the link status:
```bash
cre account list-key
```
Initially, you'll see the address with a `VERIFICATION_STATUS_PENDING` status:
```bash
Workflow owners retrieved successfully:
Linked Owners:
1. SafeWallet
Owner Address:
Status: VERIFICATION_STATUS_PENDING
Verified At:
Chain Selector: 5009297550715157269 # Chain selector for Ethereum Mainnet
Contract Address: 0x4Ac54353FA4Fa961AfcC5ec4B118596d3305E7e5 # Workflow Registry contract address
```
Once the transaction is confirmed onchain, the status will change to `VERIFICATION_STATUS_SUCCESSFULL` and the `Verified At` timestamp will be populated.
## Learn more
- **[Understanding Organizations](/cre/organization/understanding-organizations)** - Learn about organization structure and shared resources
- **[Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets)** - Advanced guide for multi-sig wallet workflows
- **[Account Management CLI Reference](/cre/reference/cli/account)** - Complete reference for `cre account` commands
- **[Deploying Workflows](/cre/guides/operations/deploying-workflows)** - Deploy your first workflow after linking a key
---
# Inviting Team Members
Source: https://docs.chain.link/cre/organization/inviting-members
Last Updated: 2025-11-04
To collaborate with your team on CRE workflows, you can invite members to your organization. This guide walks you through the invitation process.
## Prerequisites
- You must be logged in to cre.chain.link
- You must be the Owner of your organization
- The email addresses you're inviting must belong to whitelisted domains
## Step 1: Navigate to Organization settings
In the left sidebar, click on **"Organization"**.
## Step 2: Go to the Members tab
Click on the **"Members"** tab to view your organization's members.
## Step 3: Start the invitation process
Click the **"Invite"** button in the top right corner.
## Step 4: Add team member details
Enter the **name** and **email address** of the person you want to invite.
### Inviting multiple members at once
To invite more than one person:
1. Click the **"Add person"** button (visible in the screenshot above)
2. Enter the name and email for each additional person
3. Repeat as needed
## Step 5: Send invitations
Once you've added all the people you want to invite, click the **"Submit"** button in the bottom right corner.
The invited members will receive an email invitation to join your organization.
## What happens next?
After you send invitations:
1. **Email notification**: Each invited person receives an email with instructions to join your organization
2. **Account creation**: If they don't have a CRE account yet, they'll be invited to create one
3. **Automatic membership**: Once they complete account creation, they'll automatically become members of your organization—no additional acceptance step required
4. **Access**: They'll immediately have access to your organization's workflows and resources
## Related guides
- [Understanding Organizations](/cre/organization/understanding-organizations) - Learn about organization roles and permissions
- [Linking Wallet Keys](/cre/organization/linking-keys) - Help new members connect their wallets
---
# Capabilities Overview
Source: https://docs.chain.link/cre/capabilities
Last Updated: 2025-11-04
At the core of the Chainlink Runtime Environment (CRE) is the concept of **Capabilities**. A capability is a modular, decentralized service that performs a specific task. Think of them as the individual "bricks" that you can use to build custom workflows.
Each capability is powered by its own independent Decentralized Oracle Network (DON), which is optimized for that specific task, ensuring security and reliable performance.
## Invoking Capabilities via the SDK
As a developer, you do not interact with these capability DONs directly. Instead, you invoke them through the developer-friendly interfaces provided by the **CRE SDKs** ([Go](/cre/reference/sdk/core-go) and [TypeScript](/cre/reference/sdk/core-ts)), such as the [`evm.Client`](/cre/reference/sdk/evm-client) or the [`http.Client`](/cre/reference/sdk/http-client). The SDK handles the low-level complexity of communicating with the correct DON and processing the consensus-verified result, allowing you to focus on your business logic.
## Available Capabilities
This section provides a high-level, conceptual overview of the capabilities currently available in CRE.
- **[Triggers](/cre/capabilities/triggers)**: Event sources that start your workflow executions.
- **[HTTP](/cre/capabilities/http)**: Fetch and post data from external APIs with decentralized consensus.
- **[EVM Read & Write](/cre/capabilities/evm-read-write)**: Interact with smart contracts on EVM-compatible blockchains with decentralized consensus.
All execution capabilities (HTTP, EVM) automatically use [built-in consensus](/cre/concepts/consensus-computing) to validate results across multiple nodes, ensuring security and reliability.
---
# The Trigger Capability
Source: https://docs.chain.link/cre/capabilities/triggers
Last Updated: 2025-11-04
**Triggers** are a special type of capability that initiate the execution of your workflow. They are event-driven services that constantly watch for a specific condition to be met. When the condition occurs, the trigger fires and instructs CRE to run the callback function you have registered for that event. Learn more about the [trigger-and-callback model](/cre/#the-trigger-and-callback-model).
## Trigger types
CRE provides several types of triggers to start your workflows:
- **Time-based:** The `Cron` trigger fires at a specific time or on a recurring schedule (e.g., "every 5 minutes").
- **Request-based:** The `HTTP` trigger fires when an external system makes an HTTP request to your workflow's endpoint. HTTP triggers require authorization keys when deployed to ensure only authorized addresses can trigger your workflow.
- **Onchain Events:** The `EVM Log` trigger fires when a specific event is emitted by a smart contract on a supported blockchain.
## Learn more
- **[Using Triggers Guides](/cre/guides/workflow/using-triggers/overview)**: Learn how to use the SDK to register handlers for the different trigger types.
- **[Triggers SDK Reference](/cre/reference/sdk/triggers/overview)**: See the detailed API reference for trigger configurations and payloads.
---
# The HTTP Capability
Source: https://docs.chain.link/cre/capabilities/http
Last Updated: 2025-11-04
The **HTTP** capability is a decentralized service that allows your workflow to securely interact with any external, offchain API. It can be used to both **fetch data from** and **send data to** other systems.
## Why use a Capability for HTTP Requests?
When your workflow needs to interact with an external API, the HTTP capability provides decentralized execution across multiple independent nodes.
When you use the SDK's `http.Client` to make a request (like a `GET` or a `POST`), you are invoking this capability. CRE instructs each node in a dedicated DON to make the same API request. The nodes' individual responses are then **validated through a consensus protocol**, which ensures they agree on the result before returning it to your workflow.
This provides cryptographically verified, tamper-proof execution for your offchain data operations.
## Learn more
- **[API Interactions Guide](/cre/guides/workflow/using-http-client)**: Learn how to use the SDK to invoke the HTTP capability.
- **[HTTP Client SDK Reference](/cre/reference/sdk/http-client)**: See the detailed API reference for the `http.Client`.
---
# The EVM Read & Write Capabilities
Source: https://docs.chain.link/cre/capabilities/evm-read-write
Last Updated: 2025-11-04
The **EVM Read & Write** capabilities provide a secure and reliable service for your workflow to interact with smart contracts on any EVM-compatible blockchain.
- **EVM Read:** Allows your workflow to call `view` and `pure` functions on a smart contract to read its state.
- **EVM Write:** Allows your workflow to call state-changing functions on a smart contract to write data to the blockchain.
## How it works
When you use the SDK's [`evm.Client`](/cre/reference/sdk/evm-client) to interact with a contract, you are invoking these underlying capabilities. This provides a simpler and more reliable developer experience compared to crafting raw RPC calls.
The SDKs simplify contract interactions with type-safe ABI handling (Go uses generated bindings, TypeScript uses viem), while the underlying CRE infrastructure manages consensus and transaction submission.
### Key features
- **Type-safe interactions**: SDKs provide type safety for contract calls (Go bindings offer compile-time safety, TypeScript uses viem for runtime type checking)
- **Decentralized execution**: Read and write operations are executed across multiple nodes in a DON with cryptographic verification
- **Decentralized consensus**: Multiple DON nodes independently verify read results and validate write operations before onchain submission
- **Chain selector support**: Target specific blockchains using Chainlink's chain selector system
- **Block number flexibility**: Read from finalized, latest, or a specific block number
- **Error handling**: Comprehensive error reporting for failed calls and transactions
### Understanding EVM Write operations
For **EVM Write** operations, your workflow doesn't write directly to your contract. Instead, the data follows a secure multi-step flow:
1. **Your workflow** generates a cryptographically signed report with your ABI-encoded data
2. **The EVM Write capability** submits this report to a Chainlink-managed `KeystoneForwarder` contract
3. **The forwarder** validates the report's cryptographic signatures
4. **The forwarder** calls your consumer contract's `onReport(bytes metadata, bytes report)` function to deliver the data
This architecture ensures decentralized consensus, cryptographic verification, and accountability. Your consumer contract must implement the `IReceiver` interface to receive data from the forwarder.
Learn more in the [Onchain Write guide](/cre/guides/workflow/using-evm-client/onchain-write/overview-ts).
## Learn more
- **[EVM Chain Interactions Guide](/cre/guides/workflow/using-evm-client/overview)**: Learn how to read from and write to smart contracts
- **[EVM Client SDK Reference](/cre/reference/sdk/evm-client)**: Detailed API reference for the `evm.Client`
---
# Consensus Computing in CRE
Source: https://docs.chain.link/cre/concepts/consensus-computing
Last Updated: 2025-11-04
**Consensus computing** is the foundational computing paradigm that makes CRE secure and reliable. It ensures that every operation your workflow performs—whether fetching data from an API or reading from a blockchain—is verified by multiple independent nodes before producing a final result.
## What is consensus computing?
Consensus computing is a model in which a decentralized network of nodes must reach consensus as part of executing code and storing information. Unlike traditional computing where you trust a single server or service, consensus computing provides unique guarantees:
- **Tamper-resistance**: No single node can manipulate results
- **High availability**: The network continues operating even if individual nodes fail
- **Trust minimization**: You don't need to trust any single entity
- **Verifiability**: All results are cryptographically verified
Blockchains pioneered consensus computing for maintaining asset ledgers and executing smart contracts. CRE extends this paradigm to **any offchain operation**—API calls, computations, and more.
## How CRE uses consensus
In CRE, **every execution capability automatically includes decentralized consensus**. Here's how it works:
1. **Independent execution**: When your workflow invokes a capability (like `http.Client` or `evm.Client`), each node in the dedicated capability DON performs the operation independently
2. **Result collection**: Each node produces its own result based on what it observed
3. **Consensus protocol**: The DON applies a Byzantine Fault Tolerant (BFT) consensus protocol to validate and aggregate the individual results
4. **Verified output**: A single, consensus-verified result is returned to your workflow
This process happens automatically for every capability call. You don't need to write any special code—consensus is built into the CRE runtime environment.
## Why this matters for your workflows
### Protection against node failures and manipulation
CRE's consensus model protects against individual node failures and malicious behavior. When multiple independent nodes execute the same operation:
- **Node-level resilience**: If some nodes fail or go offline, the network continues operating
- **Byzantine Fault Tolerance**: Even if some nodes are compromised and return incorrect results, the honest majority ensures the correct outcome
- **Execution consistency**: All nodes must execute your workflow logic identically, preventing manipulation by individual operators
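To see why an honest majority prevails, consider a toy quorum aggregator. This is a conceptual sketch, not Chainlink's actual OCR/BFT protocol: with `n = 3f+1` nodes, a value is accepted only once `2f+1` matching reports arrive, so up to `f` faulty or malicious nodes cannot outvote the honest ones.

```go
package main

import "fmt"

// aggregate returns the value reported by a quorum of nodes, or an
// error if no value reaches the 2f+1 threshold. Conceptual sketch only.
func aggregate(reports []string, f int) (string, error) {
	counts := map[string]int{}
	for _, r := range reports {
		counts[r]++
		if counts[r] >= 2*f+1 {
			return r, nil
		}
	}
	return "", fmt.Errorf("no value reached quorum of %d", 2*f+1)
}

func main() {
	// n = 4 nodes tolerates f = 1 fault: one node reports a bad value,
	// but the three honest reports still form a quorum.
	reports := []string{"42", "42", "999", "42"}
	value, err := aggregate(reports, 1)
	fmt.Println(value, err) // prints "42 <nil>"
}
```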
### Validated and verified results
Every result your workflow receives has been cryptographically verified and validated across multiple nodes. This provides strong guarantees that:
- The operation was executed correctly
- The result matches what independent observers agreed upon
- No single node operator can manipulate your workflow's execution or outputs
### Unified security model
With CRE, your **entire institutional-grade smart contract**—not just the onchain parts—benefits from consensus computing. This means:
- **API responses** are validated across multiple nodes before your workflow uses them
- **Blockchain reads** are verified by multiple nodes
- **Blockchain writes** are validated by multiple nodes before being submitted onchain
- **Computation results** within your workflow are executed consistently across all nodes
Your workflow inherits the same security and reliability guarantees as blockchain transactions, but for any offchain operation.
## Consensus in practice
### HTTP capability
When your workflow makes an API request using `http.Client`, the HTTP capability DON executes your request across multiple nodes. Their responses are validated through a consensus protocol before returning a result to your workflow.
This ensures:
- Execution consistency across all nodes
- Protection against individual node compromise or failure
- Detection of inconsistent API responses (e.g., due to load balancing or timing)
See the [HTTP Capability](/cre/capabilities/http) page for details.
### EVM Read & Write capability
When your workflow reads from or writes to a blockchain using `evm.Client`, the EVM capability DON performs the operation across multiple nodes:
- **For reads**: Multiple nodes independently query the blockchain, and consensus validates their responses match
- **For writes**: Multiple nodes agree on the transaction data before submitting it onchain
This ensures:
- Execution consistency across all nodes
- Protection against individual node compromise or failure
- Validated blockchain data before use in your workflow
See the [EVM Read & Write Capability](/cre/capabilities/evm-read-write) page for details.
## Consensus in simulation vs. deployed workflows
When you simulate a workflow, it runs as a single instance on your local machine, so capability calls execute once and no multi-node consensus takes place—although the calls themselves are real calls to live APIs and public EVM blockchains. When a workflow is deployed to a DON, every capability call goes through the full consensus process described above, so production results carry cryptographic verification guarantees that simulation does not exercise.
## Learn more
Learn more about how to use CRE capabilities with built-in consensus:
- **[Capabilities Overview](/cre/capabilities)**: Explore all available capabilities
- **[API Interactions](/cre/guides/workflow/using-http-client)**: Learn how to use the HTTP capability with built-in consensus
- **[EVM Chain Interactions](/cre/guides/workflow/using-evm-client/overview)**: Learn how to use the EVM capability with built-in consensus
---
# Time in CRE
Source: https://docs.chain.link/cre/concepts/time-in-cre
Last Updated: 2025-11-04
## The problem: Why time needs consensus
Workflows often rely on time for decisions (market-hours checks), scheduling (retries/backoffs), and observability (log timestamps). In a decentralized network, nodes do not share an identical clock—clock drift, resource contention, and OS scheduling can skew each node's local time. If each node consults its own clock:
- Different nodes may take **different branches** of your logic (e.g., one thinks the market is open, another does not).
- Logs across nodes become **hard to correlate**.
- Data fetched using time (e.g., "fetch price at timestamp N") can be **inconsistent**.
**DON Time** removes these divergences by making time **deterministic in the DON**.
## The solution: DON time
**DON Time** is a timestamp computed by an OCR (Off-Chain Reporting) plugin and agreed upon by the nodes participating in CRE. You access it through the SDK's `runtime.Now()` call, not via the OS/system clock. The `runtime.Now()` function returns a standard Go `time.Time` object.
**Key properties:**
- **Deterministic across nodes**: nodes see the same timestamp.
- **Sequenced per workflow**: time responses are associated with a **time-call sequence number** inside each workflow execution (1st call, 2nd call, …). Node execution timing might be slightly off, but a given call will resolve to the **same DON timestamp**.
- **Low latency**: the plugin runs continuously with **delta round = 0**, and each node **transmits** results back to outstanding requests at the end of every round.
- **Tamper-resistant**: workflows don't expose host machine time, reducing timing-attack surface.
## How it works: A high-level view
1. Your workflow calls **`runtime.Now()`**.
2. **The Chainlink network takes this request**: The Workflow Engine's **TimeProvider** assigns that call a **sequence number** and enqueues it in the **DON Time Store**.
3. **All the nodes agree on a single time (the DON Time)**: The **OCR Time Plugin** on each node reaches consensus on a new DON timestamp (the median of observed times).
4. Each node **returns** the newest DON timestamp to every pending request and updates its **last observed DON time** cache.
5. The result is written back into the WebAssembly execution, and your workflow continues.
Because requests are sequenced, *Call 1* for a workflow instance will always return the same DON timestamp on every node. If Node A hits *Call 2* before Node B, A will block until the DON timestamp for *Call 2* is produced; when B reaches *Call 2*, it immediately reuses that value.
## Execution modes: DON mode vs. Node mode
### DON mode (default for workflows)
- Time is **consensus-based** and **deterministic**.
- Use for **any** logic where different outcomes across nodes would be a bug. Examples:
- Market-hours gates
- Time-windowed queries ("last 15 minutes")
- Retry/backoff logic that must align across nodes
- Timestamps used for cross-node correlation (logging, audit trails)
### Node mode (advanced / special cases)
- Workflow authors handle consensus themselves.
- `runtime.Now()` in Node Mode is a non-blocking call that returns the **last generated DON timestamp** from the local node's cache. This is the same mechanism used by standard Go `time.Now()` calls within the Wasm environment.
- Useful in situations where you already expect non-determinism (e.g., inherently variable HTTP responses).
## Best practices: Avoiding non-determinism in DON mode
When running in DON Mode, you get determinism **if and only if** you base time-dependent logic on DON Time.
**Avoid** these patterns:
- **Reading host/system time** (`time.Now()`, etc.). Always use `runtime.Now()` from the CRE SDK.
- **Mixing time sources** in the same control path.
- **Per-node "sleeps" based on local time** that gate deterministic decisions.
**Deterministic patterns:**
- ✅ Gate behavior with:
```go
now := runtime.Now()
if market.IsOpenAt(now) {
	// proceed
}
```
- ✅ Compute windows from DON Time:
```go
now := runtime.Now()
windowStart := now.Add(-15 * time.Minute)
fetchData(windowStart, now)
```
## FAQ
**Is DON Time "real UTC time"?**
It's the **median of node observations** per round. It closely tracks real time but prioritizes **consistency** over absolute accuracy.
**What is the resolution?**
New DON timestamps are produced continuously (multiple per second). Treat it as coarse-grained real time suitable for gating and logging, not sub-millisecond measurement.
---
# Random in CRE
Source: https://docs.chain.link/cre/concepts/random-in-cre
Last Updated: 2025-11-04
## The problem: Why randomness needs special handling
Workflows often need randomness for various purposes: generating nonces, selecting winners from a list, or creating unpredictable values. However, in a decentralized network, naive use of random number generators creates a critical problem:
**If each node generates different random values, they cannot reach consensus on the workflow's output.**
For example, if your workflow selects a lottery winner using each node's local random generator, different nodes would select different winners, making it impossible to agree on a single result to write onchain.
## The solution: Consensus-safe randomness
CRE provides randomness through the `runtime.Rand()` method, which returns a standard Go `*rand.Rand` object. This random generator is managed by the CRE platform to ensure all nodes generate the same sequence of random values, enabling consensus while still providing unpredictability across different workflow executions.
### Usage
```go
// Get the random generator from the runtime
rnd, err := runtime.Rand()
if err != nil {
	return err
}

// Use it with standard Go rand methods
randomInt := rnd.Intn(100)                               // Random int in [0, 100)
randomBigInt := new(big.Int).Rand(rnd, big.NewInt(1000)) // Random big.Int
```
## Common use cases
- Selecting a winner from a lottery or pool
- Generating nonces for transactions
- Creating random identifiers or values
- Any random selection that needs to be agreed upon by all nodes
## Working with big.Int random values
For Solidity `uint256` types, you often need random `*big.Int` values:
```go
rnd, err := runtime.Rand()
if err != nil {
	return err
}

// Generate a random number in the range [0, max)
max := new(big.Int)
max.SetString("1000000000000000000", 10) // 1 ETH in wei
randomAmount := new(big.Int).Rand(rnd, max)
// randomAmount is a random value between 0 and 1 ETH
```
## Complete example: Random lottery
Here's a complete example that demonstrates using DON mode randomness to select a lottery winner and generate a prize amount:
```go
//go:build wasip1

package main

import (
	"fmt"
	"log/slog"
	"math/big"

	"github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron"
	"github.com/smartcontractkit/cre-sdk-go/cre"
	"github.com/smartcontractkit/cre-sdk-go/cre/wasm"
)

type Config struct {
	Schedule string `json:"schedule"`
}

type MyResult struct {
	WinnerIndex  int
	Winner       string
	RandomBigInt string
}

func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
	return cre.Workflow[*Config]{
		cre.Handler(cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger),
	}, nil
}

func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()
	logger.Info("Running random lottery")

	// Define participants
	participants := []string{"Alice", "Bob", "Charlie", "Diana", "Eve"}
	logger.Info("Participants in lottery", "count", len(participants), "names", participants)

	// Get the DON mode random generator
	rnd, err := runtime.Rand()
	if err != nil {
		return nil, fmt.Errorf("failed to get random generator: %w", err)
	}

	// Select a random winner (index in range [0, 5))
	winnerIndex := rnd.Intn(len(participants))
	winner := participants[winnerIndex]
	logger.Info("Selected winner", "index", winnerIndex, "winner", winner)

	// Generate a random prize amount up to 1,000,000 wei
	maxPrize := big.NewInt(1000000)
	randomPrize := new(big.Int).Rand(rnd, maxPrize)
	logger.Info("Generated random prize", "amount", randomPrize.String())

	// Return the results
	result := &MyResult{
		WinnerIndex:  winnerIndex,
		Winner:       winner,
		RandomBigInt: randomPrize.String(),
	}
	logger.Info("Random lottery complete!", "result", result)

	return result, nil
}

func main() {
	wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```
**What this example demonstrates:**
1. **DON mode context**: The randomness is called directly in the trigger callback (DON mode), ensuring all nodes in the network would select the same winner and prize amount.
2. **Random selection**: Uses `rnd.Intn(len(participants))` to select a random index from the participant list. The `Intn(n)` method returns a value in the range `[0, n)`.
3. **Random big.Int for Solidity**: Generates a `*big.Int` value suitable for use with Solidity `uint256` types.
4. **Error handling**: Properly checks for errors when calling `runtime.Rand()`.
When you run this workflow multiple times, each execution will select different winners and prize amounts (because each execution gets a different seed), but within a single execution, all nodes in the DON would arrive at the same winner.
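The seed-based determinism described above can be illustrated with the standard library. Here `math/rand` stands in for the CRE runtime purely as an analogy: parties sharing one seed derive identical values, while a different seed—like a different workflow execution—can produce a different result.

```go
package main

import (
	"fmt"
	"math/rand"
)

// drawWinner selects an index deterministically from a shared seed.
// Illustrative only: in a real workflow you would use runtime.Rand().
func drawWinner(seed int64, n int) int {
	rnd := rand.New(rand.NewSource(seed))
	return rnd.Intn(n)
}

func main() {
	participants := []string{"Alice", "Bob", "Charlie", "Diana", "Eve"}
	// Two "nodes" with the same seed pick the same winner...
	a := drawWinner(7, len(participants))
	b := drawWinner(7, len(participants))
	// ...while a different seed (a different execution) may differ.
	c := drawWinner(8, len(participants))
	fmt.Println(participants[a], a == b, participants[c])
}
```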
## Best practices
### Do:
- **Always use `runtime.Rand()`** for randomness in your workflows
- **Check for errors** when calling `runtime.Rand()`
```go
rnd, err := runtime.Rand()
if err != nil {
	return fmt.Errorf("failed to get random generator: %w", err)
}
```
### Don't:
- **Don't use Go's global `rand` package** directly. Always get your random generator from `runtime.Rand()` first.
## Mode-aware behavior
The randomness provided by `runtime.Rand()` is **mode-aware**. The examples above demonstrate DON mode (the default execution mode for workflows). There is also a Node mode with different random behavior, used in advanced scenarios. Each mode provides a different type of randomness.
### DON mode (default)
The examples above all use DON mode. In this mode:
- All nodes generate the **same** random sequence
- Enables consensus on random values
- This is the mode your main workflow callback runs in
### Node mode
When using `cre.RunInNodeMode`, you can access Node mode randomness:
- Each node generates **different** random values
- Useful for scenarios where per-node variability is accepted
- Access via `nodeRuntime.Rand()` inside the Node mode function
**Example:**
```go
resultPromise := cre.RunInNodeMode(config, runtime,
	func(config *Config, nodeRuntime cre.NodeRuntime) (int, error) {
		rnd, err := nodeRuntime.Rand()
		if err != nil {
			return 0, err
		}
		// Each node generates a different value
		return rnd.Intn(100), nil
	},
	cre.ConsensusMedianAggregation[int](),
)
```
### Important: Mode isolation
Random generators are tied to the mode they were created in. **Do not** attempt to use a random generator from one mode in another mode—it will cause a panic and crash your workflow.
## FAQ
**Is the randomness cryptographically secure?**
The randomness is sourced from the host environment's secure random generator, but the standard Go `*rand.Rand` object is **not** intended for cryptographic purposes. For cryptographic operations, use dedicated crypto libraries.
**What happens if I try to use randomness in the wrong mode?**
The SDK will panic with the error: `"random cannot be used outside the mode it was created in"`. This is intentional—it prevents subtle consensus bugs.
**Can I use the same random generator across multiple calls?**
Yes. Once you call `runtime.Rand()` and get a `*rand.Rand` object, you can reuse it within the same execution mode. Each call to methods like `Intn()` will produce the next value in the deterministic sequence.
---
# CLI Reference
Source: https://docs.chain.link/cre/reference/cli
Last Updated: 2025-11-04
The CRE Command Line Interface (CLI) is your primary tool for developing, testing, deploying, and managing workflows. It handles project setup, contract binding generation (Go workflows only), local simulation, and workflow lifecycle management.
## Global flags
These flags can be used with any `cre` command.
| Flag | Description |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `-h, --help` | Displays help information for any command |
| `-e, --env` | Specifies the path to your `.env` file (default: `".env"`) |
| `-T, --target` | Sets the target environment from your configuration files |
| `-R, --project-root` | Specifies the path to the project root directory. By default, the CLI automatically finds the project root by searching for `project.yaml` in the current directory and parent directories |
| `-v, --verbose` | Enables verbose logging to print `DEBUG` level logs |
## Commands overview
### Authentication
Manage your authentication and account credentials.
- **`cre login`** — Authenticate with the CRE UI and save credentials locally
- **`cre logout`** — Revoke authentication tokens and remove local credentials
- **`cre whoami`** — Show your current account details
[View authentication commands →](/cre/reference/cli/authentication)
***
### Project Setup
Initialize projects and generate contract bindings (Go only).
- **`cre init`** — Initialize a new CRE project with an interactive setup guide
- **`cre generate-bindings`** (Go only) — Generate Go bindings from contract ABI files for type-safe contract interactions
[View project setup commands →](/cre/reference/cli/project-setup)
***
### Account Management
Manage your linked public key addresses for workflow operations.
- **`cre account link-key`** — Link a public key address to your account
- **`cre account list-key`** — List workflow owners linked to your organization
- **`cre account unlink-key`** — Unlink a public key address from your account
[View account management commands →](/cre/reference/cli/account)
***
### Workflow Commands
Manage workflows throughout their entire lifecycle.
- **`cre workflow simulate`** — Compile and execute workflows in a local simulation environment
- **`cre workflow deploy`** — Deploy a workflow to the Workflow Registry contract
- **`cre workflow activate`** — Activate a workflow on the Workflow Registry contract
- **`cre workflow pause`** — Pause a workflow on the Workflow Registry contract
- **`cre workflow delete`** — Delete all versions of a workflow from the Workflow Registry
[View workflow commands →](/cre/reference/cli/workflow)
***
### Secrets Management
Manage secrets stored in the Vault DON for use in your workflows.
- **`cre secrets create`** — Create new secrets from a YAML file
- **`cre secrets update`** — Update existing secrets
- **`cre secrets delete`** — Delete secrets
- **`cre secrets list`** — List secret identifiers in a namespace
[View secrets management commands →](/cre/reference/cli/secrets)
***
### Utilities
Additional utility commands.
- **`cre update`** — Update the CRE CLI to the latest version
- **`cre version`** — Print the current version of the CRE CLI
[View utility commands →](/cre/reference/cli/utilities)
## Typical development workflow
The typical workflow development process uses these commands in sequence:
1. **`cre init`** — Initialize your project
2. **`cre generate-bindings`** — Generate contract bindings from ABIs (Go workflows only, if interacting with contracts)
3. **`cre workflow simulate`** — Test your workflow locally
4. **`cre workflow deploy`** — Deploy your workflow to the registry
5. **`cre workflow activate`** / **`cre workflow pause`** — Control workflow execution
## Learn more
- **[CLI Installation](/cre/getting-started/cli-installation)** — How to install and set up the CRE CLI
- **[Getting Started](/cre/getting-started)** — Step-by-step tutorials using the CLI
- **[Project Configuration](/cre/reference/project-configuration)** — Understanding project structure and configuration files
---
# Authentication Commands
Source: https://docs.chain.link/cre/reference/cli/authentication
Last Updated: 2025-11-04
The authentication commands manage your CRE credentials and account information.
## `cre login`
Starts the authentication flow. This command opens your browser for user login and saves your credentials locally.
**Usage:**
```bash
cre login
```
**Authentication steps:**
1. The CLI opens your default web browser to the login page
2. Enter your account email address
3. Enter your account password
4. Enter your one-time password (OTP) from your authenticator app
5. The CLI automatically captures and saves your credentials locally
## `cre logout`
Revokes authentication tokens and removes local credentials. This invalidates the current authentication tokens and deletes stored credentials from your machine.
**Usage:**
```bash
cre logout
```
## `cre whoami`
Shows your current account details. This command fetches and displays your account information, including your email and organization ID.
**Usage:**
```bash
cre whoami
```
## Authentication workflow
The typical authentication flow:
1. **`cre login`** — Authenticate with the CRE UI (browser-based)
2. **`cre whoami`** — Verify your authentication and account details
3. **Perform CLI operations** — Deploy, manage workflows, etc.
4. **`cre logout`** — (Optional) Revoke credentials when done
## Learn more
- [Getting Started](/cre/getting-started) — Includes authentication setup in the initial steps
- [Account Management](/cre/reference/cli/account) — Link keys after authentication
- [Deploying Workflows](/cre/guides/operations/deploying-workflows) — Requires authentication
---
# Account Management
Source: https://docs.chain.link/cre/reference/cli/account
Last Updated: 2025-11-04
The `cre account` commands manage your linked public key addresses for workflow operations. These commands allow you to link wallet addresses to your CRE account, list linked addresses, and unlink them when needed.
## `cre account link-key`
Links a public key address to your account for workflow operations. This command reads your private key from the `.env` file (for EOA) or uses your configuration (for multi-sig), derives the public address, and registers it onchain in the Workflow Registry contract.
For a complete step-by-step guide with examples, see [Linking Wallet Keys](/cre/organization/linking-keys).
**Usage:**
```bash
cre account link-key [flags]
```
**Flags:**
| Flag | Description |
| ------------------- | --------------------------------------------------------------- |
| `-l, --owner-label` | Label for the workflow owner |
| `--unsigned` | Return the raw transaction instead of sending it to the network |
| `--yes` | Skip the confirmation prompt and proceed with the operation |
**Interactive flow:**
When you run this command, the CLI will:
1. Extract your public address from the private key in your `.env` file
2. Prompt you to provide a label for this address (e.g., "Production Wallet")
3. Check if the address is already linked
4. Display transaction details (chain, contract, estimated gas cost)
5. Ask for confirmation to execute the transaction
6. Submit the transaction and provide a block explorer link
**Examples:**
- Interactive mode (recommended)
```bash
cre account link-key
```
- Non-interactive mode with label
```bash
cre account link-key --owner-label "My Production Wallet" --yes
```
## `cre account list-key`
Lists all public key addresses linked to your organization. This shows the verification status, chain information, and contract addresses for each linked key.
For a complete guide on linking and managing keys, see [Linking Wallet Keys](/cre/organization/linking-keys).
**Usage:**
```bash
cre account list-key
```
**Example output:**
```bash
Workflow owners retrieved successfully:
Linked Owners:
1. my-production-wallet
Owner Address:
Status: VERIFICATION_STATUS_SUCCESSFULL
Verified At:
Chain Selector:
Contract Address:
```
## `cre account unlink-key`
Unlinks a previously linked public key address from your account. This is a destructive operation that removes the address from the Workflow Registry contract and deletes all workflows registered under that address.
For a complete guide on linking and unlinking keys, see [Linking Wallet Keys](/cre/organization/linking-keys).
**Usage:**
```bash
cre account unlink-key [flags]
```
**Flags:**
| Flag | Description |
| ------------ | --------------------------------------------------------------- |
| `--unsigned` | Return the raw transaction instead of sending it to the network |
| `--yes` | Skip the confirmation prompt and proceed with the operation |
**Interactive flow:**
When you run this command, the CLI will:
1. Extract your public address from the private key in your `.env` file (for EOA) or use your configuration (for multi-sig)
2. **Display a destructive action warning** about deleting all workflows
3. Ask for first confirmation to proceed with unlinking
4. Display transaction details (chain, contract, estimated gas cost)
5. Ask for second confirmation to execute the transaction
6. Submit the transaction and provide a block explorer link
---
# Workflow Commands
Source: https://docs.chain.link/cre/reference/cli/workflow
Last Updated: 2025-11-04
The `cre workflow` commands manage workflows throughout their entire lifecycle, from local testing to deployment and ongoing management.
## `cre workflow simulate`
Compiles your workflow to WASM and executes it in a local simulation environment. This is the core command for testing and debugging your workflow.
**Usage:**
```bash
cre workflow simulate <workflow-folder-path> [flags]
```
**Arguments:**
- `<workflow-folder-path>` — (Required) Workflow folder name (e.g., `my-workflow`) or path (e.g., `./my-workflow`). When run from the project root, you can use just the folder name. The CLI looks for a `workflow.yaml` file in the workflow directory.
**Flags:**
| Flag | Description |
| ------------------------- | -------------------------------------------------------------------------------------------------- |
| `--broadcast` | Broadcast onchain write transactions (default: `false`). Without this flag, a dry run is performed |
| `-g, --engine-logs` | Enable non-fatal engine logging |
| `--non-interactive` | Run without prompts; requires `--trigger-index` and inputs for the selected trigger type |
| `--trigger-index <index>` | Index of the trigger to run (0-based). Required when using `--non-interactive` |
| `--http-payload <payload>` | HTTP trigger payload as JSON string or path to JSON file (with or without `@` prefix) |
| `--evm-tx-hash <hash>` | EVM trigger transaction hash (`0x...`). For EVM log triggers |
| `--evm-event-index <index>` | EVM trigger log index (0-based). For EVM log triggers |
**Examples:**
- Basic simulation
```bash
cre workflow simulate ./my-workflow --target local-simulation
```
- Broadcast real onchain transactions
```bash
cre workflow simulate ./my-workflow --broadcast --target local-simulation
```
## `cre workflow deploy`
Deploys a workflow to the Workflow Registry contract. This command compiles your workflow, uploads the artifacts to the CRE Storage Service, and registers the workflow onchain.
**Usage:**
```bash
cre workflow deploy <workflow-folder-path> [flags]
```
**Arguments:**
- `<workflow-folder-path>` — (Required) Workflow folder name (e.g., `my-workflow`) or path (e.g., `./my-workflow`)
**Flags:**
| Flag | Description |
| ------------------ | ---------------------------------------------------------------------------------------------- |
| `-r, --auto-start` | Activate the workflow immediately after deployment (default: `true`) |
| `-o, --output` | Output file for the compiled WASM binary encoded in base64 (default: `"./binary.wasm.br.b64"`) |
| `--unsigned` | Return the raw transaction instead of sending it to the network |
| `--yes` | Skip the confirmation prompt and proceed with the operation |
**Examples:**
- Deploy workflow with auto-start (default behavior)
```bash
cre workflow deploy my-workflow --target production-settings
```
- Deploy without auto-starting
```bash
cre workflow deploy my-workflow --auto-start=false --target production-settings
```
- Deploy and save the compiled binary to a custom location
```bash
cre workflow deploy my-workflow --output ./dist/workflow.wasm.br.b64
```
For more details, see [Deploying Workflows](/cre/guides/operations/deploying-workflows).
## `cre workflow activate`
Changes the workflow status to active on the Workflow Registry contract. Active workflows can respond to their configured triggers.
**Usage:**
```bash
cre workflow activate <workflow-folder-path> [flags]
```
**Arguments:**
- `<workflow-folder-path>` — (Required) Workflow folder name (e.g., `my-workflow`) or path (e.g., `./my-workflow`)
**Flags:**
| Flag | Description |
| ------------ | --------------------------------------------------------------- |
| `--unsigned` | Return the raw transaction instead of sending it to the network |
| `--yes` | Skip the confirmation prompt and proceed with the operation |
**Example:**
```bash
cre workflow activate ./my-workflow --target production-settings
```
For more details, see [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows).
## `cre workflow pause`
Changes the workflow status to paused on the Workflow Registry contract. Paused workflows will not respond to triggers.
**Usage:**
```bash
cre workflow pause <workflow-folder-path> [flags]
```
**Arguments:**
- `<workflow-folder-path>` — (Required) Workflow folder name (e.g., `my-workflow`) or path (e.g., `./my-workflow`)
**Flags:**
| Flag | Description |
| ------------ | --------------------------------------------------------------- |
| `--unsigned` | Return the raw transaction instead of sending it to the network |
| `--yes` | Skip the confirmation prompt and proceed with the operation |
**Example:**
```bash
cre workflow pause ./my-workflow --target production-settings
```
For more details, see [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows).
## `cre workflow delete`
Deletes a workflow from the Workflow Registry.
**Usage:**
```bash
cre workflow delete <workflow-folder-path> [flags]
```
**Arguments:**
- `<workflow-folder-path>` — (Required) Workflow folder name (e.g., `my-workflow`) or path (e.g., `./my-workflow`)
**Flags:**
| Flag | Description |
| ------------ | --------------------------------------------------------------- |
| `--unsigned` | Return the raw transaction instead of sending it to the network |
| `--yes` | Skip the confirmation prompt and proceed with the operation |
**Example:**
```bash
cre workflow delete ./my-workflow --target production-settings
```
For more details, see [Deleting Workflows](/cre/guides/operations/deleting-workflows).
## Workflow lifecycle
The typical workflow lifecycle uses these commands in sequence:
1. **Develop locally** — Write and iterate on your workflow code
2. **`cre workflow simulate`** — Test your workflow in a local simulation environment
3. **`cre workflow deploy`** — Deploy your workflow to the registry (auto-starts by default)
4. **`cre workflow pause`** / **`cre workflow activate`** — Control workflow execution as needed
5. **`cre workflow deploy`** (again) — Deploy updates (replaces the existing workflow)
6. **`cre workflow delete`** — Remove the workflow when no longer needed
## Learn more
- [Deploying Workflows](/cre/guides/operations/deploying-workflows) — Detailed deployment guide
- [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows) — Managing workflow state
- [Updating Deployed Workflows](/cre/guides/operations/updating-deployed-workflows) — Version management
- [Deleting Workflows](/cre/guides/operations/deleting-workflows) — Cleanup and removal
---
# Secrets Management Commands
Source: https://docs.chain.link/cre/reference/cli/secrets
Last Updated: 2025-11-04
The `cre secrets` commands manage secrets stored in the Vault DON (Decentralized Oracle Network) for deployed workflows. These commands allow you to create, update, delete, and list secrets that your workflows can access at runtime.
## Namespaces
Secrets are organized into **namespaces**, which act as logical groupings (e.g., `"main"`, `"staging"`, `"production"`). All secrets are stored in the `"main"` namespace by default. Currently, `create`, `update`, and `delete` commands only support the default namespace. Custom namespace support may be added in future CLI versions.
## cre secrets create
Creates new secrets in the Vault DON from a YAML file.
### Usage
```bash
cre secrets create [SECRETS_FILE_PATH] [flags]
```
### Arguments
- `SECRETS_FILE_PATH` — (Required) Path to a YAML file containing the secrets to create
### Flags
| Flag | Type | Default | Description |
| ------------ | -------- | ------- | --------------------------------------------------------------- |
| `--timeout` | duration | `48h` | Timeout for the operation (e.g., `30m`, `2h`, `48h`). Max: `7d` |
| `--unsigned` | boolean | `false` | Generate raw transaction data for multi-sig wallets |
### Input file format
YAML file with `secretsNames` structure:
```yaml
secretsNames:
API_KEY:
- API_KEY_VALUE
DATABASE_URL:
- DATABASE_URL_VALUE
```
- `secretsNames` — Top-level key containing all secrets
- Each secret key (e.g., `API_KEY`) maps to an array containing an environment variable name
- Secret values are read from environment variables or `.env` file
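As an illustration, for the YAML above the CLI would resolve `API_KEY_VALUE` and `DATABASE_URL_VALUE` from your shell environment or `.env` file. A matching `.env` might look like this (the values shown are placeholders):

```bash
API_KEY_VALUE=your-actual-api-key
DATABASE_URL_VALUE=postgres://user:password@host:5432/dbname
```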
### Examples
- Create secrets from YAML file
```bash
cre secrets create my-secrets.yaml --target production-settings
```
- Create secrets with custom timeout
```bash
cre secrets create my-secrets.yaml --timeout 1h
```
- Create secrets for multi-sig wallets
```bash
cre secrets create my-secrets.yaml --unsigned
```
## cre secrets update
Updates existing secrets in the Vault DON from a YAML file.
### Usage
```bash
cre secrets update [SECRETS_FILE_PATH] [flags]
```
### Arguments
- `SECRETS_FILE_PATH` — (Required) Path to a YAML file containing the secrets to update
### Flags
| Flag | Type | Default | Description |
| ------------ | -------- | ------- | --------------------------------------------------------------- |
| `--timeout` | duration | `48h` | Timeout for the operation (e.g., `30m`, `2h`, `48h`). Max: `7d` |
| `--unsigned` | boolean | `false` | Generate raw transaction data for multi-sig wallets |
### Input file format
Same YAML format as `create`.
### Examples
- Update secrets
```bash
cre secrets update my-secrets.yaml --target production-settings
```
- Update secrets with custom timeout
```bash
cre secrets update my-secrets.yaml --timeout 6h
```
## cre secrets delete
Deletes secrets from the Vault DON based on a YAML file.
### Usage
```bash
cre secrets delete [SECRETS_FILE_PATH] [flags]
```
### Arguments
- `SECRETS_FILE_PATH` — (Required) Path to a YAML file containing the secrets to delete
### Flags
| Flag | Type | Default | Description |
| ------------ | -------- | ------- | --------------------------------------------------------------- |
| `--timeout` | duration | `48h` | Timeout for the operation (e.g., `30m`, `2h`, `48h`). Max: `7d` |
| `--unsigned` | boolean | `false` | Generate raw transaction data for multi-sig wallets |
### Input file format
YAML file with a simple list of secret identifiers to delete:
```yaml
secretsNames:
- API_KEY
- OLD_SECRET
```
### Example
```bash
cre secrets delete secrets-to-delete.yaml --target production-settings
```
## cre secrets list
Lists all secret identifiers for your owner address in a specific namespace.
### Usage
```bash
cre secrets list [flags]
```
### Flags
| Flag | Type | Default | Description |
| ------------- | -------- | -------- | --------------------------------------------------------------- |
| `--namespace` | string | `"main"` | Namespace to list secrets from |
| `--timeout` | duration | `48h` | Timeout for the operation (e.g., `30m`, `2h`, `48h`). Max: `7d` |
| `--unsigned` | boolean | `false` | Generate raw transaction data for multi-sig wallets |
### Examples
- List secrets in default namespace
```bash
cre secrets list --target production-settings
```
- List secrets in specific namespace
```bash
cre secrets list --namespace production
```
### Output
Returns secret identifiers (not values) for the specified namespace:
```
Secret identifiers in namespace 'main':
- API_KEY
- DATABASE_URL
- WEBHOOK_SECRET
```
## Using with multi-sig wallets
All commands support the `--unsigned` flag for multi-sig operations:
```bash
cre secrets create my-secrets.yaml --unsigned
```
When `--unsigned` is used:
1. CLI generates raw transaction data instead of broadcasting
2. Transaction payload is returned for submission through your multi-sig interface
3. After multi-sig confirmation, the secrets operation proceeds
For details, see [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets).
## Learn more
- [Managing Secrets](/cre/guides/workflow/secrets) — Overview and decision tree for secrets management
- [Using Secrets in Simulation](/cre/guides/workflow/secrets/using-secrets-simulation) — For local development
- [Using Secrets with Deployed Workflows](/cre/guides/workflow/secrets/using-secrets-deployed) — Complete guide with examples
- [Managing Secrets with 1Password](/cre/guides/workflow/secrets/managing-secrets-1password) — Best practice for secure management
- [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets) — Multi-sig configuration
---
# Utility Commands
Source: https://docs.chain.link/cre/reference/cli/utilities
Last Updated: 2025-11-04
Utility commands provide helpful information and troubleshooting capabilities.
## `cre update`
Updates the CRE CLI to the latest version. This command automatically downloads and installs the newest release, making it easy to stay up to date.
**Usage:**
```bash
cre update
```
**Behavior:**
- Checks for the latest available version on GitHub
- Compares it with your currently installed version
- Automatically downloads and installs the update if a newer version is available
- Downloads the appropriate binary for your operating system and architecture
- Replaces the existing CLI binary with the new version
## `cre version`
Prints the current version of the CRE CLI.
**Usage:**
```bash
cre version
```
**Example output:**
```bash
cre version v1.0.0
```
## Learn more
- [CLI Installation](/cre/getting-started/cli-installation) — How to install and update the CRE CLI
- [CLI Reference](/cre/reference/cli) — Complete CLI command reference
---
# Avoiding Non-Determinism in Workflows
Source: https://docs.chain.link/cre/concepts/non-determinism-go
Last Updated: 2025-11-04
## The problem: Why determinism matters
When your workflow runs in DON mode, multiple nodes execute the same code independently. These nodes must reach consensus on the results before proceeding. **If nodes execute different code paths, they generate different request IDs for capability calls, and consensus fails.**
The failure pattern: Code diverges → Different request IDs → No quorum → Workflow fails
## Quick reference: Common pitfalls
| Don't Use | Use Instead |
| -------------------------------- | -------------------------------------------- |
| Direct map iteration | Sort keys first, then iterate |
| `encoding/json` v2 | `encoding/json` v1 |
| Protocol Buffers `proto.Marshal` | `proto.MarshalOptions{Deterministic: true}` |
| `select` with multiple channels | Process channels in deterministic order |
| `time.Now()` or `time` package | `runtime.Now()` |
| Go's `rand` package | `runtime.Rand()` |
| LLM free-text responses | Structured output with field-level consensus |
## 1. Map iteration
Go maps are **designed to iterate in random order** for security reasons. Each time you iterate over a map, the order may be different. This means different nodes will process items in different sequences, leading to divergent capability calls and consensus failure.
**The problem:** Direct map iteration produces unpredictable order across nodes.
**The solution:** Extract map keys, sort them, then iterate in the sorted order. This ensures all nodes process items in the same sequence.
## 2. JSON and data serialization
### JSON v2 non-determinism
The `encoding/json` v2 library uses random hashing for field order in hashmaps, making serialization non-deterministic. The same data structure can serialize to different JSON strings on different nodes.
**The solution:** Use `encoding/json` v1, which provides deterministic field ordering.
### Protocol Buffers serialization
The default `proto.Marshal` function does not guarantee deterministic output. Fields may be serialized in different orders across nodes.
**The solution:** Use `proto.MarshalOptions{Deterministic: true}.Marshal()` to ensure consistent serialization order across all nodes.
## 3. Concurrency and channel selection
Go's `select` statement with multiple channels introduces non-determinism. When multiple channels are ready, `select` picks one at random. Different nodes may select different channels, causing code paths to diverge.
**The problem:** `select` with multiple ready channels picks randomly, breaking consensus.
**The solution:** Process channels in a **fixed, deterministic order** instead of using `select`. Check channels sequentially in a consistent order across all nodes.
## 4. Time and dates
Never use Go's `time` package functions in DON mode. Nodes have different system clocks, causing divergence when calling `time.Now()` or similar functions.
**The problem:** Using `time.Now()` returns different values on each node.
**The solution:** Use `runtime.Now()` from the CRE SDK, which provides DON Time—a consensus-derived timestamp that all nodes agree on. See [Time in CRE](/cre/concepts/time-in-cre) for details.
## 5. Random number generation
Go's built-in `rand` package generates different random sequences on each node, making it impossible to reach consensus on values that depend on randomness.
**The problem:** Each node generates different random values, breaking consensus.
**The solution:** Use `runtime.Rand()` from the CRE SDK, which provides consensus-safe random number generation. All nodes generate the same sequence of random values, enabling consensus. See [Random in CRE](/cre/concepts/random-in-cre) for details.
## 6. Working with LLMs
Large Language Models (LLMs) generate different responses for the same prompt, even with temperature set to 0. This inherent non-determinism breaks consensus in workflows.
**The problem:** Free-text responses from LLMs will vary across nodes, making it impossible to reach agreement on the output.
**The solution:** Request **structured output** from the LLM (such as JSON with specific fields) rather than free-form text. Then use consensus aggregation on the structured fields. This approach allows nodes to agree on the key data points even if the exact text varies slightly.
## Best practices summary
### Do:
- Sort map keys before iteration
- Use `encoding/json` v1 for deterministic JSON serialization
- Use `proto.MarshalOptions{Deterministic: true}` for Protocol Buffers
- Process channels in a fixed, deterministic order
- Use `runtime.Now()` for all time operations
- Use `runtime.Rand()` for random number generation
- Request structured output from LLMs
### Don't:
- Iterate over maps directly without sorting keys
- Use `encoding/json` v2 (uses random hashing)
- Use `proto.Marshal` without deterministic options
- Use `select` with multiple channels for decision-making
- Use `time.Now()` or other `time` package functions
- Use Go's `rand` package directly
- Rely on free-text LLM responses
## Related concepts
- **[Time in CRE](/cre/concepts/time-in-cre)**: Learn about DON Time and why `runtime.Now()` is required
- **[Random in CRE](/cre/concepts/random-in-cre)**: Understand consensus-safe random number generation
- **[Consensus Computing](/cre/concepts/consensus-computing)**: Deep dive into how nodes reach agreement
---
# Part 1: Project Setup & Simulation
Source: https://docs.chain.link/cre/getting-started/part-1-project-setup-go
Last Updated: 2025-11-04
In this first part, you'll go from an empty directory to a fully initialized CRE project and [simulate](/cre/guides/operations/simulating-workflows) your first, minimal workflow. The goal is to get a quick "win" and familiarize yourself with the core project structure and development loop.
## What you'll do
- Initialize a new project using `cre init`.
- Explore the generated project structure and workflow code.
- Configure your workflow for simulation.
- Run your first local simulation with `cre workflow simulate`.
## Prerequisites
Before you begin, ensure you have the following:
- **CRE CLI**: See the [Installation Guide](/cre/getting-started/cli-installation/macos-linux) for details.
- **CRE account & authentication**: You must have a CRE account and be logged in with the CLI. See [Create your account](/cre/account/creating-account) and [Log in with the CLI](/cre/account/cli-login) for instructions.
- **Go**: You must have Go version 1.24.4 or higher installed. Check your version with `go version`. See [Install Go](https://go.dev/doc/install) for instructions.
- **Funded Sepolia Account**: An account with Sepolia ETH to pay for transaction gas fees. Visit [faucets.chain.link](https://faucets.chain.link) to get Sepolia ETH.
## Step 1: Verify your authentication
Before initializing your project, verify that you're logged in to the CRE CLI:
```bash
cre whoami
```
**Expected output:**
- If you're authenticated, you'll see your account details:
```bash
Account details retrieved:
Email: email@domain.com
Organization ID: org_AbCdEfGhIjKlMnOp
```
- If you're not logged in, you'll receive an error message prompting you to run `cre login`:
```bash
Error: failed to attach credentials: failed to load credentials: you are not logged in, try running cre login
```
Run the login command and follow the prompts:
```bash
cre login
```
See [Logging in with the CLI](/cre/account/cli-login) for detailed instructions if you need help.
## Step 2: Initialize your project
The CRE CLI provides an `init` command to scaffold a new project. It's an interactive process that will ask you for a project name, a workflow template, and a name for your first workflow.
1. **In your terminal, navigate to a parent directory where you want your new CRE project to live.**
2. **Run the `init` command.** The CLI will guide you through the setup process:
```bash
cre init
```
3. **Provide the following details when prompted:**
- **Project name**: onchain-calculator
- **Language**: Select `Golang` and press Enter.
- **Pick a workflow template**: Use the arrow keys to select `Helloworld: A Golang Hello World example` and press Enter. We start from this minimal template so you can learn every configuration step.
- **Workflow name**: my-calculator-workflow
The CLI will then create a new `onchain-calculator` directory and initialize your first workflow within it.
## Step 3: Explore the generated files
The `init` command creates a directory with a standard structure and generates your first workflow code. Let's explore what was created.
### Project structure
Your new project has the following structure:
```
onchain-calculator/
├── contracts/
│   └── evm/
│       └── src/
│           ├── abi/
│           └── keystone/
├── my-calculator-workflow/
│   ├── config.production.json
│   ├── config.staging.json
│   ├── main.go
│   ├── README.md
│   └── workflow.yaml
├── .env
├── .gitignore
├── go.mod
├── go.sum
├── project.yaml
└── secrets.yaml
```
- **Project**: The top-level directory (e.g., `onchain-calculator/`).
- It contains project-wide files like `project.yaml`, which holds shared configurations for all workflows within the project.
- The entire project is a single Go module.
- A project can contain multiple workflows.
- **Workflow**: A subdirectory (e.g., `my-calculator-workflow/`) that contains source code and configuration. It functions as a Go package within the main project-level Go module.
Here are the key files and their roles:
| File | Role |
| ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `project.yaml` | The global configuration file. Contains shared settings like RPC URLs for different environments (called `targets`). |
| `.env` | Stores secrets and environment variables, like your private key. Never commit this file to version control. |
| `go.mod`/`go.sum` | Manages the dependencies for the entire project, including all workflows and contract bindings. |
| `contracts/evm/src/abi/` | Directory where you place contract ABI files (`.abi`). These are used to generate bindings, which are type-safe Go packages that make it easy to interact with your smart contracts from your workflow. The generated bindings will be saved to a new `contracts/evm/src/generated/` directory. |
| `contracts/evm/src/keystone/` | Directory for Keystone-related contract files. |
| `secrets.yaml` | An empty secrets configuration file created by the CLI. You'll learn how to use this for managing secrets in more advanced guides. |
| `my-calculator-workflow/` | A directory containing the source code and configuration for a single workflow. It is a package within the project's main Go module. |
| `├── workflow.yaml` | Contains configurations specific to this workflow, such as its name and workflow artifacts (entry point path, config file path, secrets file path). The `workflow-artifacts` section tells the CLI where to find your workflow's files. |
| `├── config.staging.json` | Contains parameters for your workflow when using the `staging-settings` target, which can be accessed in your code via the `Config` object. |
| `├── config.production.json` | Contains parameters for your workflow when using the `production-settings` target, which can be accessed in your code via the `Config` object. |
| `└── main.go` | The heart of your workflow where you'll write your Go logic. |
You don't need to understand every file and directory right now; this guide is designed to introduce each concept when you actually need it. For now, let's look at the workflow code.
### The workflow code
The `init` command created a `main.go` file with a minimal `main` function. Let's replace the contents of this file with the code for a basic "Hello World!" workflow.
This code defines a `Config` struct to hold parameters from our config file. It then configures a [cron trigger](/cre/reference/sdk/triggers/cron-trigger) to run on the schedule provided in the config, and registers a simple handler that logs a message.
Open `onchain-calculator/my-calculator-workflow/main.go` and replace its entire content with the following code:
Code snippet for onchain-calculator/my-calculator-workflow/main.go:
```go
//go:build wasip1

package main

import (
	"log/slog"

	"github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron"
	"github.com/smartcontractkit/cre-sdk-go/cre"
	"github.com/smartcontractkit/cre-sdk-go/cre/wasm"
)

// Config struct defines the parameters that can be passed to the workflow.
type Config struct {
	Schedule string `json:"schedule"`
}

// The result of our workflow, which is empty for now.
type MyResult struct{}

// onCronTrigger is the callback function that gets executed when the cron trigger fires.
func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()
	logger.Info("Hello, Calculator! Workflow triggered.")
	return &MyResult{}, nil
}

// InitWorkflow is the required entry point for a CRE workflow.
// The runner calls this function to initialize the workflow and register its handlers.
func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
	return cre.Workflow[*Config]{
		cre.Handler(
			// Use the schedule from our config file.
			cron.Trigger(&cron.Config{Schedule: config.Schedule}),
			onCronTrigger,
		),
	}, nil
}

// main is the entry point for the WASM binary.
func main() {
	// The runner is initialized with our Config struct.
	// It will automatically parse the config.json file into this struct.
	wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```
## Step 4: Configure your workflow
Now that you've explored the generated files, let's configure your workflow for simulation. You'll need to adjust a few configuration files.
### Update config files
The CLI generates separate config files for each target environment. Your workflow code can access the parameters from whichever config file corresponds to the target you're using.
Inside the `my-calculator-workflow` directory, open `config.staging.json` and add the `schedule` parameter that our `main.go` code expects:
```json
{
  "schedule": "0 */1 * * * *"
}
```
Since we'll be using the `staging-settings` target for this guide, you only need to update `config.staging.json` for now. The `config.production.json` file can remain empty.
### Review `workflow.yaml`
This file tells the CLI where to find your workflow files. The `cre init` command created this file with default values. Open `my-calculator-workflow/workflow.yaml` and you'll see:
```yaml
# ==========================================================================
staging-settings:
  user-workflow:
    workflow-name: "my-calculator-workflow-staging"
  workflow-artifacts:
    workflow-path: "."
    config-path: "./config.staging.json"
    secrets-path: ""
# ==========================================================================
production-settings:
  user-workflow:
    workflow-name: "my-calculator-workflow-production"
  workflow-artifacts:
    workflow-path: "."
    config-path: "./config.production.json"
    secrets-path: ""
```
**Understanding the sections:**
- **Target names** (`staging-settings`, `production-settings`): These are environment configuration sets. The `cre init` command pre-populates your `workflow.yaml` with these two common targets as a starting point, but you can name targets whatever you want (e.g., `dev`, `test`, `prod`). When running CLI commands, you specify which target to use with the `--target` flag.
- **`workflow-name`**: Each target has its own workflow name with a suffix (e.g., `-staging`, `-production`). This allows you to deploy the same workflow to different environments with distinct identities.
- **`workflow-path: "."`**: The entry point for your Go code (`.` means the current directory)
- **`config-path`**: Each target points to its own config file (`config.staging.json` or `config.production.json`)
- **`secrets-path: ""`**: The location of your secrets file (empty for now; you'll learn about secrets in more advanced guides)
You don't need to modify this file for now.
For this guide, we'll use `staging-settings` for local simulation. When you run `cre workflow simulate my-calculator-workflow --target staging-settings`, the CLI reads the configuration from the `staging-settings` section of this file.
### Set up your private key
The simulator requires a private key to initialize its environment, even for workflows that don't interact with the blockchain yet. This key will be used in later parts of this guide to read from and send transactions to the Sepolia testnet.
1. Open the `.env` file located in your `onchain-calculator/` project root directory.
2. Add your funded Sepolia account's private key:
```bash
# Replace with your own private key for your funded Sepolia account
CRE_ETH_PRIVATE_KEY=YOUR_64_CHARACTER_PRIVATE_KEY_HERE
```
## Step 5: Run your first simulation
Now that your workflow is configured and dependencies are installed, you can run the simulation. [Workflow simulation](/cre/guides/operations/simulating-workflows) is a local execution environment that compiles your code to WebAssembly and runs it on your machine, allowing you to test and debug before deploying to a live network.
Run the `simulate` command from your project root directory (the `onchain-calculator/` folder):
```bash
cre workflow simulate my-calculator-workflow --target staging-settings
```
This command compiles your Go code, uses the `staging-settings` target configuration from `workflow.yaml`, and spins up a local simulation environment.
## Step 6: Review the output
After the workflow compiles, the simulator detects the single trigger you defined in your code and immediately runs the workflow.
```bash
Workflow compiled
2025-11-03T22:34:11Z [SIMULATION] Simulator Initialized
2025-11-03T22:34:11Z [SIMULATION] Running trigger trigger=cron-trigger@1.0.0
2025-11-03T22:34:11Z [USER LOG] msg="Hello, Calculator! Workflow triggered."
Workflow Simulation Result:
{}
2025-11-03T22:34:11Z [SIMULATION] Execution finished signal received
2025-11-03T22:34:11Z [SIMULATION] Skipping WorkflowEngineV2
```
- **`[USER LOG]`**: This is the output from your own code—in this case, the `logger.Info()` call. This is where you will look for your custom log messages.
- **`[SIMULATION]`**: These are system-level messages from the simulator showing its internal state (initialization, trigger execution, completion).
- **`Workflow Simulation Result: {}`**: This is the final return value of your workflow. It's currently an empty object, but you will populate it in the next part of this guide.
Congratulations! You've built and simulated your first CRE workflow from scratch.
## Next steps
In the next section, you'll build on this foundation by modifying the workflow to fetch real data from an external API.
- **[Part 2: Fetching Offchain Data](/cre/getting-started/part-2-fetching-data)**
---
# Part 2: Fetching Offchain Data
Source: https://docs.chain.link/cre/getting-started/part-2-fetching-data-go
Last Updated: 2025-11-04
In Part 1, you successfully built and ran a minimal workflow. Now, it's time to connect it to the outside world. In this section, you will modify your workflow to fetch data from a public API using the CRE SDK's [`http.Client`](/cre/reference/sdk/http-client).
## What you'll do
- Add a new URL to your workflow's config file.
- Learn about the `http.SendRequest` helper for offchain operations.
- Write a new function to fetch data from the public [`api.mathjs.org`](https://api.mathjs.org/) API.
- Integrate the offchain data into your main workflow logic.
## Step 1: Update your configuration
First, you need to add the API endpoint to your workflow's configuration. This allows you to easily change the URL without modifying your Go code.
Open the `config.staging.json` file in your `my-calculator-workflow` directory and add the `apiUrl` key. Your file should now look like this:
```json
{
  "schedule": "0 */1 * * * *",
  "apiUrl": "https://api.mathjs.org/v4/?expr=randomInt(1,101)"
}
```
This URL calls the public mathjs.org API and uses its `randomInt(min, max)` function to return a random integer between 1 and 100. Note that the upper bound is exclusive, so we use `101` to get values up to 100. The API returns the number as a raw string in the response body.
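The exclusive upper bound behaves like Go's `rand.Intn`. The following standalone sketch (not part of the workflow code) mirrors `randomInt(min, max)` purely for illustration:

```go
package main

import (
	"fmt"
	"math/rand"
)

// randomInt mirrors mathjs randomInt(min, max): min is inclusive, max is exclusive.
func randomInt(min, max int) int {
	return min + rand.Intn(max-min)
}

func main() {
	// randomInt(1, 101) always yields a value in [1, 100], matching the apiUrl above.
	for i := 0; i < 5; i++ {
		fmt.Println(randomInt(1, 101))
	}
}
```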
## Step 2: Understand the `http.SendRequest` pattern
Many offchain data sources are **non-deterministic**, meaning different nodes calling the same API might get slightly different answers due to timing, load balancing, or other factors. The `api.mathjs.org` API with the `randomInt` function is a perfect example—each call will return a different random number.
The CRE SDK solves this with `http.SendRequest`, a helper function that transforms these potentially varied results into a single, highly reliable result. It works like a "map-reduce" for the DON:
1. **Map**: You provide a function (e.g., `fetchMathResult`) that will be executed by every node in the DON independently. Each node "maps" the offchain world by fetching its own version of the data.
2. **Reduce**: You provide a consensus algorithm (e.g., [`ConsensusMedianAggregation`](/cre/reference/sdk/consensus#consensusmedianaggregationt)) that takes all the individual results and "reduces" them into a single, trusted outcome.
This pattern is fundamental to securely and reliably bringing offchain data into your workflow.
## Step 3: Add the HTTP fetch logic
Now, let's modify your `main.go` file. You will add a new function, `fetchMathResult`, that contains the logic for calling the API. You'll also update the `onCronTrigger` function to call the `http.SendRequest` helper.
Replace the entire content of `onchain-calculator/my-calculator-workflow/main.go` with the following code.
Code snippet for onchain-calculator/my-calculator-workflow/main.go:
```go
//go:build wasip1

package main

import (
	"fmt"
	"log/slog"
	"math/big"

	"github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron"
	"github.com/smartcontractkit/cre-sdk-go/cre"
	"github.com/smartcontractkit/cre-sdk-go/cre/wasm"
)

// Add the ApiUrl to your config struct
type Config struct {
	Schedule string `json:"schedule"`
	ApiUrl   string `json:"apiUrl"`
}

type MyResult struct {
	Result *big.Int
}

func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
	return cre.Workflow[*Config]{
		cre.Handler(
			cron.Trigger(&cron.Config{Schedule: config.Schedule}),
			onCronTrigger,
		),
	}, nil
}

// fetchMathResult is the function passed to the http.SendRequest helper.
// It contains the logic for making the request and parsing the response.
func fetchMathResult(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*big.Int, error) {
	req := &http.Request{
		Url:    config.ApiUrl,
		Method: "GET",
	}

	// Send the request using the provided sendRequester
	resp, err := sendRequester.SendRequest(req).Await()
	if err != nil {
		return nil, fmt.Errorf("failed to get API response: %w", err)
	}

	// The mathjs.org API returns the result as a raw string in the body.
	// We need to parse it into a big.Int.
	val, ok := new(big.Int).SetString(string(resp.Body), 10)
	if !ok {
		return nil, fmt.Errorf("failed to parse API response into big.Int")
	}

	return val, nil
}

// onCronTrigger is our main DON-level callback.
func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()
	logger.Info("Hello, Calculator! Workflow triggered.")

	client := &http.Client{}

	// Use the http.SendRequest helper to execute the offchain fetch.
	mathPromise := http.SendRequest(config, runtime, client,
		fetchMathResult,
		// The API returns a random number, so each node can get a different result.
		// We use Median Aggregation to find a median value.
		cre.ConsensusMedianAggregation[*big.Int](),
	)

	// Await the final, aggregated result.
	result, err := mathPromise.Await()
	if err != nil {
		return nil, err
	}

	logger.Info("Successfully fetched and aggregated math result", "result", result)

	return &MyResult{
		Result: result,
	}, nil
}

func main() {
	wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```
## Step 4: Sync your dependencies
1. **Sync Dependencies**: Your new code imports a new package. Run the following `go get` command to add the required dependency at its specific version:
```bash
go get github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http@v1.0.0-beta.0
```
2. **Clean up and organize your module files**:
After fetching the new dependencies, run `go mod tidy` to clean up the `go.mod` and `go.sum` files.
```bash
go mod tidy
```
## Step 5: Run the simulation and review the output
Run the `simulate` command from your project root directory (the `onchain-calculator/` folder). Because there is only one trigger, the simulator runs it automatically.
```bash
cre workflow simulate my-calculator-workflow --target staging-settings
```
The output shows the new user logs from your workflow, followed by the final `Workflow Simulation Result`.
```bash
Workflow compiled
2025-11-03T22:35:51Z [SIMULATION] Simulator Initialized
2025-11-03T22:35:51Z [SIMULATION] Running trigger trigger=cron-trigger@1.0.0
2025-11-03T22:35:51Z [USER LOG] msg="Hello, Calculator! Workflow triggered."
2025-11-03T22:35:52Z [USER LOG] msg="Successfully fetched and aggregated math result" result=50
Workflow Simulation Result:
{
"Result": 50
}
2025-11-03T22:35:52Z [SIMULATION] Execution finished signal received
2025-11-03T22:35:52Z [SIMULATION] Skipping WorkflowEngineV2
```
- **`[USER LOG]`**: You can now see both of your `logger.Info()` calls in the output. The second log shows the fetched and aggregated value (`result=50`), confirming that the API call and consensus worked correctly.
- **`[SIMULATION]`**: These are system-level messages from the simulator showing its internal state.
- **`Workflow Simulation Result`**: This is the final return value of your workflow. The `Result` field now contains the aggregated value from the API as a number.
Your workflow can now fetch and process data from an external source.
## Next Steps
Next, you'll learn how to interact with a smart contract to read data from the blockchain and combine it with this offchain result.
- **[Part 3: Reading an Onchain Value](/cre/getting-started/part-3-reading-onchain-value)**
---
# Part 3: Reading an Onchain Value
Source: https://docs.chain.link/cre/getting-started/part-3-reading-onchain-value-go
Last Updated: 2025-11-04
In the previous part, you successfully fetched data from an offchain API. Now, you will complete the "Onchain Calculator" by reading a value from a smart contract and combining it with your offchain result.
This part of the guide introduces the core pattern for all onchain interactions: **contract bindings**.
## What you'll do
- Configure your project with a Sepolia RPC URL.
- Create a new Go package for a contract binding.
- Use a binding to read a value from a deployed smart contract.
- Integrate the onchain value into your main workflow logic.
## Step 1: The smart contract
For this guide, we will interact with a simple `Storage` contract that has already been deployed to the Sepolia testnet. All it does is store a single `uint256` value.
Here is the Solidity source code for the contract:
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

contract Storage {
    uint256 public value;

    constructor(uint256 initialValue) {
        value = initialValue;
    }

    function get() public view returns (uint256) {
        return value;
    }
}
```
A version of this contract has been deployed to Sepolia at `0xa17CF997C28FF154eDBae1422e6a50BeF23927F4` with an `initialValue` of `22`.
## Step 2: Configure your environment
To interact with a contract on Sepolia, your workflow needs EVM chain details.
1. **Contract address and chain name**: Add the deployed contract's address and chain name to your `config.staging.json` file. We use an `evms` array to hold the configuration, which makes it easy to add more contracts (or chains) later.
```json
{
  "schedule": "0 */1 * * * *",
  "apiUrl": "https://api.mathjs.org/v4/?expr=randomInt(1,101)",
  "evms": [
    {
      "storageAddress": "0xa17CF997C28FF154eDBae1422e6a50BeF23927F4",
      "chainName": "ethereum-testnet-sepolia"
    }
  ]
}
```
2. **RPC URL**: For your workflow to interact with the blockchain, it needs an RPC endpoint. The `cre init` command has already configured a public Sepolia RPC URL in your `project.yaml` file for convenience. Let's take a look at what was generated:
Open your `project.yaml` file at the root of your project. Your `staging-settings` target should look like this:
```yaml
# in onchain-calculator/project.yaml
staging-settings:
  rpcs:
    - chain-name: ethereum-testnet-sepolia
      url: https://ethereum-sepolia-rpc.publicnode.com
```
This public RPC endpoint is sufficient for testing and following this guide. However, for production use or higher reliability, you should consider using a dedicated RPC provider like [Alchemy](https://www.alchemy.com/) or [Infura](https://infura.io/).
## Step 3: Create the contract binding
This is the core of onchain interaction. While you *could* call the generic [`evm.Client`](/cre/reference/sdk/evm-client) directly from your main workflow logic, this is not recommended. Doing so would require you to manually handle ABI encoding and decoding for every contract call, leading to code that is hard to read and prone to errors.
The recommended pattern is to create a **binding**: a separate Go package that acts as a type-safe client for your specific smart contract. The binding encapsulates all the low-level encoding/decoding logic, allowing your main workflow to remain clean and focused.
In this step, you will create a binding for the `Storage` contract.
1. **Add the contract ABI**: Create a new file called `Storage.abi` in the existing `abi` directory and add the contract's ABI JSON. From your project root (`onchain-calculator/`), run the following command:
```bash
touch contracts/evm/src/abi/Storage.abi
```
Open `contracts/evm/src/abi/Storage.abi` and paste the following ABI:
```json
[
  {
    "inputs": [{ "internalType": "uint256", "name": "initialValue", "type": "uint256" }],
    "stateMutability": "nonpayable",
    "type": "constructor"
  },
  {
    "inputs": [],
    "name": "get",
    "outputs": [{ "internalType": "uint256", "name": "", "type": "uint256" }],
    "stateMutability": "view",
    "type": "function"
  },
  {
    "inputs": [],
    "name": "value",
    "outputs": [{ "internalType": "uint256", "name": "", "type": "uint256" }],
    "stateMutability": "view",
    "type": "function"
  }
]
```
2. **Generate the Go binding**: Run the CRE binding generator from your project root (`onchain-calculator/`):
```bash
cre generate-bindings evm
```
This command will automatically generate type-safe Go bindings for all ABI files in your `contracts/evm/src/abi/` directory. It also automatically adds the required `evm` capability dependency to your `go.mod` file. The generated bindings will be placed in `contracts/evm/src/generated/`.
3. **Verify the generated files**: After running the command, you should see two new files in `contracts/evm/src/generated/storage/`:
- `Storage.go` — The main binding that provides a type-safe interface for interacting with your contract
- `Storage_mock.go` — A mock implementation for testing workflows without deploying contracts
## Step 4: Update your workflow logic
Now you can use your new binding in your `main.go` file to read the onchain value and complete the calculation.
Replace the entire content of `onchain-calculator/my-calculator-workflow/main.go` with the version below.
Code snippet for onchain-calculator/my-calculator-workflow/main.go:
```go
//go:build wasip1

package main

import (
	"fmt"
	"log/slog"
	"math/big"

	"onchain-calculator/contracts/evm/src/generated/storage"

	"github.com/ethereum/go-ethereum/common"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron"
	"github.com/smartcontractkit/cre-sdk-go/cre"
	"github.com/smartcontractkit/cre-sdk-go/cre/wasm"
)

// EvmConfig defines the configuration for a single EVM chain.
type EvmConfig struct {
	StorageAddress string `json:"storageAddress"`
	ChainName      string `json:"chainName"`
}

// Config struct now contains a list of EVM configurations.
// This makes it consistent with the structure used in Part 4.
type Config struct {
	Schedule string      `json:"schedule"`
	ApiUrl   string      `json:"apiUrl"`
	Evms     []EvmConfig `json:"evms"`
}

type MyResult struct {
	FinalResult *big.Int
}

func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
	return cre.Workflow[*Config]{
		cre.Handler(cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger),
	}, nil
}

func fetchMathResult(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*big.Int, error) {
	req := &http.Request{Url: config.ApiUrl, Method: "GET"}

	resp, err := sendRequester.SendRequest(req).Await()
	if err != nil {
		return nil, fmt.Errorf("failed to get API response: %w", err)
	}

	val, ok := new(big.Int).SetString(string(resp.Body), 10)
	if !ok {
		return nil, fmt.Errorf("failed to parse API response into big.Int")
	}

	return val, nil
}

func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()

	// Step 1: Fetch offchain data (from Part 2)
	client := &http.Client{}
	mathPromise := http.SendRequest(config, runtime, client, fetchMathResult, cre.ConsensusMedianAggregation[*big.Int]())
	offchainValue, err := mathPromise.Await()
	if err != nil {
		return nil, err
	}
	logger.Info("Successfully fetched offchain value", "result", offchainValue)

	// Get the first EVM configuration from the list.
	evmConfig := config.Evms[0]

	// Step 2: Read onchain data using the binding
	// Convert the human-readable chain name to a numeric chain selector
	chainSelector, err := evm.ChainSelectorFromName(evmConfig.ChainName)
	if err != nil {
		return nil, fmt.Errorf("invalid chain name: %w", err)
	}

	evmClient := &evm.Client{
		ChainSelector: chainSelector,
	}

	storageAddress := common.HexToAddress(evmConfig.StorageAddress)
	storageContract, err := storage.NewStorage(evmClient, storageAddress, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to create contract instance: %w", err)
	}

	onchainValue, err := storageContract.Get(runtime, big.NewInt(-3)).Await() // -3 means finalized block
	if err != nil {
		return nil, fmt.Errorf("failed to read onchain value: %w", err)
	}
	logger.Info("Successfully read onchain value", "result", onchainValue)

	// Step 3: Combine the results
	finalResult := new(big.Int).Add(onchainValue, offchainValue)
	logger.Info("Final calculated result", "result", finalResult)

	return &MyResult{
		FinalResult: finalResult,
	}, nil
}

func main() {
	wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```
## Step 5: Sync your dependencies
Now that your `main.go` file has been updated to import the new `storage` binding package, run `go mod tidy` to automatically update your project's `go.mod` and `go.sum` files.
```bash
go mod tidy
```
## Step 6: Run the simulation and review the output
Run the simulation from your project root directory (the `onchain-calculator/` folder). Because there is only one trigger defined, the simulator runs it automatically.
```bash
cre workflow simulate my-calculator-workflow --target staging-settings
```
The simulation logs will show the end-to-end execution of your workflow.
```bash
Workflow compiled
2025-11-03T22:37:05Z [SIMULATION] Simulator Initialized
2025-11-03T22:37:05Z [SIMULATION] Running trigger trigger=cron-trigger@1.0.0
2025-11-03T22:37:05Z [USER LOG] msg="Successfully fetched offchain value" result=53
2025-11-03T22:37:05Z [USER LOG] msg="Successfully read onchain value" result=22
2025-11-03T22:37:05Z [USER LOG] msg="Final calculated result" result=75
Workflow Simulation Result:
{
"FinalResult": 75
}
2025-11-03T22:37:05Z [SIMULATION] Execution finished signal received
2025-11-03T22:37:05Z [SIMULATION] Skipping WorkflowEngineV2
```
- **`[USER LOG]`**: You can now see all three of your `logger.Info()` calls, showing the offchain value (`result=53`), the onchain value (`result=22`), and the final combined result (`result=75`).
- **`[SIMULATION]`**: These are system-level messages from the simulator showing its internal state.
- **`Workflow Simulation Result`**: This is the final, JSON-formatted return value of your workflow. The `FinalResult` field contains the sum of the offchain and onchain values (53 + 22 = 75).
You have successfully built a complete CRE workflow that combines offchain and onchain data.
## Next Steps
You have successfully read a value from a smart contract and combined it with offchain data. The final step is to write this new result back to the blockchain.
- **[Part 4: Writing Onchain](/cre/getting-started/part-4-writing-onchain)**: Learn how to execute an onchain write transaction from your workflow to complete the project.
---
# Part 4: Writing Onchain
Source: https://docs.chain.link/cre/getting-started/part-4-writing-onchain-go
Last Updated: 2025-11-04
In the previous parts, you successfully fetched offchain data and read from a smart contract. Now, you'll complete the "Onchain Calculator" by writing your computed result back to the blockchain.
## What you'll do
- Generate bindings for a pre-deployed `CalculatorConsumer` contract
- Modify your workflow to write data to the blockchain using the EVM capability
- Execute your first onchain write transaction through CRE
- Verify your result on the blockchain
## Step 1: The consumer contract
To write data onchain, your workflow needs a target smart contract (a "consumer contract"). For this guide, we have pre-deployed a simple `CalculatorConsumer` contract on the Sepolia testnet. This contract is designed to receive and store the calculation results from your workflow.
Here is the source code for the contract so you can see how it works:
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import { IReceiverTemplate } from "./keystone/IReceiverTemplate.sol";

/**
 * @title CalculatorConsumer (Testing Version)
 * @notice This contract receives reports from a CRE workflow and stores the results of a calculation onchain.
 * @dev This version uses IReceiverTemplate without configuring any security checks, making it compatible
 *      with the mock Forwarder used during simulation. All permission fields remain at their default zero
 *      values (disabled).
 */
contract CalculatorConsumer is IReceiverTemplate {
    // Struct to hold the data sent in a report from the workflow
    struct CalculatorResult {
        uint256 offchainValue;
        int256 onchainValue;
        uint256 finalResult;
    }

    // --- State Variables ---
    CalculatorResult public latestResult;
    uint256 public resultCount;
    mapping(uint256 => CalculatorResult) public results;

    // --- Events ---
    event ResultUpdated(uint256 indexed resultId, uint256 finalResult);

    /**
     * @dev The constructor doesn't set any security checks.
     *      The IReceiverTemplate parent constructor will initialize all permission fields to zero (disabled).
     */
    constructor() {}

    /**
     * @notice Implements the core business logic for processing reports.
     * @dev This is called automatically by IReceiverTemplate's onReport function after security checks.
     */
    function _processReport(bytes calldata report) internal override {
        // Decode the report bytes into our CalculatorResult struct
        CalculatorResult memory calculatorResult = abi.decode(report, (CalculatorResult));

        // --- Core Logic ---
        // Update contract state with the new result
        resultCount++;
        results[resultCount] = calculatorResult;
        latestResult = calculatorResult;

        emit ResultUpdated(resultCount, calculatorResult.finalResult);
    }

    // This function is a "dry-run" utility. It allows an offchain system to check
    // if a prospective result is an outlier before submitting it for a real onchain update.
    // It is also used to guide the binding generator to create a method that accepts the CalculatorResult struct.
    function isResultAnomalous(CalculatorResult memory _prospectiveResult) public view returns (bool) {
        // A result is not considered anomalous if it's the first one.
        if (resultCount == 0) {
            return false;
        }

        // Business logic: Define an anomaly as a new result that is more than double the previous result.
        // This is just one example of a validation rule you could implement.
        return _prospectiveResult.finalResult > (latestResult.finalResult * 2);
    }
}
```
The contract is already deployed for you on Sepolia at the following address: `0xF3abEAa889e46c6C5b9A0bD818cE54Cc4eAF8A54`. You will use this address in your configuration file.
## Step 2: Generate the consumer contract binding
You need to create a binding for the consumer contract so your workflow can interact with it.
1. **Add the consumer contract ABI**: Create a new file for the consumer contract ABI. From your project root (`onchain-calculator/`), run:
```bash
touch contracts/evm/src/abi/CalculatorConsumer.abi
```
2. **Add the ABI content**: Open `contracts/evm/src/abi/CalculatorConsumer.abi` and paste the contract's ABI:
```json
[{"inputs":[],"stateMutability":"nonpayable","type":"constructor"},{"inputs":[{"internalType":"address","name":"received","type":"address"},{"internalType":"address","name":"expected","type":"address"}],"name":"InvalidAuthor","type":"error"},{"inputs":[{"internalType":"address","name":"sender","type":"address"},{"internalType":"address","name":"expected","type":"address"}],"name":"InvalidSender","type":"error"},{"inputs":[{"internalType":"bytes32","name":"received","type":"bytes32"},{"internalType":"bytes32","name":"expected","type":"bytes32"}],"name":"InvalidWorkflowId","type":"error"},{"inputs":[{"internalType":"bytes10","name":"received","type":"bytes10"},{"internalType":"bytes10","name":"expected","type":"bytes10"}],"name":"InvalidWorkflowName","type":"error"},{"inputs":[{"internalType":"address","name":"owner","type":"address"}],"name":"OwnableInvalidOwner","type":"error"},{"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"OwnableUnauthorizedAccount","type":"error"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"previousOwner","type":"address"},{"indexed":true,"internalType":"address","name":"newOwner","type":"address"}],"name":"OwnershipTransferred","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"uint256","name":"resultId","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"finalResult","type":"uint256"}],"name":"ResultUpdated","type":"event"},{"inputs":[],"name":"expectedAuthor","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"expectedWorkflowId","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"expectedWorkflowName","outputs":[{"internalType":"bytes10","name":"","type":"bytes10"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"forwarderAddress","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[{"components":[{"internalType":"uint256","name":"offchainValue","type":"uint256"},{"internalType":"int256","name":"onchainValue","type":"int256"},{"internalType":"uint256","name":"finalResult","type":"uint256"}],"internalType":"struct CalculatorConsumer.CalculatorResult","name":"_prospectiveResult","type":"tuple"}],"name":"isResultAnomalous","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"latestResult","outputs":[{"internalType":"uint256","name":"offchainValue","type":"uint256"},{"internalType":"int256","name":"onchainValue","type":"int256"},{"internalType":"uint256","name":"finalResult","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"bytes","name":"metadata","type":"bytes"},{"internalType":"bytes","name":"report","type":"bytes"}],"name":"onReport","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"owner","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"renounceOwnership","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"resultCount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"","type":"uint256"}],"name":"results","outputs":[{"internalType":"uint256","name":"offchainValue","type":"uint256"},{"internalType":"int256","name":"onchainValue","type":"int256"},{"internalType":"uint256","name":"finalResult","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"_author","type":"address"}],"name":"setExpectedAuthor","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"bytes32","name":"_id","type":"bytes32"}],"name":"setExpectedWorkflowId","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"bytes10","name":"_name","type":"bytes10"}],"name":"setExpectedWorkflowName","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_forwarder","type":"address"}],"name":"setForwarderAddress","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"bytes4","name":"interfaceId","type":"bytes4"}],"name":"supportsInterface","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"pure","type":"function"},{"inputs":[{"internalType":"address","name":"newOwner","type":"address"}],"name":"transferOwnership","outputs":[],"stateMutability":"nonpayable","type":"function"}]
```
3. **Generate the bindings**: Run the binding generator to create Go bindings for all ABI files in your project. From your project root (`onchain-calculator/`), run:
```bash
cre generate-bindings evm
```
This will generate bindings for both the `Storage` contract (from Part 3) and the new `CalculatorConsumer` contract. For each contract, you'll see two files: `.go` (main binding) and `_mock.go` (for testing).
**Generated method**: The binding generator sees the `CalculatorResult` struct and creates a `WriteReportFromCalculatorResult` method in the `CalculatorConsumer` binding that automatically handles encoding the struct and submitting it onchain.
## Step 3: Update your workflow configuration
Add the `CalculatorConsumer` contract address to your `config.staging.json`:
```json
{
  "schedule": "0 */1 * * * *",
  "apiUrl": "https://api.mathjs.org/v4/?expr=randomInt(1,101)",
  "evms": [
    {
      "storageAddress": "0xa17CF997C28FF154eDBae1422e6a50BeF23927F4",
      "calculatorConsumerAddress": "0xF3abEAa889e46c6C5b9A0bD818cE54Cc4eAF8A54",
      "chainName": "ethereum-testnet-sepolia",
      "gasLimit": 500000
    }
  ]
}
```
## Step 4: Update your workflow logic
Now modify your workflow to write the final result to the contract. The binding generator creates a `WriteReportFromCalculatorResult` method that automatically handles encoding the `CalculatorResult` struct.
Replace the entire content of `onchain-calculator/my-calculator-workflow/main.go` with this final version.
**Note:** Lines highlighted in green indicate new or modified code compared to Part 3.
Code snippet for onchain-calculator/my-calculator-workflow/main.go:
```go
//go:build wasip1

package main

import (
	"fmt"
	"log/slog"
	"math/big"

	"onchain-calculator/contracts/evm/src/generated/calculator_consumer"
	"onchain-calculator/contracts/evm/src/generated/storage"

	"github.com/ethereum/go-ethereum/common"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron"
	"github.com/smartcontractkit/cre-sdk-go/cre"
	"github.com/smartcontractkit/cre-sdk-go/cre/wasm"
)

// The EvmConfig is updated from Part 3 with new fields for the write operation.
type EvmConfig struct {
	ChainName                 string `json:"chainName"`
	StorageAddress            string `json:"storageAddress"`
	CalculatorConsumerAddress string `json:"calculatorConsumerAddress"`
	GasLimit                  uint64 `json:"gasLimit"`
}

type Config struct {
	Schedule string      `json:"schedule"`
	ApiUrl   string      `json:"apiUrl"`
	Evms     []EvmConfig `json:"evms"`
}

// MyResult struct now holds all the outputs of our workflow.
type MyResult struct {
	OffchainValue *big.Int
	OnchainValue  *big.Int
	FinalResult   *big.Int
	TxHash        string
}

func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
	return cre.Workflow[*Config]{
		cre.Handler(cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger),
	}, nil
}

func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()
	evmConfig := config.Evms[0]

	// Convert the human-readable chain name to a numeric chain selector
	chainSelector, err := evm.ChainSelectorFromName(evmConfig.ChainName)
	if err != nil {
		return nil, fmt.Errorf("invalid chain name: %w", err)
	}

	// Step 1: Fetch offchain data
	client := &http.Client{}
	mathPromise := http.SendRequest(config, runtime, client, fetchMathResult, cre.ConsensusMedianAggregation[*big.Int]())
	offchainValue, err := mathPromise.Await()
	if err != nil {
		return nil, err
	}
	logger.Info("Successfully fetched offchain value", "result", offchainValue)

	// Step 2: Read onchain data using the binding for the Storage contract
	evmClient := &evm.Client{
		ChainSelector: chainSelector,
	}
	storageAddress := common.HexToAddress(evmConfig.StorageAddress)
	storageContract, err := storage.NewStorage(evmClient, storageAddress, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to create contract instance: %w", err)
	}
	onchainValue, err := storageContract.Get(runtime, big.NewInt(-3)).Await()
	if err != nil {
		return nil, fmt.Errorf("failed to read onchain value: %w", err)
	}
	logger.Info("Successfully read onchain value", "result", onchainValue)

	// Step 3: Calculate the final result
	finalResultInt := new(big.Int).Add(onchainValue, offchainValue)
	logger.Info("Final calculated result", "result", finalResultInt)

	// Step 4: Write the result to the consumer contract
	txHash, err := updateCalculatorResult(config, runtime, chainSelector, evmConfig, offchainValue, onchainValue, finalResultInt)
	if err != nil {
		return nil, fmt.Errorf("failed to update calculator result: %w", err)
	}

	// Step 5: Log and return the final, consolidated result.
	finalWorkflowResult := &MyResult{
		OffchainValue: offchainValue,
		OnchainValue:  onchainValue,
		FinalResult:   finalResultInt,
		TxHash:        txHash,
	}
	logger.Info("Workflow finished successfully!", "result", finalWorkflowResult)
	return finalWorkflowResult, nil
}

func fetchMathResult(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*big.Int, error) {
	req := &http.Request{Url: config.ApiUrl, Method: "GET"}
	resp, err := sendRequester.SendRequest(req).Await()
	if err != nil {
		return nil, fmt.Errorf("failed to get API response: %w", err)
	}
	// The mathjs.org API returns the result as a raw string in the body.
	// We need to parse it into a number.
	val, ok := new(big.Int).SetString(string(resp.Body), 10)
	if !ok {
		return nil, fmt.Errorf("failed to parse API response into big.Int")
	}
	return val, nil
}

// updateCalculatorResult handles the logic for writing data to the CalculatorConsumer contract.
func updateCalculatorResult(config *Config, runtime cre.Runtime, chainSelector uint64, evmConfig EvmConfig, offchainValue *big.Int, onchainValue *big.Int, finalResult *big.Int) (string, error) {
	logger := runtime.Logger()
	logger.Info("Updating calculator result", "consumerAddress", evmConfig.CalculatorConsumerAddress)
	evmClient := &evm.Client{
		ChainSelector: chainSelector,
	}

	// Create a contract binding instance pointed at the CalculatorConsumer address.
	consumerAddress := common.HexToAddress(evmConfig.CalculatorConsumerAddress)
	consumerContract, err := calculator_consumer.NewCalculatorConsumer(evmClient, consumerAddress, nil)
	if err != nil {
		return "", fmt.Errorf("failed to create consumer contract instance: %w", err)
	}
	gasConfig := &evm.GasConfig{
		GasLimit: evmConfig.GasLimit,
	}
	logger.Info("Writing report to consumer contract", "offchainValue", offchainValue, "onchainValue", onchainValue, "finalResult", finalResult)

	// Call the `WriteReportFromCalculatorResult` method on the binding. This sends a secure report to the consumer.
	writeReportPromise := consumerContract.WriteReportFromCalculatorResult(runtime, calculator_consumer.CalculatorResult{
		OffchainValue: offchainValue,
		OnchainValue:  onchainValue,
		FinalResult:   finalResult,
	}, gasConfig)

	logger.Info("Waiting for write report response")
	resp, err := writeReportPromise.Await()
	if err != nil {
		logger.Error("WriteReport await failed", "error", err)
		return "", fmt.Errorf("failed to await write report: %w", err)
	}
	txHash := fmt.Sprintf("0x%x", resp.TxHash)
	logger.Info("Write report transaction succeeded", "txHash", txHash)
	logger.Info("View transaction at", "url", fmt.Sprintf("https://sepolia.etherscan.io/tx/%s", txHash))
	return txHash, nil
}

func main() {
	wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```
## Step 5: Sync your dependencies
Because the `main.go` file has been updated to import new packages for the `CalculatorConsumer` binding, you must sync your dependencies.
Run `go mod tidy` to automatically download the new dependencies and update your `go.mod` and `go.sum` files.
```bash
go mod tidy
```
## Step 6: Run the simulation and review the output
Run the simulation from your project root directory (the `onchain-calculator/` folder). Because there is only one trigger, the simulator runs it automatically.
```bash
cre workflow simulate my-calculator-workflow --target staging-settings --broadcast
```
Your workflow will now show the complete end-to-end execution, including the final log of the `MyResult` struct containing the transaction hash.
```bash
Workflow compiled
2025-11-03T22:48:41Z [SIMULATION] Simulator Initialized
2025-11-03T22:48:41Z [SIMULATION] Running trigger trigger=cron-trigger@1.0.0
2025-11-03T22:48:41Z [USER LOG] msg="Successfully fetched offchain value" result=56
2025-11-03T22:48:41Z [USER LOG] msg="Successfully read onchain value" result=22
2025-11-03T22:48:41Z [USER LOG] msg="Final calculated result" result=78
2025-11-03T22:48:41Z [USER LOG] msg="Updating calculator result" consumerAddress=0xF3abEAa889e46c6C5b9A0bD818cE54Cc4eAF8A54
2025-11-03T22:48:41Z [USER LOG] msg="Writing report to consumer contract" offchainValue=56 onchainValue=22 finalResult=78
2025-11-03T22:48:41Z [USER LOG] msg="Waiting for write report response"
2025-11-03T22:48:48Z [USER LOG] msg="Write report transaction succeeded" txHash=0x86a26f848c83f37b8eace8123ec275a0af9d21b23b1fbba9cc7664b7e474314f
2025-11-03T22:48:48Z [USER LOG] msg="View transaction at" url=https://sepolia.etherscan.io/tx/0x86a26f848c83f37b8eace8123ec275a0af9d21b23b1fbba9cc7664b7e474314f
2025-11-03T22:48:48Z [USER LOG] msg="Workflow finished successfully!" result="&{OffchainValue:+56 OnchainValue:+22 FinalResult:+78 TxHash:0x86a26f848c83f37b8eace8123ec275a0af9d21b23b1fbba9cc7664b7e474314f}"
Workflow Simulation Result:
{
  "FinalResult": 78,
  "OffchainValue": 56,
  "OnchainValue": 22,
  "TxHash": "0x86a26f848c83f37b8eace8123ec275a0af9d21b23b1fbba9cc7664b7e474314f"
}
2025-11-03T22:48:48Z [SIMULATION] Execution finished signal received
2025-11-03T22:48:48Z [SIMULATION] Skipping WorkflowEngineV2
```
- **`[USER LOG]`**: You can see all of your `logger.Info()` calls showing the complete workflow execution, including the offchain value (`result=56`), onchain value (`result=22`), final calculation (`result=78`), and the transaction hash.
- **`[SIMULATION]`**: These are system-level messages from the simulator showing its internal state.
- **`Workflow Simulation Result`**: This is the final return value of your workflow. The `MyResult` struct contains all the values (56 + 22 = 78) and the transaction hash confirming the write operation succeeded.
## Step 7: Verify the result onchain
### **1. Check the transaction**
In your terminal output, you'll see a clickable URL to view the transaction on Sepolia Etherscan:
```
[USER LOG] msg="View transaction at" url=https://sepolia.etherscan.io/tx/0x...
```
Click the URL (or copy and paste it into your browser) to see the full details of the transaction your workflow submitted.
**What are you seeing on a blockchain explorer?**
You'll notice the transaction's `to` address is not the `CalculatorConsumer` contract you intended to call. Instead, it's to a **Forwarder** contract. Your workflow sends a secure report to the Forwarder, which then verifies the request and makes the final call to the `CalculatorConsumer` on your workflow's behalf. To learn more, see the [Onchain Write guide](/cre/guides/workflow/using-evm-client/onchain-write).
### **2. Check the contract state**
Although the transaction was sent to the Forwarder, the `CalculatorConsumer` contract's state was still updated. You can verify this change directly on Etherscan:
- Navigate to the `CalculatorConsumer` contract address: `0xF3abEAa889e46c6C5b9A0bD818cE54Cc4eAF8A54`.
- Expand the `latestResult` function and click **Query**. The values should match the `finalResult`, `offchainValue`, and `onchainValue` from your workflow logs.
This completes the end-to-end loop: triggering a workflow, fetching data, reading onchain state, and verifiably writing the result back to a public blockchain.
To learn more about implementing consumer contracts and the secure write process, see these guides:
- **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)**: Learn how to create your own secure consumer contracts with proper validation.
- **[Onchain Write Guide](/cre/guides/workflow/using-evm-client/onchain-write)**: Dive deeper into the write patterns.
## Next steps
You've now mastered the complete CRE development workflow!
- **[Conclusion & Next Steps](/cre/getting-started/conclusion)**: Review what you've learned and find resources for advanced topics.
---
# Using Secrets in Simulation
Source: https://docs.chain.link/cre/guides/workflow/secrets/using-secrets-simulation-go
Last Updated: 2025-11-04
This guide explains how to use secrets during **local development and simulation**. When you're simulating a workflow on your local machine with `cre workflow simulate`, secrets are provided via environment variables or a `.env` file.
At a high level, the process follows a simple, three-step pattern:
1. **Declare**: You declare the logical names of your secrets in a `secrets.yaml` file.
2. **Provide**: You provide the actual secret values in a `.env` file or as environment variables.
3. **Use**: You access the secrets in your workflow code using the SDK's secret management API.
This separation of concerns ensures that your workflow code is portable and your secrets are never hard-coded.
## Step-by-step guide
### Step 1: Declare your secrets (`secrets.yaml`)
The first step is to create a `secrets.yaml` file in the root of your project. This file acts as a manifest, defining the "logical names" or "IDs" for the secrets your workflow will use.
In this file, you map a logical name (which you'll use in your workflow code) to the environment variable that will hold the actual secret value.
**Example `secrets.yaml`:**
```yaml
# in project-root/secrets.yaml
secretsNames:
  # This is the logical ID you will use in your workflow code
  SECRET_ADDRESS:
    # This is the environment variable the CLI will look for
    - SECRET_ADDRESS_ALL
```
### Step 2: Provide the secret values
Next, you need to provide the actual values for the secrets. The `cre` CLI can read these values in two primary ways.
#### Method 1: Using shell environment variables (Recommended)
You can provide secrets as standard environment variables directly in your shell.
For example, in your terminal:
```bash
export SECRET_ADDRESS_ALL="0x1234567890abcdef1234567890abcdef12345678"
```
When you run the `cre workflow simulate` command in the same terminal session, the CLI will have access to this variable.
#### Method 2: Using a `.env` file
Create a `.env` file in your project's root directory. The `cre` CLI automatically finds this file and loads the variables defined within it into the environment for your simulation. The variable names here must match those you declared in `secrets.yaml`.
**Example `.env` file:**
```bash
# in project-root/.env
# The variable name matches the one in secrets.yaml
SECRET_ADDRESS_ALL="0x1234567890abcdef1234567890abcdef12345678"
```
### Step 3: Use the secret in your workflow
Now you can access the secret in your workflow code. The SDK provides a method to retrieve secrets using the logical ID you defined in `secrets.yaml`.
The following code shows a complete, runnable workflow that triggers on a schedule, fetches a secret, and logs its value.
**Example workflow:**
Code snippet for Fetching Single Secret (Go):
```go
//go:build wasip1

package main

import (
	"log/slog"

	protos "github.com/smartcontractkit/chainlink-protos/cre/go/sdk"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron"
	"github.com/smartcontractkit/cre-sdk-go/cre"
	"github.com/smartcontractkit/cre-sdk-go/cre/wasm"
)

// Config can be an empty struct if you don't need any parameters from config.json.
type Config struct{}

// MyResult can be an empty struct if your workflow doesn't need to return a result.
type MyResult struct{}

// Define the logical name of the secret as a constant for clarity.
const SecretName = "SECRET_ADDRESS"

// onCronTrigger is the callback function that gets executed when the cron trigger fires.
// This is where you use the secret.
func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()

	// Build the request with the secret's logical ID.
	secretReq := &protos.SecretRequest{
		Id: SecretName,
	}

	// Call runtime.GetSecret and await the promise.
	secret, err := runtime.GetSecret(secretReq).Await()
	if err != nil {
		logger.Error("Failed to get secret", "name", SecretName, "err", err)
		return nil, err
	}

	// Use the secret's value.
	secretAddress := secret.Value
	logger.Info("Successfully fetched a secret!", "address", secretAddress)

	// ... now you can use the secretAddress in your logic ...
	return &MyResult{}, nil
}

// InitWorkflow is the required entry point for a CRE workflow.
func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
	return cre.Workflow[*Config]{
		cre.Handler(
			cron.Trigger(&cron.Config{Schedule: "0 */10 * * * *"}),
			onCronTrigger,
		),
	}, nil
}

// main is the entry point for the WASM binary.
func main() {
	wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```
### Step 4: Configure secrets path in `workflow.yaml`
Before simulating, you need to tell the CLI where to find your secrets file. This is configured in your `workflow.yaml` file under `workflow-artifacts.secrets-path`.
Open your `workflow.yaml` file and set the `secrets-path`:
```yaml
local-simulation:
  user-workflow:
    workflow-name: "my-workflow"
    workflow-artifacts:
      workflow-path: "./main.go"
      config-path: "./config.json"
      secrets-path: "../secrets.yaml" # Path to your secrets file
```
Notice the path `../secrets.yaml`. Because workflow artifact paths are resolved relative to the workflow directory, you need to point to the `secrets.yaml` file located one level up, in the project root.
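For reference, an illustrative project layout matching these paths (directory names assumed from the earlier examples):

```
project-root/
├── secrets.yaml     # logical secret names (Step 1)
├── .env             # actual secret values (Step 2)
└── my-workflow/
    ├── main.go
    └── config.json
```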
### Step 5: Run the simulation
Now you can simulate your workflow:
```bash
cre workflow simulate my-workflow --target staging-settings
```
The CLI will automatically read the `secrets-path` from your `workflow.yaml` and load the secrets from your `.env` file or environment variables you provided in your terminal session.
## Fetching multiple secrets
You can fetch multiple secrets by calling the secret retrieval method multiple times within your workflow.
The following example builds on the previous one. First, update your `secrets.yaml` to declare two secrets:
```yaml
secretsNames:
  SECRET_ADDRESS:
    - SECRET_ADDRESS_ALL
  API_KEY:
    - API_KEY_ALL
```
Then provide the values in your `.env` file or export them as environment variables in your terminal session:
```bash
export SECRET_ADDRESS_ALL="0x1234567890abcdef1234567890abcdef12345678"
export API_KEY_ALL="your-api-key-here"
```
Now you can fetch both secrets in your workflow code:
Code snippet for Fetching Multiple Secrets (Go):
```go
//go:build wasip1

package main

import (
	"log/slog"

	protos "github.com/smartcontractkit/chainlink-protos/cre/go/sdk"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron"
	"github.com/smartcontractkit/cre-sdk-go/cre"
	"github.com/smartcontractkit/cre-sdk-go/cre/wasm"
)

// Config can be an empty struct if you don't need any parameters from config.json.
type Config struct{}

// MyResult can be an empty struct if your workflow doesn't need to return a result.
type MyResult struct{}

const (
	SecretAddressName = "SECRET_ADDRESS"
	ApiKeyName        = "API_KEY"
)

func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()

	// Important: Fetch secrets sequentially, not in parallel.
	// The WASM host for CRE runtime does not support parallel runtime.GetSecret() requests.
	// Always call GetSecret(), then Await() before making the next GetSecret() call.

	// 1. Fetch the first secret
	addressPromise := runtime.GetSecret(&protos.SecretRequest{Id: SecretAddressName})
	secretAddress, err := addressPromise.Await()
	if err != nil {
		logger.Error("Failed to get SECRET_ADDRESS", "err", err)
		return nil, err
	}

	// 2. Fetch the second secret (only after the first is complete)
	apiKeyPromise := runtime.GetSecret(&protos.SecretRequest{Id: ApiKeyName})
	apiKey, err := apiKeyPromise.Await()
	if err != nil {
		logger.Error("Failed to get API_KEY", "err", err)
		return nil, err
	}

	// 3. Use your secrets
	logger.Info("Successfully fetched secrets!",
		"address", secretAddress.Value,
		"apiKey", apiKey.Value,
	)
	return &MyResult{}, nil
}

// InitWorkflow is the required entry point for a CRE workflow.
func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
	return cre.Workflow[*Config]{
		cre.Handler(
			cron.Trigger(&cron.Config{Schedule: "0 */10 * * * *"}),
			onCronTrigger,
		),
	}, nil
}

// main is the entry point for the WASM binary.
func main() {
	wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```
---
# Onchain Read
Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-read-go
Last Updated: 2025-11-04
This guide explains how to read data from a smart contract from within your CRE workflow. The process uses [generated bindings](/cre/guides/workflow/using-evm-client/generating-bindings) and the SDK's [`evm.Client`](/cre/reference/sdk/evm-client) to create a simple, type-safe developer experience.
## The read pattern
Reading from a contract follows a simple pattern:
1. **Prerequisite - Generate bindings**: You must first [generate Go bindings](/cre/guides/workflow/using-evm-client/generating-bindings) for your smart contracts using the CRE CLI. This creates type-safe Go methods that correspond to your contract's `view` and `pure` functions.
2. **Instantiate the binding**: In your workflow logic, create an instance of your generated binding.
3. **Call a read method**: Call the desired function on the binding instance, specifying a block number. This is an asynchronous call that immediately returns a [`Promise`](/cre/reference/sdk/core/#promise).
4. **Await the result**: Call `.Await()` on the returned promise to pause execution and wait for the consensus-verified result from the DON.
## Step-by-step example
Let's assume you have followed the [generating bindings guide](/cre/guides/workflow/using-evm-client/generating-bindings) and have created a binding for the Storage contract with a `get() view returns (uint256)` function.
### 1. The generated binding
After running `cre generate-bindings evm`, your binding will contain methods that wrap the onchain functions. For the `Storage` contract's `get` function, the generated method takes the `cre.Runtime` and a block number as arguments:
```go
// In contracts/evm/src/generated/storage/storage.go
func (c Storage) Get(runtime cre.Runtime, blockNumber *big.Int) cre.Promise[*big.Int] {
	// This method handles ABI encoding, calling the evm.Client,
	// and returns a promise for the decoded return value.
}
```
### 2. The workflow logic
In your main workflow file (`main.go`), you can now use this binding to read from your contract.
```go
// In your workflow's main.go
import (
	"contracts/evm/src/generated/storage" // Import your generated binding
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
	"github.com/smartcontractkit/cre-sdk-go/cre"
)

func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()

	// 1. Create the EVM client with chain selector
	evmClient := &evm.Client{
		ChainSelector: config.ChainSelector, // e.g., 16015286601757825753 for Sepolia
	}

	// 2. Instantiate the contract binding
	contractAddress := common.HexToAddress(config.ContractAddress)
	storageContract, err := storage.NewStorage(evmClient, contractAddress, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to create contract instance: %w", err)
	}

	// 3. Call the read method - it returns the decoded value directly
	// See the "Block number options" section below for details on block number parameters
	storedValue, err := storageContract.Get(runtime, big.NewInt(-3)).Await() // -3 = finalized block
	if err != nil {
		logger.Error("Failed to read storage value", "err", err)
		return nil, err
	}
	logger.Info("Successfully read storage value", "value", storedValue.String())
	return &MyResult{StoredValue: storedValue}, nil
}
```
## Understanding return types
Generated bindings are designed to be **self-documenting**. The method signature tells you exactly what type you'll receive, so you don't need to guess or look up the ABI—the Go type system provides this information directly.
### Reading method signatures
When you call a read method on a generated binding, its signature shows you the return type. For example, from the `IERC20` binding:
```go
// This method returns a *big.Int
func (c IERC20) TotalSupply(
	runtime cre.Runtime,
	blockNumber *big.Int,
) cre.Promise[*big.Int] // ← The return type is right here

// This method returns a bool
func (c IERC20) Approve(
	runtime cre.Runtime,
	args ApproveInput,
	blockNumber *big.Int,
) cre.Promise[bool] // ← Returns bool
```
### Solidity-to-Go type mappings
The binding generator follows standard Ethereum conventions:
| Solidity Type | Go Type |
| ------------------------ | ----------------------------------------------------------------------------------------- |
| `uint8`, `uint256`, etc. | `*big.Int` |
| `int8`, `int256`, etc. | `*big.Int` |
| `address` | `common.Address` |
| `bool` | `bool` |
| `string` | `string` |
| `bytes`, `bytes32`, etc. | `[]byte` |
| `struct` | Custom Go struct ([generated](/cre/guides/workflow/using-evm-client/generating-bindings)) |
### Using your IDE
Modern IDEs will show you the method signature when you hover over a function call or use autocomplete. This makes it easy to see exactly what type you're working with:
```go
// When you type this and hover over TotalSupply, your IDE shows:
value, err := token.TotalSupply(runtime, big.NewInt(-3)).Await()
// ↑ IDE tooltip: "func TotalSupply(...) cre.Promise[*big.Int]"
// So you know `value` is a *big.Int and can use it directly
```
### Practical usage
Because the type is explicit, you can immediately use the value with confidence:
```go
totalSupply, err := token.TotalSupply(runtime, big.NewInt(-3)).Await()
if err != nil {
	return nil, err
}
// You know it's *big.Int, so you can use it in calculations:
doubled := new(big.Int).Mul(totalSupply, big.NewInt(2))
logger.Info("Supply doubled", "result", doubled.String())
```
## Block number options
When calling contract read methods, you must specify a block number. There are two ways to do this:
### Using magic numbers
- **Finalized block**: Use `big.NewInt(-3)` to read from the latest finalized block
- **Latest block**: Use `big.NewInt(-2)` to read from the latest block
- **Specific block**: Use `big.NewInt(blockNumber)` to read from a specific block
### Using RPC constants (alternative)
You can also use constants from the `go-ethereum/rpc` package for better readability:
```go
import (
	"math/big"

	"github.com/ethereum/go-ethereum/rpc"
)

// For the latest block
latestBlockNumber := big.NewInt(rpc.LatestBlockNumber.Int64())

// For the finalized block
finalizedBlockNumber := big.NewInt(rpc.FinalizedBlockNumber.Int64())
```
Both approaches are equivalent; use whichever you find more readable in your code.
## Complete example
This example shows a full, runnable workflow that triggers on a cron schedule and reads a value from the Storage contract.
```go
package main

import (
	"contracts/evm/src/generated/storage" // Generated Storage binding
	"fmt"
	"log/slog"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron"
	"github.com/smartcontractkit/cre-sdk-go/cre"
	"github.com/smartcontractkit/cre-sdk-go/cre/wasm"
)

type Config struct {
	ContractAddress string `json:"contractAddress"`
	ChainSelector   uint64 `json:"chainSelector"`
}

type MyResult struct {
	StoredValue *big.Int
}

func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()

	// Create the EVM client
	evmClient := &evm.Client{
		ChainSelector: config.ChainSelector,
	}

	// Create the contract instance
	contractAddress := common.HexToAddress(config.ContractAddress)
	storageContract, err := storage.NewStorage(evmClient, contractAddress, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to create contract instance: %w", err)
	}

	// Call the contract method - it returns the decoded type directly
	storedValue, err := storageContract.Get(runtime, big.NewInt(-3)).Await()
	if err != nil {
		return nil, fmt.Errorf("failed to read storage value: %w", err)
	}

	logger.Info("Successfully read storage value", "value", storedValue.String())
	return &MyResult{StoredValue: storedValue}, nil
}

func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
	return cre.Workflow[*Config]{
		cre.Handler(
			cron.Trigger(&cron.Config{Schedule: "*/10 * * * * *"}),
			onCronTrigger,
		),
	}, nil
}

func main() {
	wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```
## Configuration
Your workflow configuration file (`config.json`) should include both the contract address and chain selector:
```json
{
"contractAddress": "0xYourContractAddressHere",
"chainSelector": 16015286601757825753
}
```
You pass this file to the simulator using the `--config` flag: `cre workflow simulate --config config.json main.go`
---
# Onchain Write
Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-write/overview-go
Last Updated: 2025-11-04
This guide explains how to write data from your CRE workflow to a smart contract on the blockchain.
**What you'll learn:**
- How CRE's secure write mechanism works (and why it's different from traditional web3)
- What a consumer contract is and why you need one
- Which approach to use based on your specific use case
- How to construct Solidity-compatible types in Go
## Understanding how CRE writes work
Before diving into code, it's important to understand how CRE handles onchain writes differently than traditional web3 applications.
### Why CRE doesn't write directly to your contract
In a traditional web3 app, you'd create a transaction and send it directly to your smart contract. **CRE uses a different, more secure approach** for three key reasons:
1. **Decentralization**: Multiple nodes in the Decentralized Oracle Network (DON) need to agree on what data to write
2. **Verification**: The blockchain needs cryptographic proof that the data came from a trusted Chainlink network
3. **Accountability**: There must be a verifiable trail showing which workflow and owner created the data
### The secure write flow (4 steps)
Here's the journey your workflow's data takes to reach the blockchain:
1. **Report generation**: Your workflow generates a ***report***, in which your data is ABI-encoded and wrapped in a cryptographically signed "package"
2. **DON consensus**: The DON reaches consensus on the report's contents
3. **Forwarder submission**: A designated node submits the report to a Chainlink `KeystoneForwarder` contract
4. **Delivery to your contract**: The Forwarder validates the report's signatures and calls your consumer contract's `onReport()` function with the data
Your workflow code handles this process using the [`evm.Client`](/cre/reference/sdk/evm-client), which manages the interaction with the Forwarder contract. Depending on your approach (covered below), this can be fully automated via generated binding helpers or done manually with direct client calls.
## What you need: A consumer contract
Before you can write data onchain, you need a **consumer contract**. This is the smart contract that will receive your workflow's data.
**What is a consumer contract?**
A consumer contract is **your smart contract** that implements the `IReceiver` interface. This interface defines an `onReport()` function that the Chainlink Forwarder calls to deliver your workflow's data.
Think of it as a mailbox that's designed to receive packages (reports) from Chainlink's secure delivery service (the Forwarder contract).
**Key requirement:**
Your contract must implement the `IReceiver` interface. This single requirement ensures your contract has the necessary `onReport(bytes metadata, bytes report)` function that the Chainlink Forwarder calls to deliver data.
**Getting started:**
- **Don't have a consumer contract yet?** Follow the [Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts) guide to create one.
- **Already have one deployed?** Great! Make sure you have its address ready. Depending on which approach you choose (see below), you may also need the contract's ABI to generate bindings.
## Choosing your approach: Which guide should you follow?
Now that you have a consumer contract, the next step depends on **what type of data you're sending** and **what's available in your contract's ABI**. This determines whether you can use the easy automated approach or need to encode data manually.
Use this table to find the guide that matches your needs:
| Your scenario | What you have | Recommended approach | Where to go |
| ----------------------------------------------------------------- | --------------------------------------------------- | ------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Write a struct onchain** | Struct is in the ABI(\*) | Use the `WriteReportFrom` binding helper | [Using WriteReportFrom Helpers](/cre/guides/workflow/using-evm-client/onchain-write/using-write-report-helpers) |
| **Write a struct onchain** | Struct is NOT in the ABI(\*) |