# Chainlink Runtime Environment (CRE) Source: https://docs.chain.link/cre Last Updated: 2025-11-04 ## What is CRE? **Chainlink Runtime Environment (CRE)** is the all-in-one orchestration layer unlocking institutional-grade smart contracts—data-connected, compliance-ready, privacy-preserving, and interoperable across blockchains and existing systems. Using the **CRE SDK** (available in Go and TypeScript), you build **Workflows**. Using the CRE CLI, you compile them into binaries and deploy them to production, where CRE runs them across a Decentralized Oracle Network (DON). - Each workflow is orchestrated by a **Workflow DON (Decentralized Oracle Network)** that monitors for triggers and coordinates execution. - The workflow can then invoke specialized **Capability DONs**—for example, one that fetches offchain data or one that writes to a chain. - During execution, each node in a DON performs the requested task independently. - Their results are then cryptographically verified and aggregated via a Byzantine Fault Tolerant (BFT) consensus protocol. This guarantees a single, correct, and consistent outcome. ## What you can do today ### Build and simulate (available now) You can start building and [simulating](/cre/guides/operations/simulating-workflows) CRE workflows immediately, without any approval: - **Create an account** at [cre.chain.link](https://cre.chain.link) to access the platform - **Install the CRE CLI** on your machine - **Build workflows** using the Go or TypeScript SDKs - **Simulate workflows** to test and debug before deployment Simulation compiles your workflows into [WebAssembly (WASM)](https://webassembly.org/) and runs them on your machine—but makes **real calls** to live APIs and public EVM blockchains. This gives you confidence your workflow will work as expected when deployed to a DON. ### Deploy your workflows (Early Access) Early Access to workflow deployment includes: - **Deploy and run workflows** on a Chainlink DON - **Workflow lifecycle management**: Deploy, activate, pause, update, and delete workflows through the CLI - **Monitoring and debugging**: Access detailed logs, events, and performance metrics in the CRE UI To request Early Access, please share details about your project and use case—this helps us provide better support as you build with CRE. ## How CRE runs your workflows Now that you understand what CRE is, let's explore how it executes your workflows. ### The trigger-and-callback model Workflows use a **trigger-and-callback model** to provide a code-first developer experience. This model is the primary architectural pattern you will use in your workflows. It consists of three simple parts: 1. **A Trigger**: An event source that starts a workflow execution (e.g., `cron.Trigger`). This is the "when" of your workflow. 2. **A Callback**: A function that contains your business logic. It is inside this function that you will use the SDK's clients to invoke capabilities. This is the "what" of your workflow. 3. **The `cre.handler()`**: The glue that connects a single trigger to a single callback. You can define multiple trigger and callback combinations in your workflow. You can also attach the same callback to multiple triggers for reusability. 
Here's what the trigger-and-callback pattern looks like:

```go
cre.Handler(
  cron.Trigger(&cron.Config{Schedule: "0 */10 * * * *"}), // Trigger fires every 10 minutes
  onCronTrigger, // your Go callback
)

func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (struct{}, error) {
  // Create SDK clients and call capabilities
  return struct{}{}, nil
}
```

### Execution lifecycle

When a trigger fires, the Workflow DON orchestrates the execution of your callback function on every node in the network. **Each execution is independent and stateless**—your callback runs, performs its work, returns a result, and completes.

Inside your callback, you create SDK clients and invoke capabilities. Each capability call is an **asynchronous operation** that returns a `Promise`—a placeholder for a future result. This allows you to pipeline multiple capability calls and run them in parallel.

Your callback typically follows this pattern:

1. Invoke multiple capabilities in parallel (each returns a `Promise` immediately)
2. Await the consensus-verified results
3. Use the trusted results in your business logic
4. Optionally perform final actions like writing back to a blockchain

For every capability you invoke, CRE handles the underlying process of having a dedicated DON execute the task, reach consensus, and return the verified result.
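For instance, the first two steps of this pattern can look like the following Go fragment. This is a minimal, hedged sketch: `storageA` and `storageB` are hypothetical contract instances created from generated bindings (see [Generating Contract Bindings](/cre/guides/workflow/using-evm-client/generating-bindings)), and `big.NewInt(-3)` selects the finalized block, following the conventions used in that guide.

```go
// Inside a callback: kick off two onchain reads. Each call returns a
// Promise immediately, so the two capability calls proceed in parallel.
aPromise := storageA.Get(runtime, big.NewInt(-3))
bPromise := storageB.Get(runtime, big.NewInt(-3))

// Await the consensus-verified results before using them.
a, err := aPromise.Await()
if err != nil {
  return struct{}{}, err
}
b, err := bPromise.Await()
if err != nil {
  return struct{}{}, err
}

// Both values are now trusted and ready for your business logic.
sum := new(big.Int).Add(a, b)
runtime.Logger().Info("Sum of onchain values", "sum", sum.String())
```

Because both promises exist before either `.Await()` call, the two capability executions overlap instead of running back to back.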
### Built-in consensus for every operation

One of CRE's most powerful features is that **every capability execution automatically includes consensus**. When your workflow invokes a capability (like fetching data from an API or reading from a blockchain), multiple independent nodes perform the operation. Their results are then validated and aggregated through a Byzantine Fault Tolerant (BFT) consensus protocol, ensuring a single, verified outcome.

This means your entire workflow—not just the onchain parts—benefits from the same security and reliability guarantees as blockchain transactions. Unlike traditional applications that rely on a single API provider or RPC endpoint, CRE eliminates single points of failure by having multiple nodes independently verify every operation.

Learn more about [Consensus Computing in CRE](/cre/concepts/consensus-computing).

## Glossary: Building blocks

| Concept            | One-liner                                                          |
| ------------------ | ------------------------------------------------------------------ |
| **Workflow**       | Compiled WebAssembly (WASM) binary.                                |
| **Handler**        | `cre.handler(trigger, callback)` pair; the atom of execution.      |
| **Trigger**        | Event that starts an execution (cron, HTTP, EVM log, …).           |
| **Callback**       | Function that runs when its trigger fires; contains your logic.    |
| **Runtime**        | Object passed to a callback; used to invoke capabilities.          |
| **Capability**     | Decentralized microservice (chain read/write, HTTP Fetch, ...).    |
| **Workflow DON**   | Watches triggers and coordinates the workflow.                     |
| **Capability DON** | Executes a specific capability.                                    |
| **Consensus**      | BFT protocol that merges node results into one verifiable report.  |

Full definitions live on **[Key Terms and Concepts](/cre/key-terms)**.

## Why build on CRE?

- **Unified cross-domain orchestration**: Seamlessly combine onchain and offchain operations in a single workflow. Read from multiple blockchains, call authenticated APIs, perform computations, and write results back onchain or offchain—all orchestrated by CRE.
- **Institutional-grade security by default**: Every operation—API calls, blockchain reads, computations—runs across multiple independent nodes with Byzantine Fault Tolerant consensus. Your workflows inherit the same security guarantees as blockchain transactions.
- **One platform, any chain**: Build your logic once and connect to any supported blockchain. No need to deploy separate infrastructure for each chain you support.
- **Code-first developer experience**: Write workflows in Go or TypeScript using familiar patterns. The SDK abstracts away the complexity of distributed systems, letting you focus on your business logic.

## Where to go next?

### New to CRE? Start here:

1. **[Create Your Account](/cre/account/creating-account)** - Set up your CRE account (required for all CLI commands)
2. **[Install the CLI](/cre/getting-started/cli-installation)** - Download and install the `cre` command-line tool

Then choose your path:

- **Learn by building:** [Getting Started Guide](/cre/getting-started/overview) - Step-by-step guide where you build your first workflow, learning core concepts along the way
- **Quick start:** [Run the Custom Data Feed Demo](/cre/templates/running-demo-workflow) - See a production-ready workflow in action. Just follow the steps to run a complete, pre-built example

### Already familiar? Jump to what you need:

- **[Workflow Guides](/cre/guides/workflow/using-triggers/overview)** - Learn how to use triggers, make API calls, and interact with blockchains
- **[Workflow Operations](/cre/guides/operations/simulating-workflows)** - Simulate, deploy, and manage your workflows
- **[SDK Reference](/cre/reference/sdk)** - Detailed API documentation for Go and TypeScript SDKs

---

# Key Terms and Concepts

Source: https://docs.chain.link/cre/key-terms
Last Updated: 2025-11-04

This page defines the fundamental terms and concepts for the Chainlink Runtime Environment (CRE).

## High-level concepts

### Chainlink Runtime Environment (CRE)
The all-in-one orchestration layer unlocking institutional-grade smart contracts—data-connected, compliance-ready, privacy-preserving, and interoperable across blockchains and existing systems.

### Decentralized Oracle Network (DON)
A decentralized, peer-to-peer network of independent nodes that work together to execute a specific task. In CRE, there are two primary types of DONs: **Workflow DONs** that orchestrate the workflow, and specialized **Capability DONs** that execute specific tasks like blockchain interactions.

## Workflow architecture

### Workflow
A workflow uses the CRE SDK (Go or TypeScript) and comprises one or more [handlers](/cre/key-terms#handler), which define the logic that executes when events ([triggers](/cre/key-terms#trigger)) occur. CRE compiles the workflow to a WASM binary and runs it on a Workflow DON.

### Handler
The basic building block of a workflow, created using the `cre.Handler` function. It connects a single **Trigger** event to a single **Callback** function.

### Trigger
An event source that initiates the execution of a handler's callback function. Examples include Cron trigger, HTTP trigger, and EVM Log trigger. Learn more in the [Trigger capability page](/cre/capabilities/triggers).

### Callback
A function that contains your core logic. It is executed by the Workflow DON every time its corresponding trigger fires.

## The developer's toolkit: The CRE SDK

### `Runtime` & `NodeRuntime`
Short-lived objects passed to your callback function during a specific execution.
The key difference between `Runtime` and `NodeRuntime` is who is responsible for creating a single, trusted result from the work of many nodes.

- **`Runtime`**: Think of it as the "Easy Mode". It is used for operations that are guaranteed to be Byzantine Fault Tolerant (BFT). You ask the network to execute something, and CRE handles the underlying complexity to ensure you get back one final, secure, and trustworthy result.
- **`NodeRuntime`**: Think of this as the "Manual Mode". It is used when a BFT guarantee cannot be provided automatically (e.g., calling a standard API). You tell each node to perform a task on its own. Each node returns its own individual answer, and you are responsible for telling the SDK how to combine them into a single, trusted result by providing an aggregation algorithm. This is always used inside a `cre.RunInNodeMode` block.

Learn more about [Consensus and Aggregation](/cre/reference/sdk/consensus).
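To make "Manual Mode" concrete, here is a minimal, hedged Go sketch of a `cre.RunInNodeMode` block. It assumes a median aggregation helper of the kind described in the [Consensus and Aggregation](/cre/reference/sdk/consensus) reference, and `fetchLocalValue` is a hypothetical per-node helper (for example, a plain API call); check that reference for the exact function signatures.

```go
// Each node runs the inner function independently and may observe a
// slightly different value; the aggregation merges the per-node answers
// into one BFT-verified result.
median, err := cre.RunInNodeMode(
  config,
  runtime,
  func(config *Config, nodeRuntime cre.NodeRuntime) (float64, error) {
    // fetchLocalValue is hypothetical: each node's own observation.
    return fetchLocalValue(nodeRuntime)
  },
  cre.ConsensusMedianAggregation[float64](), // how node results are combined
).Await() // resumes once consensus has produced the single trusted value
if err != nil {
  return struct{}{}, err
}
runtime.Logger().Info("Median across nodes", "value", median)
```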
### SDK Clients: `EVMClient` & `HTTPClient`
The primary SDK clients you use inside a callback to interact with capabilities. For example, you use an EVM client to read from a smart contract and an HTTP client to make offchain API requests.

**Language-specific implementations:**

- **Go SDK**: `evm.Client` and `http.Client`
- **TypeScript SDK**: `EVMClient` and `HTTPClient` classes

### `Bindings` (Go SDK only)
A Go package generated from a smart contract's ABI using the `cre generate-bindings` CLI command. Bindings create a type-safe Go interface for a specific smart contract, abstracting away the low-level complexity of ABI encoding and decoding. Using generated bindings is the recommended best practice for Go workflows, as they provide helper methods for:

- Reading from `view`/`pure` functions.
- Encoding data structures for onchain writes.
- Creating triggers for and decoding event logs.

This makes your workflow code cleaner, safer, and easier to maintain. Learn more in the [Generating Contract Bindings](/cre/guides/workflow/using-evm-client/generating-bindings) guide.

**Note for TypeScript**: The TypeScript SDK uses [Viem](https://viem.sh/) for type-safe contract interactions with manual ABI definitions instead of generated bindings.

### Async Patterns
Asynchronous operations in the SDK (like contract reads or HTTP requests) return a placeholder for a future result:

- **Go SDK**: Operations return a `Promise`, and you must call `.Await()` to pause execution and wait for the result.
- **TypeScript SDK**: Operations return an object with a `.result()` method that you call to wait for the result.

### `Secrets`
Securely managed credentials (e.g., API keys) made available to your workflow at runtime. Secrets can be fetched within a callback using the runtime's secret retrieval method:

- **Go SDK**: `runtime.GetSecret()`
- **TypeScript SDK**: `runtime.getSecret()`

## Underlying architectural concepts

### Capability
A conceptual, decentralized "microservice" that is backed by its own DON. Capabilities are the fundamental building blocks of the CRE platform (e.g., HTTP Fetch, EVM Read). You do not interact with them directly; instead, you use the SDK's developer-facing clients (like `evm.Client`) to invoke them.

### Consensus
The mechanism by which a DON comes to a single, reliable, and tamper-proof result, even if individual nodes observe slightly different data. Consensus is what makes the outputs of capabilities secure and trustworthy.

## Where to go next?

- **[Getting Started](/cre/getting-started/overview)**: Start building your first workflow.
- **[About CRE](/cre)**: Learn more about the vision and high-level architecture of CRE.

---

# Service Quotas

Source: https://docs.chain.link/cre/service-quotas
Last Updated: 2025-11-04

This page documents the service quotas for Chainlink Runtime Environment (CRE) workflows.

## Per-owner quotas

These quotas apply to each workflow owner (user account) within an organization.

| Quota | Description | Value |
| --- | --- | --- |
| Workflow Deployment Rate | Maximum rate at which an organization can deploy new workflows | Rate: 1 per minute<br>Burst: 1 |
| Concurrent Workflow Executions | Maximum number of workflows that can execute simultaneously | 5 |
| Workflow Trigger Rate | Maximum rate at which triggers can fire for all workflows owned by a single owner (user account) | Rate: 5 per second<br>Burst: 5 |
| Workflow Binary Size | Maximum size of the compiled WASM binary | 100 MB |
| Workflow Compressed Binary Size | Maximum size of the compressed WASM binary | 20 MB |
| Workflow Configuration Size | Maximum size of the workflow configuration | 1 MB |
| Secrets Size | Maximum total size of secrets accessible to a workflow | 1 MB |

## Per-workflow quotas

These quotas apply to each individual workflow.

### Trigger quotas

| Quota | Description | Value |
| --- | --- | --- |
| Trigger Rate | Maximum rate at which a workflow's triggers can fire | Rate: 1 per 30 seconds<br>Burst: 3 |
| Maximum Triggers per Workflow | Maximum number of triggers that can be registered to a single workflow | 10 |

### Execution quotas

| Quota | Description | Value |
| --- | --- | --- |
| Concurrent Workflow Instances | Maximum number of concurrent executions for a specific workflow | 5<br>Burst: 5 |
| Workflow Timeout | Maximum total execution time for a single workflow run | 5 minutes |
| Workflow Memory Allocation | Maximum memory allocated to a workflow | 100 MB |
| Response Size | Maximum size of the data a workflow can return | 100 KB |

### Capability call quotas

| Quota | Description | Value |
| --- | --- | --- |
| Concurrent Capability Calls | Maximum concurrent capability calls (HTTP, EVM read/write, secrets) that can execute within a workflow | 3 |
| Capability Call Timeout | Maximum time a single capability call can take to complete | 3 minutes |

### Secrets quotas

| Quota | Description | Value |
| --- | --- | --- |
| Secrets Size | Maximum total size of secrets accessible to a workflow | 1 MB |
| Concurrent Secrets Fetches | Maximum number of secrets that can be fetched concurrently. [Learn how to fetch multiple secrets](/cre/guides/workflow/secrets). | 5 |

### Consensus quotas

| Quota | Description | Value |
| --- | --- | --- |
| Observation Size | Maximum size of data that can be passed to consensus aggregation | 100 KB |
| Total Consensus Calls | Maximum number of consensus calls per workflow execution | 2,000 |

### Logging quotas

| Quota | Description | Value |
| --- | --- | --- |
| Log Line Size | Maximum size of a single log line | 1 KB |
| Log Events | Maximum number of log events per workflow execution | 1,000 |

## Trigger-specific quotas

### Cron trigger

| Quota | Description | Value |
| --- | --- | --- |
| Trigger Rate | Maximum rate at which a cron trigger can fire | Rate: 1 per 30 seconds<br>Burst: 1 |

### HTTP trigger

| Quota | Description | Value |
| --- | --- | --- |
| Trigger Rate | Maximum rate at which an HTTP trigger can fire | Rate: 1 per 30 seconds<br>Burst: 3 |

### EVM log trigger

| Quota | Description | Value |
| --- | --- | --- |
| Maximum Log Triggers | Maximum number of EVM log triggers per workflow | 5 |
| Event Rate | Maximum rate at which log events can be processed | Rate: 10 per 6 seconds<br>Burst: 10 |
| Filter Addresses | Maximum number of contract addresses that can be monitored | 5 |
| Filter Topics per Slot | Maximum number of topic values that can be specified within a single topic position (Topics[0], Topics[1], Topics[2], or Topics[3]). [Learn about topic filtering](/cre/guides/workflow/using-triggers/evm-log-trigger). | 10 |
| Event Size | Maximum size of a single log event | 5 KB |

## Capability-specific quotas

### EVM write capability

| Quota | Description | Value |
| --- | --- | --- |
| Target Chains | Maximum number of destination chains for write operations | 10 |
| Report Size | Maximum size of a report payload | 1 KB |
| Transaction Gas Quota | Gas quota per EVM transaction | 5,000,000 |

### EVM read capability

| Quota | Description | Value |
| --- | --- | --- |
| Read Calls per Execution | Maximum number of EVM read calls per workflow execution | 10 |
| Log Query Block Quota | Maximum number of blocks that can be queried for historical logs | 100 |
| Payload Size | Maximum size of an EVM read request payload | 5 KB |

### HTTP capability

| Quota | Description | Value |
| --- | --- | --- |
| HTTP Calls per Execution | Maximum number of HTTP requests per workflow execution | 5 |
| Response Size | Maximum size of an HTTP response | 10 KB |
| Connection Timeout | Maximum time to establish an HTTP connection | 10 seconds |
| Request Size | Maximum size of an HTTP request payload | 100 KB |
| Cache Age | Maximum time HTTP responses can be cached | 10 minutes |

## Quota increases

[Contact us](/cre/support-feedback) to discuss quota increases.

---

# Support & Feedback

Source: https://docs.chain.link/cre/support-feedback
Last Updated: 2025-11-04

Need help with CRE? Have feedback, want to report a bug, or request a feature? You can submit a support request directly through the CRE UI.

## How to submit a support request

1. Go to cre.chain.link
2. Log in to your account (if you're not already logged in)
3. In the left sidebar, click **Help**
4. The support form will open as a slide-out panel
5. Select your request type from the dropdown:
   - **Support Request** - Need help with an issue
   - **Bug Report** - Found a problem
   - **Feature Request** - Suggest an improvement
   - **General Feedback** - Share your thoughts
   - **Other** - Anything else
6. Describe your issue or feedback
7. Click **Request** to submit

## What to include in your request

To help us assist you faster, please include:

**For bug reports:**

- Steps to reproduce the issue
- Expected behavior vs. actual behavior
- CRE CLI version (run `cre version` to check)
- Error messages or logs (if applicable)
- Operating system

**For support requests:**

- What you're trying to accomplish
- What's not working or unclear
- Any error messages you're seeing
- Relevant code snippets or configuration files

**For feature requests:**

- Your use case
- Why this feature would help you

---

# Getting Started: Overview

Source: https://docs.chain.link/cre/getting-started/overview
Last Updated: 2025-11-04

This multi-part tutorial guides you through building a complete workflow from a blank slate.

### What you'll build

You will build a simple but powerful **"Onchain Calculator"** workflow. By the end of this tutorial, your workflow will:
1. Run on a schedule using the **Cron Trigger**.
2. Fetch a random number from a public API using the **HTTP Capability**.
3. Read a value from a smart contract using the **EVM Read Capability**.
4. Combine the two values and write the final result back to the blockchain using the **EVM Write Capability**.

This tutorial is designed to teach you the core features of the CRE SDK in a logical progression. You can follow it in either of the supported languages, Go or TypeScript, by switching with the language selector. By the end, you'll have a solid understanding of the end-to-end development process for building and simulating workflows that interact with both offchain and onchain data sources.

### Where to go next?

- **[Installing the CLI](/cre/getting-started/cli-installation)**: Download and install the `cre` command-line tool.

#### Tutorial structure

- **[Part 1: Project Setup & Simulation](/cre/getting-started/part-1-project-setup)**: Initialize a new, blank CRE project and run your first "Hello World!" simulation.
- **[Part 2: Fetching Offchain Data](/cre/getting-started/part-2-fetching-data)**: Modify your workflow to fetch data from an external API using the `http.Client`.
- **[Part 3: Reading an Onchain Value](/cre/getting-started/part-3-reading-onchain-value)**: Generate contract bindings and use the `evm.Client` to read a value from the blockchain.
- **[Part 4: Writing Onchain](/cre/getting-started/part-4-writing-onchain)**: Complete the calculator by writing your computed result back to a smart contract on Sepolia.
- **[Conclusion](/cre/getting-started/conclusion)**: Review what you've learned and find resources to continue your journey.

---

# Installing the CRE CLI

Source: https://docs.chain.link/cre/getting-started/cli-installation
Last Updated: 2025-11-04

These guides explain how to install the Chainlink Developer Platform CLI (also referred to as the CRE CLI).

---

# Installing the CRE CLI on macOS and Linux

Source: https://docs.chain.link/cre/getting-started/cli-installation/macos-linux
Last Updated: 2025-11-04

This page explains how to install the CRE CLI on macOS or Linux. The recommended version at the time of writing is **v1.0.0**.

## Installation

Choose your installation method:

- **[Automatic installation](#automatic-installation)** - Quick setup with a single command
- **[Manual installation](#manual-installation)** - Download and install the binary yourself

### Automatic installation

The easiest way to install the CRE CLI is using the installation script:

```bash
curl -sSL https://cre.chain.link/install.sh | sh
```

This script will:

- Detect your operating system and architecture automatically
- Download the correct binary for your system
- Verify the binary's integrity
- Install it to `/usr/local/bin` (or prompt you for a custom location)
- Make the binary executable

After the script completes, verify the installation:

```bash
cre version
```

**Expected output:** `cre version v1.0.0`

### Manual installation

If you prefer to install manually or the automatic installation doesn't work for your environment, follow these steps:

The CRE CLI is publicly available on GitHub. Visit the releases page and download the appropriate binary archive for your operating system and architecture. After downloading the correct file from the releases page, move on to the next step to verify its integrity.

#### 1. Verify file integrity
Before installing, verify the file integrity using a checksum to ensure the binary hasn't been tampered with:

**Check the SHA-256 checksum**

Run the following command in the directory where you downloaded the archive (replace the filename with your specific binary):

```bash
shasum -a 256 cre_darwin_arm64.zip
```

**Verify against official checksums**

Compare the output with the official checksum below:

| File | SHA-256 Checksum |
| --- | --- |
| `cre_darwin_amd64.zip` | be7d595a87ae74ecbbde95a576d4117c88af9d6751191fa7098bd0fe1f75a226 |
| `cre_darwin_arm64.zip` | 2b1ca0992d2c7a70ece60623a1490155b74e04041722caf0bcc2f4c795686ebf |
| `cre_linux_amd64.tar.gz` | dab1e966fbbf67ec136d7f3ec1236028db93493a067cdc8a772fb105593b2773 |
| `cre_linux_arm64.tar.gz` | e1f6a51010f4b2c73825eba2f703a8164972a56643891f91bfeddbfeecc32e34 |

If the checksum doesn't match, do not proceed with installation. Contact your Chainlink point of contact for assistance.

#### 2. Extract and install

1. **Navigate** to the directory where you downloaded the archive.
2. **Extract the archive**

   For `.tar.gz` files:

   ```bash
   tar -xzf cre_linux_arm64.tar.gz
   ```

   For `.zip` files:

   ```bash
   unzip cre_darwin_arm64.zip
   ```

3. **Rename the extracted binary to `cre`**

   ```bash
   mv cre_v1.0.0_darwin_arm64 cre
   ```

4. **Make it executable**:

   ```bash
   chmod +x cre
   ```

**Note (macOS Gatekeeper)**: If you see warnings about "unrecognized developer/source", remove extended attributes:

```bash
xattr -c cre
```

#### 3. Add the CLI to your PATH

Now that you have the `cre` binary, you need to make it accessible from anywhere on your system. This means you can run `cre` commands from any directory, not just where the binary is located.

**Recommended approach: Move to a standard location**

The easiest and most reliable method is to move the `cre` binary to a directory that's already in your system's PATH. For example:

```bash
sudo mv cre /usr/local/bin/
```

This command moves the `cre` binary to `/usr/local/bin/`, which is typically included in your PATH by default.

**Alternative: Add current directory to PATH**

If you prefer to keep the binary in its current location, you can add that directory to your PATH:

1. **Find your current directory:**

   ```bash
   pwd
   ```

   Note the full path (e.g., `/Users/yourname/Downloads/cre`)

2. **Add to your shell profile** (choose based on your shell):

   For **zsh** (default on newer macOS):

   ```bash
   echo 'export PATH="/Users/yourname/Downloads/cre:$PATH"' >> ~/.zshrc
   source ~/.zshrc
   ```

   For **bash**:

   ```bash
   echo 'export PATH="/Users/yourname/Downloads/cre:$PATH"' >> ~/.bash_profile
   source ~/.bash_profile
   ```

   Replace `/Users/yourname/Downloads/cre` with your actual directory path from step 1.

3. **For temporary access** (this session only):

   ```bash
   export PATH="$(pwd):$PATH"
   ```

#### 4. Verify the installation

**Test that `cre` is accessible:**

Open a **new terminal window** and run:

```bash
cre version
```

**Expected output:** You should see version information: `cre version v1.0.0`.

**If it doesn't work:**

- Make sure you opened a **new terminal window** after making PATH changes
- Check the binary location: `which cre` should return `/usr/local/bin/cre` (or your custom path)
- Check that the binary has execute permissions: `ls -la /usr/local/bin/cre`
- Verify your PATH includes the correct directory: `echo $PATH`
#### 5. Confirm your PATH (troubleshooting)

If you're having issues, check what directories are in your PATH:

```bash
echo "$PATH" | tr ':' '\n'
```

You should see either:

- `/usr/local/bin` (if you moved the binary there)
- Your custom directory (if you added it to PATH)

## Next steps

Now that you have the `cre` CLI installed, you'll need to create a CRE account and authenticate before you can use it.

- **[Creating Your Account](/cre/account/creating-account)**: Create your CRE account and set up two-factor authentication
- **[Logging in with the CLI](/cre/account/cli-login)**: Authenticate the CLI with your account

Once you're authenticated, you're ready to build your first workflow:

- **[Getting Started — Part 1: Project Setup & Simulation](/cre/getting-started/part-1-project-setup)**: Initialize a new, blank CRE project and run your first "Hello World!" simulation.

---

# Installing the CRE CLI on Windows

Source: https://docs.chain.link/cre/getting-started/cli-installation/windows
Last Updated: 2025-11-04

This page explains how to install the Chainlink Developer Platform CLI (also referred to as the CRE CLI) on Windows. The recommended version at the time of writing is **v1.0.0**.

## Installation

Choose your installation method:

- **[Automatic installation](#automatic-installation)** - Quick setup with a PowerShell script
- **[Manual installation](#manual-installation)** - Download and install the binary yourself

### Automatic installation

The easiest way to install the CRE CLI is using the installation script. Open **PowerShell** and run:

```powershell
irm https://cre.chain.link/install.ps1 | iex
```

This script will:

- Download the correct binary for Windows
- Verify the binary's integrity
- Install it to a location in your PATH
- Make the binary executable

After the script completes, **open a new PowerShell window** and verify the installation:

```powershell
cre version
```

**Expected output:** `cre version v1.0.0`

### Manual installation

If you prefer to install manually or the automatic installation doesn't work for your environment, follow these steps:

The CRE CLI is publicly available on GitHub. Click the button below to access the releases page and download `cre_windows_amd64.zip`. After downloading the file from the releases page, move on to the next step to verify its integrity.

#### 1. Verify file integrity

Before installing, verify the file integrity using a checksum to ensure the binary hasn't been tampered with.

**Check the SHA-256 checksum**

Open a PowerShell terminal and run the following command in the directory where you downloaded the archive:

```powershell
Get-FileHash cre_windows_amd64.zip -Algorithm SHA256
```

**Verify against the official checksum**

Compare the `Hash` value in the output with the official checksum below:

| File | SHA-256 Checksum |
| --- | --- |
| `cre_windows_amd64.zip` | 72ca89ddc043837e13e7076c3ee3d177f5bcfadd4be83184405aa2cec7eec707 |

If the checksum doesn't match, do not proceed with installation. Contact your Chainlink point of contact for assistance.

#### 2. Extract and install

1. Navigate to the directory where you downloaded the archive.
2. Right-click the `.zip` file and select **Extract All...**.
3. Choose a permanent location for the extracted folder (e.g., `C:\Program Files\cre-cli`).
4. Inside the extracted folder, rename the file `cre_v1.0.0_windows_amd64.exe` to `cre.exe`.

#### 3.
Add the CLI to your PATH To run `cre` commands from any directory, you need to add the folder where you saved `cre.exe` to your system's PATH environment variable. 1. Open the **Start Menu** and search for "environment variables". 2. Select **Edit the system environment variables**. 3. In the System Properties window, click the **Environment Variables...** button. 4. In the **System variables** section, find and select the `Path` variable, then click **Edit...**. 5. Click **New** and add the full path to the folder where you saved `cre.exe` (e.g., `C:\Program Files\cre-cli`). 6. Click **OK** on all windows to save your changes. #### 4. Verify the installation Open a new **PowerShell** or **Command Prompt** window and run: ```bash cre version ``` You should see version information: `cre version v1.0.0`. ## Next steps Now that you have the `cre` CLI installed, you'll need to create a CRE account and authenticate before you can use it. - **[Creating Your Account](/cre/account/creating-account)**: Create your CRE account and set up two-factor authentication - **[Logging in with the CLI](/cre/account/cli-login)**: Authenticate the CLI with your account Once you're authenticated, you're ready to build your first workflow: - **[Getting Started — Part 1: Project Setup & Simulation](/cre/getting-started/part-1-project-setup)**: Initialize a new, blank CRE project and run your first "Hello World!" simulation. --- # Conclusion & Next Steps Source: https://docs.chain.link/cre/getting-started/conclusion Last Updated: 2025-11-04 You've built a complete, end-to-end CRE workflow from scratch. You started with an empty project and progressively built a workflow that: - Fetches data from an offchain API with consensus - Reads values from a smart contract - Performs calculations combining onchain and offchain data - Writes results back to the blockchain **This is no small achievement.** You've mastered the core pattern that powers most CRE workflows: the trigger-and-callback model with capabilities for HTTP, EVM, and consensus. ## What's next? Now that you have a working workflow, here's your natural progression from simulation to production and beyond. ### 1. See a complete example Ready to see all these concepts in a more complex, real-world scenario? - **[Run the Custom Data Feed Demo](/cre/templates/running-demo-workflow)** - Explore an advanced template that combines multiple capabilities **Why this matters:** Templates show production-ready patterns. ### 2. Deploy your Calculator workflow to Production You've simulated your workflow locally. **The logical next step is to deploy it to the CRE production environment** so it runs across a Decentralized Oracle Network (DON). **Follow this deployment sequence:** 1. **[Link a Wallet Key](/cre/organization/linking-keys)** - Connect your wallet address to your organization (required before deployment) 2. **[Deploy Your Workflow](/cre/guides/operations/deploying-workflows)** - Push your calculator workflow live 3. **[Monitor Your Workflow](/cre/guides/operations/monitoring-workflows)** - Watch it execute in production and debug any issues **Why this matters:** Deploying moves your workflow from local simulation to production execution across a DON. ### 3. Explore different triggers You used a **Cron trigger** (time-based). 
**Most production workflows react to real-world events.** **Try these next:** - **[HTTP Trigger](/cre/guides/workflow/using-triggers/http-trigger)** - Let external systems trigger your workflow via API calls - **[EVM Log Trigger](/cre/guides/workflow/using-triggers/evm-log-trigger)** - React to onchain events (e.g., token transfers, contract events) **Why this matters:** Event-driven workflows are more powerful than scheduled ones. They respond instantly to real-world changes. ### 4. Add secrets Your calculator used a public API. **Real workflows often need API keys and other sensitive data.** **Learn how to secure your secrets:** - **[Using Secrets in Simulation](/cre/guides/workflow/secrets/using-secrets-simulation)** - Store secrets in your local environment for development - **[Using Secrets with Deployed Workflows](/cre/guides/workflow/secrets/using-secrets-deployed)** - Store secrets in the Vault DON for production - **[Managing Secrets with 1Password](/cre/guides/workflow/secrets/managing-secrets-1password)** - Best practice: inject secrets at runtime **Why this matters:** Hardcoded credentials are a security risk. CRE's secrets management lets you safely use authenticated APIs and private keys. ### 5. Build your own consumer contract You used a **pre-deployed consumer contract**. **For production workflows, you'll create custom contracts tailored to your use case.** **Learn the secure pattern:** - **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)** - Create contracts that safely receive CRE data **Why this matters:** Consumer contracts enforce business logic and validation onchain, enabling trustless and verifiable execution. ## Reference: Deepen Your Understanding Want to dive deeper into specific concepts from the Getting Started guide? Use this section as a quick reference. **Workflow Structure & Triggers** - **[Core SDK Reference](/cre/reference/sdk/core/)** - Fundamental building blocks (`InitWorkflow`, `Handler`, `Runtime`) - **[Triggers Overview](/cre/guides/workflow/using-triggers/overview)** - Compare all available event sources **HTTP & Offchain Data** - **[API Interactions Guide](/cre/guides/workflow/using-http-client/)** - Complete patterns for HTTP requests - **[Consensus & Aggregation](/cre/reference/sdk/consensus)** - All aggregation methods (median, mode, custom) - **[Consensus Computing Concept](/cre/concepts/consensus-computing)** - How CRE's consensus-based execution works **EVM & Onchain Interactions** - **[EVM Client Overview](/cre/guides/workflow/using-evm-client/overview)** - Introduction to smart contract interactions - **[Onchain Read Guide](/cre/guides/workflow/using-evm-client/onchain-read)** - Reading from a smart contract - **[Onchain Write Guide](/cre/guides/workflow/using-evm-client/onchain-write)** - Complete write patterns and report generation **Configuration & Secrets** - **[Project Configuration](/cre/reference/project-configuration/)** - Complete guide to `project.yaml`, `workflow.yaml`, and targets - **[Secrets Guide](/cre/guides/workflow/secrets)** - All secrets management patterns **All Capabilities** - **[Capabilities Overview](/cre/capabilities/)** - See the full list of CRE capabilities and how they work together --- # Using Triggers Source: https://docs.chain.link/cre/guides/workflow/using-triggers/overview Last Updated: 2025-11-04 Triggers are a special type of capability that start your workflows. They are event-driven services that watch for a specific condition to be met. 
When the condition occurs, the trigger fires and instructs CRE to run the callback function you have registered for that event. A single workflow can contain multiple handlers, allowing you to react to different events with specific logic; a short sketch follows the trigger list below. This section provides detailed guides for each available trigger type:

- **[Cron Trigger](/cre/guides/workflow/using-triggers/cron-trigger)**: Run workflows on a time-based schedule.
- **[HTTP Trigger](/cre/guides/workflow/using-triggers/http-trigger)**: Start workflows in response to an HTTP request from an external system.
- **[EVM Log Trigger](/cre/guides/workflow/using-triggers/evm-log-trigger)**: Initiate workflows in response to a specific event being emitted by a smart contract.
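As an illustration of multiple handlers, here is a minimal, hedged sketch in Go: two cron triggers registered in one workflow share a single callback. `Config` and `onTick` are assumed from the surrounding project, the schedules are illustrative, and the `InitWorkflow`, `cre.Handler`, and `cron.Trigger` shapes follow the examples used elsewhere in these docs.

```go
// Assumes the cre and cron SDK packages are imported as in the other examples.
func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
  return cre.Workflow[*Config]{
    // First handler: fires every 10 minutes.
    cre.Handler(
      cron.Trigger(&cron.Config{Schedule: "0 */10 * * * *"}),
      onTick,
    ),
    // Second handler: fires at the top of every hour, reusing the same callback.
    cre.Handler(
      cron.Trigger(&cron.Config{Schedule: "0 0 * * * *"}),
      onTick,
    ),
  }, nil
}
```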
---

# Generating Contract Bindings

Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/generating-bindings
Last Updated: 2025-11-04

To interact with a smart contract from your Go workflow, you first need to create **bindings**. Bindings are type-safe Go interfaces auto-generated from your contract's ABI. They provide a bridge between your Go code and the EVM. How they work depends on whether you are reading from or writing to the chain:

- **For onchain reads**, bindings provide Go functions that directly mirror your contract's `view` and `pure` methods.
- **For onchain writes**, bindings provide powerful helper methods to ABI-encode your data structures, preparing them to be sent in a report to a [consumer contract](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts/).

This is a **one-time code generation step** performed using the CRE CLI.

## The generation process

The CRE CLI provides an automated binding generator that reads contract ABIs and creates corresponding Go packages.

### Step 1: Add your contract ABI

Place your contract's ABI JSON file into the `contracts/evm/src/abi/` directory. For example, to generate bindings for a `PriceUpdater` contract, you would create `contracts/evm/src/abi/PriceUpdater.abi` with your ABI content.

### Step 2: Generate the bindings

From your **project root**, run the binding generator:

```bash
cre generate-bindings evm
```

This command scans all `.abi` files in `contracts/evm/src/abi/` and generates corresponding Go packages in `contracts/evm/src/generated/`. For each contract, two files are generated:

- `.go` — The main binding for interacting with the contract
- `_mock.go` — A mock implementation for testing your workflows without deploying contracts

## Using generated bindings

### For onchain reads

For `view` or `pure` functions, the generator creates a client with methods that you can call directly. These methods return a `Promise`, which you must `.Await()` to get the result after consensus.

**Example: A simple `Storage` contract**

If you have a `Storage.abi` for a contract with a `get()` view function, you can use the bindings like this:

```go
// Import the generated package for your contract, replacing "your-project" with your project's module name
import "your-project/contracts/evm/src/generated/storage"
import "github.com/ethereum/go-ethereum/common"
import "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
import "math/big"

// In your workflow function...
evmClient := &evm.Client{ChainSelector: config.ChainSelector}
contractAddress := common.HexToAddress(config.ContractAddress)

// Create a new contract instance
storageContract, err := storage.NewStorage(evmClient, contractAddress, nil)
if err != nil { /* ... */ }

// Call a read-only method - note that it returns the decoded type directly
value, err := storageContract.Get(runtime, big.NewInt(-3)).Await() // -3 = finalized block
if err != nil { /* ... */ }

// value is already a *big.Int, ready to use!
```

### For onchain writes

For onchain writes, your goal is to send an ABI-encoded report to your [consumer contract](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts/). The binding generator creates helper methods that handle the entire process: creating the report, sending it for consensus, and delivering it to the chain.

#### Signaling the generator

To generate the necessary Go types and write helpers, your ABI must include at least one **`public` or `external` function that uses the data `struct` you want to send as a parameter**. The generated helper method is named after the **input struct type**. For example, a struct named `PriceData` will generate a `WriteReportFromPriceData` helper.

**Example: A `PriceUpdater` contract ABI**

This ABI contains a `PriceData` struct and a public `updatePrices` function. This is all the generator needs.

```solidity
// contracts/evm/src/PriceUpdater.sol
// This contract can be used purely to generate the bindings.
// The actual onchain logic can live elsewhere.
contract PriceUpdater {
    struct PriceData {
        uint256 ethPrice;
        uint256 btcPrice;
    }

    // The struct type (`PriceData`) determines the generated helper name.
    // The generator will create a `WriteReportFromPriceData` method.
    function updatePrices(PriceData memory) public {}
}
```

#### Using write bindings in a workflow

After running `cre generate-bindings`, you can use the generated `PriceUpdater` client to send a report. The workflow code will look like this:

```go
// Import the generated package for your contract, replacing "your-project" with your project's module name
import "your-project/contracts/evm/src/generated/price_updater"
import "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
import "github.com/ethereum/go-ethereum/common"
import "math/big"
import "fmt"

// In your workflow function...
// The address should be your PROXY contract's address.
contractAddress := common.HexToAddress(config.ProxyAddress)
evmClient := &evm.Client{ChainSelector: config.ChainSelector}

// 1. Create a new contract instance using the generated bindings.
// Even though it's called `price_updater`, it's configured with your proxy address.
priceUpdater, err := price_updater.NewPriceUpdater(evmClient, contractAddress, nil)
if err != nil { /* ... */ }

// 2. Instantiate the generated Go struct with your data.
reportData := price_updater.PriceData{
  EthPrice: big.NewInt(4000_000000),
  BtcPrice: big.NewInt(60000_000000),
}

// 3. Call the generated WriteReportFrom method on the contract instance.
// This method name is derived from the input struct of your contract's function.
writePromise := priceUpdater.WriteReportFromPriceData(runtime, reportData, nil)

// 4. Await the promise to confirm the transaction has been mined.
resp, err := writePromise.Await()
if err != nil {
  return nil, fmt.Errorf("WriteReport await failed: %w", err)
}

// 5. The response contains the transaction hash.
logger := runtime.Logger()
logger.Info("Write report transaction succeeded", "txHash", common.BytesToHash(resp.TxHash).Hex())
```

### For event logs

The binding generator also creates powerful helpers for interacting with your contract's events. You can easily trigger a workflow when an event is emitted and decode the event data into a type-safe Go struct.
**Example: A contract with a `UserAdded` event**

```solidity
contract UserDirectory {
    event UserAdded(address indexed userAddress, string userName);

    function addUser(string calldata userName) external {
        emit UserAdded(msg.sender, userName);
    }
}
```

#### Triggering and Decoding Events

After generating bindings for the `UserDirectory` ABI, you can use the helpers to create a trigger and decode the logs in your handler.

```go
import (
  "fmt"
  "log/slog"

  "your-project/contracts/evm/src/generated/user_directory" // Replace "your-project" with your project's module name
  "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
  "github.com/smartcontractkit/cre-sdk-go/cre"
)

// In InitWorkflow, create an instance of the contract binding and use it
// to generate a trigger for the "UserAdded" event.
func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
  // ...
  userDirectory, err := user_directory.NewUserDirectory(evmClient, contractAddress, nil)
  if err != nil { /* ... */ }

  // Use the generated helper to create a trigger for the UserAdded event.
  // Set confidence to evm.ConfidenceLevel_CONFIDENCE_LEVEL_FINALIZED to only trigger on finalized blocks.
  // The last argument (filters) is nil to listen for all UserAdded events.
  userAddedTrigger, err := userDirectory.LogTriggerUserAddedLog(chainSelector, evm.ConfidenceLevel_CONFIDENCE_LEVEL_FINALIZED, nil)
  if err != nil { /* ... */ }

  return cre.Workflow[*Config]{
    cre.Handler(
      userAddedTrigger,
      onUserAdded,
    ),
  }, nil
}

// The handler function receives the raw event log.
func onUserAdded(config *Config, runtime cre.Runtime, log *evm.Log) (string, error) {
  logger := runtime.Logger()

  // You must re-create the contract instance to access the decoder.
  userDirectory, err := user_directory.NewUserDirectory(evmClient, contractAddress, nil)
  if err != nil { /* ... */ }

  // Use the generated Codec to decode the raw log into a typed Go struct.
  decodedLog, err := userDirectory.Codec.DecodeUserAdded(log)
  if err != nil {
    return "", fmt.Errorf("failed to decode log: %w", err)
  }

  logger.Info("New user added!", "address", decodedLog.UserAddress, "name", decodedLog.UserName)
  return "ok", nil
}
```

## What the CLI Generates

The generator creates a Go package for each ABI file.

- **For all contracts**:
  - `Codec` interface for low-level encoding and decoding.
- **For onchain reads**:
  - A contract **client struct** (e.g., `Storage`) to interact with.
  - A **constructor function** (e.g., `NewStorage(...)`) to instantiate the client.
  - **Method wrappers** for each `view`/`pure` function (e.g., `storage.Get(...)`) that return a promise.
- **For onchain writes**:
  - A **Go type** for each `struct` exposed via a public function (e.g., `price_updater.PriceData`).
  - A `WriteReportFrom` method on the **contract client struct** (e.g., `priceUpdater.WriteReportFromPriceData(...)`). This method handles the full process of generating and sending a report and returns a promise that resolves with the transaction details.
- **For events**:
  - A **Go struct** for each `event` (e.g., `UserAdded`).
  - A `Decode` method on the `Codec` to parse raw log data into the corresponding Go struct.
  - A `LogTriggerLog` method on the contract client to easily create a workflow trigger.
  - A `FilterLogs` method to query historical logs for that event.

## Using mock bindings for testing

The `_mock.go` files allow you to test your workflows without deploying or interacting with real contracts.
Each mock struct provides:

- **Test-friendly constructor**: `New<Contract>Mock(address, evmMockClient)` creates a mock instance
- **Mockable methods**: Set custom function implementations for each contract `view`/`pure` function
- **Type safety**: The same input/output types as the real binding

### Complete example: Testing a workflow with mocks

Let's say you have a workflow in `my-workflow/main.go` that reads from a `Storage` contract. Create a test file named `main_test.go` in the same directory.

```go
// File: my-workflow/main_test.go
package main

import (
  "math/big"
  "testing"

  "github.com/ethereum/go-ethereum/common"
  "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
  evmmock "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm/mock"
  "github.com/stretchr/testify/require"

  "your-project/contracts/evm/src/generated/storage"
)

// Define your config types in the test file to match your workflow's structure
// Note: Your main.go likely has //go:build wasip1 (for WASM compilation),
// which means those types aren't available when running regular Go tests.
// So you need to redefine them here in your test file.
type EvmConfig struct {
  StorageAddress string `json:"storageAddress"`
  ChainName      string `json:"chainName"`
}

type Config struct {
  Evms []EvmConfig `json:"evms"`
}

func TestStorageRead(t *testing.T) {
  // 1. Set up your config
  config := &Config{
    Evms: []EvmConfig{
      {
        StorageAddress: "0xa17CF997C28FF154eDBae1422e6a50BeF23927F4",
        ChainName:      "ethereum-testnet-sepolia",
      },
    },
  }

  // 2. Create a mock EVM client
  chainSelector := uint64(evm.EthereumTestnetSepolia)
  evmMock, err := evmmock.NewClientCapability(chainSelector, t)
  require.NoError(t, err)

  // 3. Create a mock Storage contract and set up mock behavior
  storageAddress := common.HexToAddress(config.Evms[0].StorageAddress)
  storageMock := storage.NewStorageMock(storageAddress, evmMock)

  // 4. Mock the Get() function to return a controlled value
  storageMock.Get = func() (*big.Int, error) {
    return big.NewInt(42), nil
  }

  // 5. Now when your workflow code creates a Storage contract with this evmMock,
  // it will automatically use the mocked Get() function.
  // The mock is registered with the evmMock, so any contract at this address
  // will use the mock behavior you defined.

  // In a real test, you would call your workflow function here and verify results.
  // Example:
  // result, err := onCronTrigger(config, runtime, &cron.Payload{})
  // require.NoError(t, err)
  // require.Equal(t, big.NewInt(42), result.StorageValue)

  // For this demo, we just verify the mock was set up
  require.NotNil(t, storageMock)
  t.Logf("Mock set up successfully - Get() will return 42")
}
```

### Running your tests

From your project root, run:

```bash
# Test a specific workflow
go test ./my-workflow

# Test with verbose output (shows t.Logf messages)
go test -v ./my-workflow

# Test all workflows in your project
go test ./...
```

**Expected output with `-v` flag:**

```bash
=== RUN   TestStorageRead
    main_test.go:55: Mock set up successfully - Get() will return 42
--- PASS: TestStorageRead (0.00s)
PASS
ok      onchain-calculator/my-calculator-workflow       0.257s
```

The test passes, confirming your mock contract is set up correctly. In a real workflow test, you would call your workflow function and verify it produces the expected results using the mocked contract.

### Best practices for workflow testing 1.
**Name test files correctly**: Use `_test.go` (e.g., `main_test.go`) and place them in your workflow directory 2. **Test function naming**: Start test functions with `Test` (e.g., `TestMyWorkflow`, `TestCronTrigger`) 3. **Mock all external dependencies**: Use mock contracts for EVM calls and mock HTTP clients for API requests 4. **Test different scenarios**: Create separate test functions for success cases, error cases, and edge cases ### Complete reference example For a comprehensive example showing how to test workflows with multiple triggers (cron, HTTP, EVM log) and multiple mock contracts, see the Custom Data Feed demo workflow's `workflow_test.go` file. To generate this example: 1. Run `cre init` from your project directory 2. Select **Golang** as your language 3. Choose the **"Custom data feed: Updating on-chain data periodically using offchain API data"** template 4. After initialization completes, examine the generated `workflow_test.go` file in your workflow directory This generated test file demonstrates real-world patterns for testing complex workflows with multiple capabilities and mock contracts. ## Best practices 1. **Regenerate when needed**: Re-run the generator if you update your contract ABIs. 2. **Handle errors**: Always check for errors at each step. 3. **Organize ABIs**: Keep your ABI files clearly named in the `contracts/evm/src/abi/` directory. 4. **Use mocks in tests**: Leverage the generated mock bindings to test your workflows in isolation without needing deployed contracts. ## Where to go next Now that you know how to generate bindings, you can use them to [read data from](/cre/guides/workflow/using-evm-client/onchain-read) or [write data to](/cre/guides/workflow/using-evm-client/onchain-write) your contracts, or [trigger workflows from events](/cre/guides/workflow/using-triggers/evm-log-trigger). --- # Building Consumer Contracts Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts Last Updated: 2025-11-04 When your workflow [writes data to the blockchain](/cre/guides/workflow/using-evm-client/onchain-write), it doesn't call your contract directly. Instead, it submits a signed report to a Chainlink `KeystoneForwarder` contract, which then calls your contract. This guide explains how to build a consumer contract that can securely receive and process data from a CRE workflow. **In this guide:** - [Core Concepts: The Onchain Data Flow](#1-core-concepts-the-onchain-data-flow) - [The IReceiver Standard](#2-the-ireceiver-standard) - [Using IReceiverTemplate](#3-using-ireceivertemplate) - [Advanced Usage](#4-advanced-usage-optional) - [Complete Examples](#5-complete-examples) ## 1. Core Concepts: The Onchain Data Flow 1. **Workflow Execution**: Your workflow [produces a final, signed report](/cre/guides/workflow/using-evm-client/onchain-write/writing-data-onchain). 2. **EVM Write**: The EVM capability sends this report to the Chainlink-managed `KeystoneForwarder` contract. 3. **Forwarder Validation**: The `KeystoneForwarder` validates the report's signatures. 4. **Callback to Your Contract**: If the report is valid, the forwarder calls a designated function (`onReport`) on your consumer contract to deliver the data. ## 2. The `IReceiver` Standard To be a valid target for the `KeystoneForwarder`, your consumer contract must satisfy two main requirements: ### 2.1 Implement the `IReceiver` Interface The `KeystoneForwarder` needs a standardized function to call. 
This is defined by the `IReceiver` interface, which mandates an `onReport` function.

```solidity
interface IReceiver is IERC165 {
    function onReport(bytes calldata metadata, bytes calldata report) external;
}
```

- `metadata`: Contains information about the workflow (ID, name, owner).
- `report`: The raw, ABI-encoded data payload from your workflow.

### 2.2 Support ERC165 Interface Detection

[ERC165](https://eips.ethereum.org/EIPS/eip-165) is a standard that allows contracts to publish the interfaces they support. The `KeystoneForwarder` uses this to check if your contract supports the `IReceiver` interface before sending a report.

## 3. Using `IReceiverTemplate`

### 3.1 Overview

While you can implement these standards manually, we provide an abstract contract, `IReceiverTemplate.sol`, that does the heavy lifting for you. Inheriting from it is the recommended best practice.

**Key features:**

- **Optional Permission Controls**: Choose your security level—enable forwarder address checks, workflow ID validation, workflow owner verification, or any combination
- **Flexible and Updatable**: All permission settings can be configured and updated via setter functions after deployment
- **Simplified Logic**: You only need to implement `_processReport(bytes calldata report)` with your business logic
- **Built-in Access Control**: Includes OpenZeppelin's `Ownable` for secure permission management
- **ERC165 Support**: Includes the necessary `supportsInterface` function
- **Metadata Access**: Helper function to decode workflow ID, name, and owner for custom validation logic

### 3.2 Contract Source Code

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {IERC165} from "./IERC165.sol";
import {IReceiver} from "./IReceiver.sol";
import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol";

/// @title IReceiverTemplate - Abstract receiver with optional permission controls
/// @notice Provides flexible, updatable security checks for receiving workflow reports
/// @dev All permission fields default to zero (disabled). Use setter functions to enable checks.
abstract contract IReceiverTemplate is IReceiver, Ownable {
  // Optional permission fields (all default to zero = disabled)
  address public forwarderAddress; // If set, only this address can call onReport
  address public expectedAuthor; // If set, only reports from this workflow owner are accepted
  bytes10 public expectedWorkflowName; // If set, only reports with this workflow name are accepted
  bytes32 public expectedWorkflowId; // If set, only reports from this specific workflow ID are accepted

  // Custom errors
  error InvalidSender(address sender, address expected);
  error InvalidAuthor(address received, address expected);
  error InvalidWorkflowName(bytes10 received, bytes10 expected);
  error InvalidWorkflowId(bytes32 received, bytes32 expected);

  /// @notice Constructor sets msg.sender as the owner
  /// @dev All permission fields are initialized to zero (disabled by default)
  constructor() Ownable(msg.sender) {}

  /// @inheritdoc IReceiver
  /// @dev Performs optional validation checks based on which permission fields are set
  /// @dev Public and virtual so inheriting contracts can override it and call super.onReport()
  function onReport(bytes calldata metadata, bytes calldata report) public virtual override {
    // Security Check 1: Verify caller is the trusted Chainlink Forwarder (if configured)
    if (forwarderAddress != address(0) && msg.sender != forwarderAddress) {
      revert InvalidSender(msg.sender, forwarderAddress);
    }

    // Security Checks 2-4: Verify workflow identity - ID, owner, and/or name (if any are configured)
    if (expectedWorkflowId != bytes32(0) || expectedAuthor != address(0) || expectedWorkflowName != bytes10(0)) {
      (bytes32 workflowId, bytes10 workflowName, address workflowOwner) = _decodeMetadata(metadata);

      if (expectedWorkflowId != bytes32(0) && workflowId != expectedWorkflowId) {
        revert InvalidWorkflowId(workflowId, expectedWorkflowId);
      }
      if (expectedAuthor != address(0) && workflowOwner != expectedAuthor) {
        revert InvalidAuthor(workflowOwner, expectedAuthor);
      }
      if (expectedWorkflowName != bytes10(0) && workflowName != expectedWorkflowName) {
        revert InvalidWorkflowName(workflowName, expectedWorkflowName);
      }
    }

    _processReport(report);
  }

  /// @notice Updates the forwarder address that is allowed to call onReport
  /// @param _forwarder The new forwarder address (use address(0) to disable this check)
  function setForwarderAddress(address _forwarder) external onlyOwner {
    forwarderAddress = _forwarder;
  }

  /// @notice Updates the expected workflow owner address
  /// @param _author The new expected author address (use address(0) to disable this check)
  function setExpectedAuthor(address _author) external onlyOwner {
    expectedAuthor = _author;
  }

  /// @notice Updates the expected workflow name
  /// @param _name The new expected workflow name (use bytes10(0) to disable this check)
  function setExpectedWorkflowName(bytes10 _name) external onlyOwner {
    expectedWorkflowName = _name;
  }

  /// @notice Updates the expected workflow ID
  /// @param _id The new expected workflow ID (use bytes32(0) to disable this check)
  function setExpectedWorkflowId(bytes32 _id) external onlyOwner {
    expectedWorkflowId = _id;
  }

  /// @notice Extracts all metadata fields from the onReport metadata parameter
  /// @param metadata The metadata in bytes format
  /// @return workflowId The unique identifier of the workflow (bytes32)
  /// @return workflowName The name of the workflow (bytes10)
  /// @return workflowOwner The owner address of the workflow
  function _decodeMetadata(bytes memory metadata) internal pure returns (bytes32 workflowId, bytes10 workflowName, address workflowOwner) {
    // Metadata structure:
    // - First 32 bytes: length of the byte array (standard for dynamic bytes)
    // - Offset 32, size 32: workflow_id (bytes32)
    // - Offset 64, size 10: workflow_name (bytes10)
    // - Offset 74, size 20: workflow_owner (address)
    assembly {
      workflowId := mload(add(metadata, 32))
      workflowName := mload(add(metadata, 64))
      workflowOwner := shr(mul(12, 8), mload(add(metadata, 74)))
    }
  }

  /// @notice Abstract function to process the report data
  /// @param report The report calldata containing your workflow's encoded data
  /// @dev Implement this function with your contract's business logic
  function _processReport(bytes calldata report) internal virtual;

  /// @inheritdoc IERC165
  function supportsInterface(bytes4 interfaceId) public pure virtual override returns (bool) {
    return interfaceId == type(IReceiver).interfaceId || interfaceId == type(IERC165).interfaceId;
  }
}
```

### 3.3 Quick Start

**Step 1: Inherit and implement your business logic**

The simplest way to use `IReceiverTemplate` is to inherit from it and implement the `_processReport` function:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.26;

import { IReceiverTemplate } from "./IReceiverTemplate.sol";

contract MyConsumer is IReceiverTemplate {
  uint256 public storedValue;

  event ValueUpdated(uint256 newValue);

  // Simple constructor - no parameters needed
  constructor() IReceiverTemplate() {}

  // Implement your business logic here
  function _processReport(bytes calldata report) internal override {
    uint256 newValue = abi.decode(report, (uint256));
    storedValue = newValue;
    emit ValueUpdated(newValue);
  }
}
```

### 3.4 Configuring Security

**Step 2: Configure permissions (optional)**

After deploying your contract, the owner can enable any combination of security checks using the setter functions.

**Configuration examples:**

```solidity
// Example: Enable forwarder check only
myConsumer.setForwarderAddress(0xF8344CFd5c43616a4366C34E3EEE75af79a74482); // Ethereum Sepolia

// Example: Enable workflow ID check
myConsumer.setExpectedWorkflowId(0x1234...); // Your specific workflow ID

// Example: Enable workflow owner and name checks
myConsumer.setExpectedAuthor(0xYourAddress...);
myConsumer.setExpectedWorkflowName(0x6d795f776f726b666c6f); // "my_workflo" in hex (bytes10 = 10 bytes max)

// Example: Disable a check later (set to zero)
myConsumer.setExpectedWorkflowName(bytes10(0));
```

**What the template handles for you:**

- Validates the caller address (if `forwarderAddress` is set)
- Validates the workflow ID (if `expectedWorkflowId` is set)
- Validates the workflow owner (if `expectedAuthor` is set)
- Validates the workflow name (if `expectedWorkflowName` is set)
- Handles ERC165 interface detection
- Provides access control via OpenZeppelin's `Ownable`
- Calls your `_processReport` function with validated data

**What you implement:**

- Your business logic in `_processReport`
- (Optional) Configure permissions after deployment using setter functions

## 4. Advanced Usage (Optional)
### 4.1 Custom Validation Logic

You can override `onReport` to add your own validation logic before or after the standard checks:

```solidity
import { IReceiverTemplate } from "./IReceiverTemplate.sol";

contract AdvancedConsumer is IReceiverTemplate {
  uint256 public minReportInterval = 1 hours;
  uint256 public lastReportTime;

  error ReportTooFrequent(uint256 timeSinceLastReport, uint256 minInterval);

  // Add custom validation before parent's checks
  function onReport(bytes calldata metadata, bytes calldata report) public override {
    // Custom check: Rate limiting
    if (block.timestamp < lastReportTime + minReportInterval) {
      revert ReportTooFrequent(block.timestamp - lastReportTime, minReportInterval);
    }

    // Call parent implementation for standard permission checks
    super.onReport(metadata, report);

    lastReportTime = block.timestamp;
  }

  function _processReport(bytes calldata report) internal override {
    // Your business logic here
    uint256 value = abi.decode(report, (uint256));
    // ... store or process the value ...
  }

  // Allow owner to update rate limit
  function setMinReportInterval(uint256 _interval) external onlyOwner {
    minReportInterval = _interval;
  }
}
```

### 4.2 Using Metadata Fields in Your Logic

The `_decodeMetadata` helper function is available for use in your `_processReport` implementation. This allows you to access workflow metadata for custom business logic:

```solidity
contract MetadataAwareConsumer is IReceiverTemplate {
  mapping(bytes32 => uint256) public reportCountByWorkflow;

  function _processReport(bytes calldata report) internal override {
    // msg.data is the full onReport(bytes,bytes) calldata: skip the 4-byte
    // function selector, then decode the two arguments to recover the metadata
    (bytes memory metadata, ) = abi.decode(msg.data[4:], (bytes, bytes));
    (bytes32 workflowId, , ) = _decodeMetadata(metadata);

    // Use workflow ID in your business logic
    reportCountByWorkflow[workflowId]++;

    // Process the report data
    uint256 value = abi.decode(report, (uint256));
    // ... your logic here ...
  }
}
```

### 4.3 Working with Simulation

When you run `cre workflow simulate`, your workflow interacts with a **`MockForwarder`** contract that does not provide the `workflow_name` or `workflow_owner` metadata. This means consumer contracts that enable `IReceiverTemplate`'s workflow name or owner checks **will fail during simulation**.

**To test your consumer contract with simulation:** Override the `onReport` function to bypass validation checks:

```solidity
function onReport(bytes calldata, bytes calldata report) public override {
  _processReport(report); // Skips validation checks
}
```

**For deployed workflows:** Deployed workflows use the real **`KeystoneForwarder`** contract, which provides full metadata. You can enable all permission checks (forwarder address, workflow ID, owner, name) for production deployments.

## 5. Complete Examples

### Example 1: Simple Consumer Contract

This example inherits from `IReceiverTemplate` to store a temperature value.
```solidity // SPDX-License-Identifier: MIT pragma solidity ^0.8.26; import { IReceiverTemplate } from "./IReceiverTemplate.sol"; contract TemperatureConsumer is IReceiverTemplate { int256 public currentTemperature; event TemperatureUpdated(int256 newTemperature); // Simple constructor - no parameters needed constructor() IReceiverTemplate() {} function _processReport(bytes calldata report) internal override { int256 newTemperature = abi.decode(report, (int256)); currentTemperature = newTemperature; emit TemperatureUpdated(newTemperature); } } ``` **Configuring permissions after deployment:** ```solidity // Enable forwarder check for production temperatureConsumer.setForwarderAddress(0xF8344CFd5c43616a4366C34E3EEE75af79a74482); // Ethereum Sepolia // Enable workflow ID check for highest security temperatureConsumer.setExpectedWorkflowId(0xYourWorkflowId...); ``` ### Example 2: The Proxy Pattern For more complex scenarios, it's best to separate your Chainlink-aware code from your core business logic. The **Proxy Pattern** is a robust architecture that uses two contracts to achieve this: - **A Logic Contract**: Holds the state and the core functions of your application. It knows nothing about the Forwarder contract or the `onReport` function. - **A Proxy Contract**: Acts as the secure entry point. It inherits from `IReceiverTemplate` and forwards validated reports to the Logic Contract. This separation makes your business logic more modular and reusable. #### The Logic Contract (`ReserveManager.sol`) This contract, our "vault", holds the state and the `updateReserves` function. For security, it only accepts calls from its trusted Proxy. It also includes an owner-only function to update the proxy address, making the system upgradeable without requiring a migration. ```solidity // SPDX-License-Identifier: MIT pragma solidity ^0.8.19; import { Ownable } from "@openzeppelin/contracts/access/Ownable.sol"; contract ReserveManager is Ownable { struct UpdateReserves { uint256 ethPrice; uint256 btcPrice; } address public proxyAddress; uint256 public lastEthPrice; uint256 public lastBtcPrice; uint256 public lastUpdateTime; event ReservesUpdated(uint256 ethPrice, uint256 btcPrice, uint256 updateTime); modifier onlyProxy() { require(msg.sender == proxyAddress, "Caller is not the authorized proxy"); _; } constructor() Ownable(msg.sender) {} function setProxyAddress(address _proxyAddress) external onlyOwner { proxyAddress = _proxyAddress; } function updateReserves(UpdateReserves memory data) external onlyProxy { lastEthPrice = data.ethPrice; lastBtcPrice = data.btcPrice; lastUpdateTime = block.timestamp; emit ReservesUpdated(data.ethPrice, data.btcPrice, block.timestamp); } } ``` #### The Proxy Contract (`UpdateReservesProxy.sol`) This contract, our "bouncer", is the only contract that interacts with the Chainlink platform. It inherits `IReceiverTemplate` to validate incoming reports and then calls the `ReserveManager`. 
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import { ReserveManager } from "./ReserveManager.sol";
import { IReceiverTemplate } from "./keystone/IReceiverTemplate.sol";

contract UpdateReservesProxy is IReceiverTemplate {
  ReserveManager public s_reserveManager;

  constructor(address reserveManagerAddress) {
    s_reserveManager = ReserveManager(reserveManagerAddress);
  }

  /// @inheritdoc IReceiverTemplate
  function _processReport(bytes calldata report) internal override {
    ReserveManager.UpdateReserves memory updateReservesData = abi.decode(report, (ReserveManager.UpdateReserves));
    s_reserveManager.updateReserves(updateReservesData);
  }
}
```

**Configuring permissions after deployment:**

```solidity
// Enable forwarder check (recommended)
updateReservesProxy.setForwarderAddress(0xF8344CFd5c43616a4366C34E3EEE75af79a74482); // Ethereum Sepolia

// Enable workflow ID check for production (highest security)
updateReservesProxy.setExpectedWorkflowId(0xYourWorkflowId...);
```

#### How it Works

The deployment and configuration process involves these steps:

1. **Deploy the Logic Contract**: Deploy `ReserveManager.sol`. The wallet that deploys this contract becomes its `owner`.
2. **Deploy the Proxy Contract**: Deploy `UpdateReservesProxy.sol`, passing the address of the deployed `ReserveManager` contract to its constructor.
3. **Link the Contracts**: The `owner` of the `ReserveManager` contract must call its `setProxyAddress` function, passing in the address of the `UpdateReservesProxy` contract. This authorizes the proxy to call the logic contract.
4. **Configure Permissions** (Recommended): The `owner` of the proxy should call setter functions to enable security checks:

   ```solidity
   updateReservesProxy.setForwarderAddress(0xF8344CFd5c43616a4366C34E3EEE75af79a74482);
   updateReservesProxy.setExpectedWorkflowId(0xYourWorkflowId...);
   ```

5. **Configure Workflow**: In your workflow's `config.json`, use the address of the **Proxy Contract** as the receiver address.
6. **Execution Flow**: When your workflow runs:
   - The Chainlink Forwarder calls `onReport` on your **Proxy**
   - The Proxy validates the report (forwarder address, workflow ID, etc.)
   - The Proxy's `_processReport` function calls the `updateReserves` function on your **Logic Contract**
   - Because the caller is the trusted proxy, the `onlyProxy` check passes, and your state is securely updated
7. **(Optional) Upgrade**: If you later need to deploy a new proxy, the owner can:
   - Deploy the new proxy contract
   - Call `setProxyAddress` on the `ReserveManager` to point it to the new proxy's address
   - Update the workflow configuration to use the new proxy address

## Where to go next?

Now that you know how to build a consumer contract, the next step is to call it from your workflow.

- **[Onchain Write](/cre/guides/workflow/using-evm-client/onchain-write)**: Learn how to use the `EVMClient` to send data to your new consumer contract.

---

# Using WriteReportFrom Helpers

Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-write/using-write-report-helpers
Last Updated: 2025-11-04

This guide explains how to write data to a smart contract using the `WriteReportFrom()` helper methods that are automatically generated from your contract's ABI. This is the recommended and simplest approach for most users.
**Use this approach when:** - You're sending a **struct** to your consumer contract - The struct appears in a `public` or `external` function's signature (as a parameter or return value); this is required for the binding generator to detect it in your contract's ABI and create the helper method **Don't meet these requirements?** See the [Onchain Write](/cre/guides/workflow/using-evm-client/onchain-write#choosing-your-approach-which-guide-should-you-follow) page to find the right approach for your scenario. ## Prerequisites Before you begin, ensure you have: 1. **A consumer contract** deployed that implements the `IReceiver` interface - See [Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts) if you need to create one 2. **Generated bindings** from your consumer contract's ABI - See [Generating Bindings](/cre/guides/workflow/using-evm-client/generating-bindings) if you haven't created them yet ## What the helper does for you The `WriteReportFrom()` helper method automates the entire onchain write process: 1. **ABI-encodes your struct** into bytes 2. **Generates a cryptographically signed report** via `runtime.GenerateReport()` 3. **Submits the report to the blockchain** via `evm.Client.WriteReport()` 4. **Returns a promise** with the transaction details All of this happens in a single method call, making your workflow code clean and simple. ## The write pattern Writing to contracts using binding helpers follows this simple pattern: 1. **Create an EVM client** with your target chain selector 2. **Instantiate the contract binding** with the consumer contract's address 3. **Prepare your data** using the generated struct type 4. **Call the write helper** and await the result Let's walk through each step with a complete example. ## Step-by-step example Assume you have a consumer contract with a struct that looks like this: ```solidity struct UpdateReserves { uint256 totalMinted; uint256 totalReserve; } // This function makes the struct appear in the ABI function processReserveUpdate(UpdateReserves memory update) public { // ... 
logic } ``` ### Step 1: Create an EVM client First, create an EVM client configured for the chain where your consumer contract is deployed: ```go import ( "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm" ) func updateReserves(config *Config, runtime cre.Runtime, evmConfig EvmConfig, totalSupply *big.Int, totalReserveScaled *big.Int) error { logger := runtime.Logger() // Create EVM client with your target chain evmClient := &evm.Client{ ChainSelector: evmConfig.ChainSelector, // e.g., 16015286601757825753 for Sepolia } ``` ### Step 2: Instantiate the contract binding Create an instance of your [generated binding](/cre/guides/workflow/using-evm-client/generating-bindings), pointing it at your consumer contract's address: ```go import ( "contracts/evm/src/generated/reserve_manager" "github.com/ethereum/go-ethereum/common" ) // Convert the address string from your config to common.Address contractAddress := common.HexToAddress(evmConfig.ConsumerAddress) // Create the binding instance reserveManager, err := reserve_manager.NewReserveManager(evmClient, contractAddress, nil) if err != nil { return fmt.Errorf("failed to create reserve manager: %w", err) } ``` ### Step 3: Prepare your data Create an instance of the generated struct type with your data: ```go // Use the generated struct type from your bindings updateData := reserve_manager.UpdateReserves{ TotalMinted: totalSupply, // *big.Int TotalReserve: totalReserveScaled, // *big.Int } logger.Info("Prepared data for onchain write", "totalMinted", totalSupply.String(), "totalReserve", totalReserveScaled.String()) ``` ### Step 4: Call the write helper and await Call the generated `WriteReportFrom()` method and await the result: ```go // Call the generated helper - it handles encoding, report generation, and submission writePromise := reserveManager.WriteReportFromUpdateReserves(runtime, updateData, nil) logger.Info("Waiting for write report response") // Await the transaction result resp, err := writePromise.Await() if err != nil { logger.Error("WriteReport failed", "error", err) return fmt.Errorf("failed to write report: %w", err) } // Log the successful transaction txHash := common.BytesToHash(resp.TxHash).Hex() logger.Info("Write report transaction succeeded", "txHash", txHash) return nil } ``` ## Understanding the response The write helper returns an `evm.WriteReportReply` struct with comprehensive transaction details: ```go type WriteReportReply struct { TxStatus TxStatus // SUCCESS, REVERTED, or FATAL ReceiverContractExecutionStatus *ReceiverContractExecutionStatus // Contract execution status TxHash []byte // Transaction hash TransactionFee *pb.BigInt // Fee paid in Wei ErrorMessage *string // Error message if failed } ``` **Key fields to check:** - **`TxStatus`**: Indicates whether the transaction succeeded, reverted, or had a fatal error - **`TxHash`**: The transaction hash you can use to verify on a block explorer (e.g., Etherscan) - **`TransactionFee`**: The total gas cost paid for the transaction in Wei - **`ReceiverContractExecutionStatus`**: Whether your consumer contract's `onReport()` function executed successfully - **`ErrorMessage`**: If the transaction failed, this field contains details about what went wrong ## Complete example Here's a complete, runnable workflow function that demonstrates the end-to-end pattern: ```go package main import ( "contracts/evm/src/generated/reserve_manager" "fmt" "log/slog" "math/big" "github.com/ethereum/go-ethereum/common" 
"github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm" "github.com/smartcontractkit/cre-sdk-go/cre" ) type EvmConfig struct { ConsumerAddress string `json:"consumerAddress"` ChainSelector uint64 `json:"chainSelector"` } type Config struct { // Add other config fields from your workflow here } func updateReserves(config *Config, runtime cre.Runtime, evmConfig EvmConfig, totalSupply *big.Int, totalReserveScaled *big.Int) error { logger := runtime.Logger() logger.Info("Updating reserves", "totalSupply", totalSupply, "totalReserveScaled", totalReserveScaled) // Create EVM client with chain selector evmClient := &evm.Client{ ChainSelector: evmConfig.ChainSelector, } // Create contract binding contractAddress := common.HexToAddress(evmConfig.ConsumerAddress) reserveManager, err := reserve_manager.NewReserveManager(evmClient, contractAddress, nil) if err != nil { return fmt.Errorf("failed to create reserve manager: %w", err) } logger.Info("Writing report", "totalSupply", totalSupply, "totalReserveScaled", totalReserveScaled) // Call the write method writePromise := reserveManager.WriteReportFromUpdateReserves(runtime, reserve_manager.UpdateReserves{ TotalMinted: totalSupply, TotalReserve: totalReserveScaled, }, nil) logger.Info("Waiting for write report response") // Await the transaction resp, err := writePromise.Await() if err != nil { logger.Error("WriteReport await failed", "error", err, "errorType", fmt.Sprintf("%T", err)) return fmt.Errorf("failed to write report: %w", err) } logger.Info("Write report transaction succeeded", "txHash", common.BytesToHash(resp.TxHash).Hex()) return nil } // NOTE: This is a placeholder. You would need a full workflow with InitWorkflow, // a trigger, and a callback that calls this `updateReserves` function. func main() { // wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow) } ``` ## Configuring gas limits By default, the SDK automatically estimates gas limits for your transactions. However, for complex transactions or to ensure sufficient gas, you can explicitly set a gas limit: ```go // Create a gas configuration gasConfig := &evm.GasConfig{ GasLimit: 1000000, // Adjust based on your contract's needs } // Pass it as the third argument to the write helper writePromise := reserveManager.WriteReportFromUpdateReserves(runtime, updateData, gasConfig) ``` ## Best practices 1. **Always check errors**: Both the write call and the `.Await()` can fail—handle both error paths 2. **Log transaction details**: Include transaction hashes in your logs for debugging and monitoring 3. **Validate response status**: Check the `TxStatus` field to ensure the transaction succeeded 4. **Override gas limits when needed**: For complex transactions, set explicit gas limits higher than the automatic estimates to avoid "out of gas" errors 5. 
**Monitor contract execution**: Check `ReceiverContractExecutionStatus` to ensure your consumer contract processed the data correctly ## Troubleshooting **Transaction failed with "out of gas"** - Increase the `GasLimit` in your `GasConfig` - Check if your consumer contract's logic is more complex than expected **"WriteReport await failed" error** - Check that your consumer contract address is correct - Verify you're using the correct chain selector - Ensure your account has sufficient funds for gas **Transaction succeeded but contract didn't update** - Check the `ReceiverContractExecutionStatus` field - Review your consumer contract's `onReport()` logic for validation failures - Verify the struct fields match what your contract expects ## Learn more - **[Onchain Write Overview](/cre/guides/workflow/using-evm-client/onchain-write)**: Understand all onchain write approaches - **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)**: Create secure consumer contracts - **[Generating Bindings](/cre/guides/workflow/using-evm-client/generating-bindings)**: Generate type-safe contract bindings - **[EVM Client Reference](/cre/reference/sdk/evm-client)**: Complete API documentation --- # Generating Reports: Single Values Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values Last Updated: 2025-11-04 This guide shows how to manually generate a report containing a single value (like `uint256`, `address`, or `bool`). This is useful when you need to send a simple value onchain but don't have a struct or binding helper available. **Use this approach when:** - You're sending a **single primitive value** (like `uint256`, `address`, `bool`, `bytes32`) to your consumer contract - You don't have (or need) binding helpers for your contract **Don't meet these requirements?** See the [Onchain Write](/cre/guides/workflow/using-evm-client/onchain-write/overview-go#choosing-your-approach-which-guide-should-you-follow) page to find the right approach for your scenario. ## Prerequisites - Familiarity with [Working with Solidity input types](/cre/guides/workflow/using-evm-client/onchain-write#working-with-solidity-input-types) ## What this guide covers Manually generating a report for a single value involves two main steps: 1. **ABI-encode the value** into bytes using the `go-ethereum/accounts/abi` package 2. **Generate a cryptographically signed report** using `runtime.GenerateReport()` The resulting report can then be: - Submitted to the blockchain via `evm.Client.WriteReport()` (see [Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain)) - Sent to an HTTP endpoint via `http.Client` (see [Submitting Reports via HTTP](/cre/guides/workflow/using-http-client/submitting-reports-http)) ## Step-by-step example ### 1. Create your value Start with a Go value that you want to send. For example, a `*big.Int` for a Solidity `uint256`: ```go import "math/big" myValue := big.NewInt(123456789) logger.Info("Value to send", "value", myValue.String()) ``` ### 2. 
ABI-encode the value Use the `ethereum/go-ethereum/accounts/abi` package to encode your value as a Solidity type: ```go import "github.com/ethereum/go-ethereum/accounts/abi" // Create the Solidity type definition uint256Type, err := abi.NewType("uint256", "", nil) if err != nil { return fmt.Errorf("failed to create type: %w", err) } // Create an arguments array with your type args := abi.Arguments{{Type: uint256Type}} // Pack (encode) your value encodedValue, err := args.Pack(myValue) if err != nil { return fmt.Errorf("failed to encode value: %w", err) } ``` ### 3. Generate the report Use `runtime.GenerateReport()` to create a signed, consensus-verified report from the encoded bytes: ```go reportPromise := runtime.GenerateReport(&cre.ReportRequest{ EncodedPayload: encodedValue, EncoderName: "evm", SigningAlgo: "ecdsa", HashingAlgo: "keccak256", }) report, err := reportPromise.Await() if err != nil { return fmt.Errorf("failed to generate report: %w", err) } logger.Info("Successfully generated report") ``` **Field explanations:** - `EncodedPayload`: The ABI-encoded bytes from step 2 - `EncoderName`: Always `"evm"` for Ethereum reports - `SigningAlgo`: Always `"ecdsa"` for Ethereum - `HashingAlgo`: Always `"keccak256"` for Ethereum #### Understanding the report The `runtime.GenerateReport()` function returns a `*cre.Report` object. This report contains: - **Your ABI-encoded data** (the payload) - **Cryptographic signatures** from the DON nodes - **Metadata** about the workflow (ID, name, owner) - **Consensus proof** that the data was agreed upon by the network This report is designed to be passed directly to either: - `evm.Client.WriteReport()` for onchain delivery - `http.Client` for offchain delivery ### 4. Submit the report Now that you have a generated report, choose where to send it: - **[Submit it to the blockchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain)** via `evm.Client.WriteReport()` - **[Send it via HTTP](/cre/guides/workflow/using-http-client/submitting-reports-http)** via `http.Client` for offchain delivery ## Complete working example Here's a workflow that generates a report from a single `uint256` value: ```go //go:build wasip1 package main import ( "fmt" "log/slog" "math/big" "github.com/ethereum/go-ethereum/accounts/abi" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/cre/wasm" ) type Config struct { Schedule string `json:"schedule"` } type MyResult struct { OriginalValue string EncodedHex string } func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { return cre.Workflow[*Config]{ cre.Handler(cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger), }, nil } func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() // Step 1: Create a value myValue := big.NewInt(123456789) logger.Info("Generated value", "value", myValue.String()) // Step 2: ABI-encode the value as uint256 uint256Type, err := abi.NewType("uint256", "", nil) if err != nil { return nil, fmt.Errorf("failed to create type: %w", err) } args := abi.Arguments{{Type: uint256Type}} encodedValue, err := args.Pack(myValue) if err != nil { return nil, fmt.Errorf("failed to encode value: %w", err) } logger.Info("ABI-encoded value", "hex", fmt.Sprintf("0x%x", encodedValue)) // Step 3: Generate report reportPromise := 
runtime.GenerateReport(&cre.ReportRequest{ EncodedPayload: encodedValue, EncoderName: "evm", SigningAlgo: "ecdsa", HashingAlgo: "keccak256", }) report, err := reportPromise.Await() if err != nil { return nil, fmt.Errorf("failed to generate report: %w", err) } logger.Info("Report generated successfully") // At this point, you would typically submit the report: // - To the blockchain: see "Submitting Reports Onchain" guide // - Via HTTP: see "Submitting Reports via HTTP" guide // For this example, we'll just return the encoded data for verification _ = report // Report is ready to use // Return results return &MyResult{ OriginalValue: myValue.String(), EncodedHex: fmt.Sprintf("0x%x", encodedValue), }, nil } func main() { wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow) } ``` ## Best practices 1. **Always check errors**: Both encoding and report generation can fail—handle both error paths 2. **Use the correct Solidity type string**: Type mismatches will cause ABI encoding failures. Verify your type strings match your contract exactly 3. **Log the encoded data**: For debugging, log the hex-encoded bytes to verify your data is encoded correctly: ```go logger.Info("ABI-encoded value", "hex", fmt.Sprintf("0x%x", encodedValue)) ``` 4. **Refer to go-ethereum documentation**: For complex types, consult the [go-ethereum ABI package documentation](https://pkg.go.dev/github.com/ethereum/go-ethereum/accounts/abi) ## Troubleshooting **"failed to create type" error** - Verify the type string exactly matches Solidity syntax. - For arrays, use `uint256[]` for dynamic arrays or `uint256[3]` for fixed-size arrays. - Check the [go-ethereum type documentation](https://pkg.go.dev/github.com/ethereum/go-ethereum/accounts/abi) for supported types. **"failed to encode value" error** - Ensure your Go value matches the Solidity type (e.g., `*big.Int` for `uint256`, `common.Address` for `address`). Find a list of mappings [here](/cre/guides/workflow/using-evm-client/onchain-read#solidity-to-go-type-mappings). - For integers, use `big.NewInt()` for values that fit in `int64`, or `new(big.Int).SetString()` for larger values. - Verify you're packing the value with `args.Pack(myValue)`, not passing it directly. **Report generation succeeds but onchain submission fails** - This guide only covers report generation. See [Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain) for troubleshooting submission issues. ## Learn more - **[Onchain Write Overview](/cre/guides/workflow/using-evm-client/onchain-write)**: Understand all onchain write approaches - **[Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain)**: Submit your generated report to the blockchain - **[Generating Reports: Structs](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-structs)**: Manually encode and generate reports for struct data - **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)**: Create contracts that can receive your reports - **[EVM Client Reference](/cre/reference/sdk/evm-client)**: Complete API documentation --- # Generating Reports: Structs Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-structs Last Updated: 2025-11-04 This guide shows how to generate a report containing a struct with multiple fields. There are two approaches depending on whether you have generated bindings for your contract. 
## Choosing your approach Use this table to determine which method applies to your situation: | Situation | Binding Helper Available? | Recommended Approach | Section | | ------------------------------------------------------------------------------------------------ | ---------------------------------------------- | --------------------------------- | ----------------------------------------------- | | Struct appears in a `public` or `external` function's signature (as a parameter or return value) | Yes, `Codec.EncodeStruct()` exists | Use the binding's encoding helper | [Using Binding Helpers](#using-binding-helpers) | | Struct is NOT in the ABI | No helper available | Manual tuple encoding | [Manual Encoding](#manual-encoding-advanced) | **Don't meet these requirements?** If you're sending a single value instead of a struct, see [Generating Reports: Single Values](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values). For other approaches, see the [Onchain Write](/cre/guides/workflow/using-evm-client/onchain-write#choosing-your-approach-which-guide-should-you-follow) hub page. ## Using binding helpers If you have generated bindings for a contract that includes your struct in its ABI, the binding generator creates an `EncodeStruct()` method on the `Codec`. This is the simplest and recommended approach. ### When this applies This method is available when your struct appears in a **public or external function's signature** (as a parameter or return value). These function types appear in the contract's ABI, which allows the binding generator to detect the struct and automatically create the encoding helper. **Example contract:** ```solidity contract MyContract { struct PaymentData { address recipient; uint256 amount; uint256 nonce; } // Because this public function uses PaymentData, // the binding generator creates an encoding helper function processPayment(PaymentData memory data) public {} } ``` ### Step 1: Identify the helper method After running `cre generate-bindings`, your binding will include: ```go type MyContractCodec interface { // ... other methods EncodePaymentDataStruct(in PaymentData) ([]byte, error) // ... } ``` ### Step 2: Use the helper ```go import "my-project/contracts/evm/src/generated/my_contract" // Create your struct paymentData := my_contract.PaymentData{ Recipient: common.HexToAddress("0x742d35Cc6634C0532925a3b844Bc9e7595f0bEb"), Amount: big.NewInt(1000000000000000000), Nonce: big.NewInt(42), } // Create contract instance to access the Codec contract, err := my_contract.NewMyContract(evmClient, contractAddress, nil) if err != nil { return err } // Use the encoding helper encodedStruct, err := contract.Codec.EncodePaymentDataStruct(paymentData) if err != nil { return fmt.Errorf("failed to encode struct: %w", err) } ``` ### Step 3: Generate the report ```go reportPromise := runtime.GenerateReport(&cre.ReportRequest{ EncodedPayload: encodedStruct, EncoderName: "evm", SigningAlgo: "ecdsa", HashingAlgo: "keccak256", }) report, err := reportPromise.Await() if err != nil { return fmt.Errorf("failed to generate report: %w", err) } ``` #### Understanding the report The `runtime.GenerateReport()` function returns a `*cre.Report` object. 
This report contains:

- **Your ABI-encoded struct data** (the payload)
- **Cryptographic signatures** from the DON nodes
- **Metadata** about the workflow (ID, name, owner)
- **Consensus proof** that the data was agreed upon by the network

This report is designed to be passed directly to either:

- `evm.Client.WriteReport()` for onchain delivery
- `http.Client` for offchain delivery

The report can now be [submitted onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain) or [sent via HTTP](/cre/guides/workflow/using-http-client/submitting-reports-http).

## Manual encoding (advanced)

If your struct is **not** in the contract's ABI, you won't have a binding helper and must manually create the tuple type and encode it.

### When to use this approach

- You're working with a custom struct that doesn't appear in any `public` or `external` function's signature
- You're encoding data for a third-party contract without bindings
- You need full control over the encoding process

### Step-by-step example

Let's manually encode a `PaymentData` struct:

```solidity
struct PaymentData {
  address recipient;
  uint256 amount;
  uint256 nonce;
}
```

### 1. Define the Go struct

Create a Go struct that matches your Solidity struct:

```go
import (
	"math/big"

	"github.com/ethereum/go-ethereum/common"
)

type PaymentData struct {
	Recipient common.Address
	Amount    *big.Int
	Nonce     *big.Int
}
```

### 2. Create your struct instance

```go
paymentData := PaymentData{
	Recipient: common.HexToAddress("0x742d35Cc6634C0532925a3b844Bc9e7595f0bEb"),
	Amount:    big.NewInt(1000000000000000000), // 1 ETH in wei
	Nonce:     big.NewInt(42),
}
```

### 3. Create the tuple type

Define the struct's fields as a tuple using `abi.NewType()`:

```go
import "github.com/ethereum/go-ethereum/accounts/abi"

tupleType, err := abi.NewType(
	"tuple",
	"",
	[]abi.ArgumentMarshaling{
		{Name: "recipient", Type: "address"},
		{Name: "amount", Type: "uint256"},
		{Name: "nonce", Type: "uint256"},
	},
)
if err != nil {
	return fmt.Errorf("failed to create tuple type: %w", err)
}
```

**Important:** The field names and types must match your Solidity struct exactly.

### 4. ABI-encode the struct

```go
args := abi.Arguments{
	{Name: "paymentData", Type: tupleType},
}

encodedStruct, err := args.Pack(paymentData)
if err != nil {
	return fmt.Errorf("failed to encode struct: %w", err)
}
```

### 5.
Generate the report ```go reportPromise := runtime.GenerateReport(&cre.ReportRequest{ EncodedPayload: encodedStruct, EncoderName: "evm", SigningAlgo: "ecdsa", HashingAlgo: "keccak256", }) report, err := reportPromise.Await() if err != nil { return fmt.Errorf("failed to generate report: %w", err) } logger.Info("Report generated successfully") ``` ### Complete working example Here's a full workflow that generates a report from a struct: ```go //go:build wasip1 package main import ( "fmt" "log/slog" "math/big" "github.com/ethereum/go-ethereum/accounts/abi" "github.com/ethereum/go-ethereum/common" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/cre/wasm" ) type Config struct { Schedule string `json:"schedule"` } // Go struct matching Solidity struct type PaymentData struct { Recipient common.Address Amount *big.Int Nonce *big.Int } type MyResult struct { EncodedHex string } func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { return cre.Workflow[*Config]{ cre.Handler(cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger), }, nil } func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() // Step 1: Create struct instance paymentData := PaymentData{ Recipient: common.HexToAddress("0x742d35Cc6634C0532925a3b844Bc9e7595f0bEb"), Amount: big.NewInt(1000000000000000000), // 1 ETH Nonce: big.NewInt(42), } logger.Info("Created payment data", "recipient", paymentData.Recipient.Hex(), "amount", paymentData.Amount.String()) // Step 2: Create tuple type matching Solidity struct tupleType, err := abi.NewType( "tuple", "", []abi.ArgumentMarshaling{ {Name: "recipient", Type: "address"}, {Name: "amount", Type: "uint256"}, {Name: "nonce", Type: "uint256"}, }, ) if err != nil { return nil, fmt.Errorf("failed to create tuple type: %w", err) } // Step 3: Encode the struct args := abi.Arguments{{Name: "paymentData", Type: tupleType}} encodedStruct, err := args.Pack(paymentData) if err != nil { return nil, fmt.Errorf("failed to encode struct: %w", err) } logger.Info("Encoded struct", "hex", fmt.Sprintf("0x%x", encodedStruct)) // Step 4: Generate report reportPromise := runtime.GenerateReport(&cre.ReportRequest{ EncodedPayload: encodedStruct, EncoderName: "evm", SigningAlgo: "ecdsa", HashingAlgo: "keccak256", }) report, err := reportPromise.Await() if err != nil { return nil, fmt.Errorf("failed to generate report: %w", err) } logger.Info("Report generated successfully") // At this point, you would typically submit the report: // - To the blockchain: see "Submitting Reports Onchain" guide // - Via HTTP: see "Submitting Reports via HTTP" guide // For this example, we'll just return the encoded data for verification _ = report // Report is ready to use return &MyResult{ EncodedHex: fmt.Sprintf("0x%x", encodedStruct), }, nil } func main() { wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow) } ``` ## Best practices 1. **Always check errors**: Both encoding and report generation can fail—handle both error paths 2. **Use binding helpers when available**: The `Codec.EncodeStruct()` helper is simpler and less error-prone than manual encoding 3. **Match Solidity types exactly**: For manual encoding, ensure your tuple definition matches your Solidity struct field-by-field, including order and types 4. 
**Log the encoded data**: For debugging, log the hex-encoded bytes to verify your struct is encoded correctly:

```go
logger.Info("ABI-encoded struct", "hex", fmt.Sprintf("0x%x", encodedStruct))
```

## Troubleshooting

**"failed to create tuple type" error**

- Verify the field types in your `ArgumentMarshaling` match Solidity exactly (e.g., `uint256`, not `uint` or `int`)
- Ensure field names match
- Check that nested types are properly defined if you have complex structs

**"failed to encode struct" error**

- Verify your Go struct fields match the Solidity struct in order and type
- Ensure you're using the correct Go types (e.g., `*big.Int` for `uint256`, `common.Address` for `address`). A list of mappings can be found [here](/cre/guides/workflow/using-evm-client/onchain-read#solidity-to-go-type-mappings).
- Check that all fields are populated (Go's zero values might not match what you expect)

**Binding helper not found**

- Confirm your struct appears in a `public` or `external` function's signature (as a parameter or return value) in your contract
- Verify you've run `cre generate-bindings` after updating your contract
- Check the generated binding file—the method is named after your struct (e.g., `EncodePaymentDataStruct()`)

**Report generation succeeds but onchain submission fails**

- This guide only covers report generation. See [Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain) for troubleshooting submission issues

## Learn more

- **[Onchain Write Overview](/cre/guides/workflow/using-evm-client/onchain-write)**: Understand all onchain write approaches
- **[Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain)**: Submit your generated report to the blockchain
- **[Generating Reports: Single Values](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values)**: Generate reports for single primitive values
- **[Using WriteReportFrom Helpers](/cre/guides/workflow/using-evm-client/onchain-write/using-write-report-helpers)**: Use the all-in-one helper that handles encoding, generation, and submission
- **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)**: Create contracts that can receive your reports
- **[EVM Client Reference](/cre/reference/sdk/evm-client)**: Complete API documentation

---

# Submitting Reports Onchain

Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain
Last Updated: 2025-11-04

This guide shows how to manually submit a generated report to the blockchain using the low-level `evm.Client.WriteReport()` method.
**Use this approach when:** - You've already generated a report using `runtime.GenerateReport()` (from [single value](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values) or [struct](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-structs) generation) - You need fine-grained control over the submission process - You don't have (or can't use) the `WriteReportFrom()` binding helper ## Prerequisites You must have: - A generated report ready to submit (from [single value](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values) or [struct](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-structs) generation) - A [consumer contract](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts) address that implements the `IReceiver` interface ## Step-by-step example ### Step 1: Create an EVM client ```go import "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm" evmClient := &evm.Client{ ChainSelector: config.ChainSelector, // e.g., 16015286601757825753 for Sepolia } ``` ### Step 2: Prepare submission parameters ```go import "github.com/ethereum/go-ethereum/common" // Receiver contract address (must implement IReceiver interface) receiverAddress := common.HexToAddress(config.ReceiverAddress) // Optional gas configuration gasConfig := &evm.GasConfig{ GasLimit: config.GasLimit, // e.g., 1000000 } ``` ### Step 3: Submit the report ```go writePromise := evmClient.WriteReport(runtime, &evm.WriteCreReportRequest{ Receiver: receiverAddress.Bytes(), Report: report, // The report from runtime.GenerateReport() GasConfig: gasConfig, }) resp, err := writePromise.Await() if err != nil { return fmt.Errorf("failed to write report: %w", err) } // Extract transaction hash txHash := fmt.Sprintf("0x%x", resp.TxHash) logger.Info("Report submitted successfully", "txHash", txHash) ``` ### Understanding the response The `WriteReportReply` struct provides comprehensive transaction details: ```go type WriteReportReply struct { TxStatus TxStatus // SUCCESS, REVERTED, or FATAL ReceiverContractExecutionStatus *ReceiverContractExecutionStatus // Contract execution status TxHash []byte // Transaction hash TransactionFee *pb.BigInt // Fee paid in Wei ErrorMessage *string // Error message if failed } ``` **Key fields to check:** - **`TxStatus`**: Indicates whether the transaction succeeded, reverted, or had a fatal error - **`TxHash`**: The transaction hash you can use to verify on a block explorer (e.g., Etherscan) - **`TransactionFee`**: The total gas cost paid for the transaction in Wei - **`ReceiverContractExecutionStatus`**: Whether your consumer contract's `onReport()` function executed successfully - **`ErrorMessage`**: If the transaction failed, this field contains details about what went wrong ## Best practices When submitting reports onchain, follow these practices to ensure reliability and observability: 1. **Log transaction details**: Always log the transaction hash for debugging and monitoring. This allows you to track your submission on block explorers and troubleshoot issues. ```go txHash := fmt.Sprintf("0x%x", resp.TxHash) logger.Info("Report submitted successfully", "txHash", txHash, "status", resp.TxStatus) ``` 2. **Handle gas configuration**: Provide explicit gas limits for complex transactions to avoid out-of-gas errors. Adjust based on your contract's complexity and the data size. 
```go gasConfig := &evm.GasConfig{ GasLimit: 500000, // Adjust based on your needs } ``` 3. **Monitor transaction status**: Always check the `TxStatus` field in the response to ensure your transaction was successful. Handle `REVERTED` and `FATAL` statuses appropriately. ```go if resp.TxStatus != evm.TxStatusSuccess { return fmt.Errorf("transaction failed with status: %v, error: %s", resp.TxStatus, *resp.ErrorMessage) } ``` ## Complete example Here's a full workflow that generates a report from a single value and submits it onchain: ```go //go:build wasip1 package main import ( "fmt" "log/slog" "math/big" "github.com/ethereum/go-ethereum/accounts/abi" "github.com/ethereum/go-ethereum/common" "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/cre/wasm" ) type Config struct { Schedule string `json:"schedule"` ReceiverAddress string `json:"receiverAddress"` ChainSelector uint64 `json:"chainSelector"` GasLimit uint64 `json:"gasLimit"` } type MyResult struct { TxHash string } func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { return cre.Workflow[*Config]{ cre.Handler(cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger), }, nil } func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() // Step 1: Create and encode a value myValue := big.NewInt(123456789) logger.Info("Created value to encode", "value", myValue.String()) uint256Type, err := abi.NewType("uint256", "", nil) if err != nil { return nil, fmt.Errorf("failed to create type: %w", err) } args := abi.Arguments{{Type: uint256Type}} encodedValue, err := args.Pack(myValue) if err != nil { return nil, fmt.Errorf("failed to encode value: %w", err) } logger.Info("Encoded value", "hex", fmt.Sprintf("0x%x", encodedValue)) // Step 2: Generate report reportPromise := runtime.GenerateReport(&cre.ReportRequest{ EncodedPayload: encodedValue, EncoderName: "evm", SigningAlgo: "ecdsa", HashingAlgo: "keccak256", }) report, err := reportPromise.Await() if err != nil { return nil, fmt.Errorf("failed to generate report: %w", err) } logger.Info("Report generated successfully") // Step 3: Create EVM client evmClient := &evm.Client{ ChainSelector: config.ChainSelector, } // Step 4: Submit report onchain receiverAddress := common.HexToAddress(config.ReceiverAddress) gasConfig := &evm.GasConfig{GasLimit: config.GasLimit} writePromise := evmClient.WriteReport(runtime, &evm.WriteCreReportRequest{ Receiver: receiverAddress.Bytes(), Report: report, GasConfig: gasConfig, }) logger.Info("Submitting report onchain...") resp, err := writePromise.Await() if err != nil { return nil, fmt.Errorf("failed to submit report: %w", err) } // Check transaction status if resp.TxStatus != evm.TxStatusSuccess { errorMsg := "unknown error" if resp.ErrorMessage != nil { errorMsg = *resp.ErrorMessage } return nil, fmt.Errorf("transaction failed with status %v: %s", resp.TxStatus, errorMsg) } txHash := fmt.Sprintf("0x%x", resp.TxHash) logger.Info("Report submitted successfully", "txHash", txHash, "fee", resp.TransactionFee) return &MyResult{TxHash: txHash}, nil } func main() { wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow) } ``` **Configuration file** (`config.json`): ```json { "schedule": "0 */1 * * * *", "receiverAddress": 
"0xYourReceiverContractAddress", "chainSelector": 16015286601757825753, "gasLimit": 1000000 } ``` ## Broadcasting transactions By default, `cre workflow simulate` performs a dry run without broadcasting transactions to the network. To execute real onchain transactions, use the `--broadcast` flag: ```bash cre workflow simulate my-workflow --broadcast --target staging-settings ``` See the [CLI Reference](/cre/reference/cli#cre-workflow-simulate) for more details. ## Troubleshooting **"failed to submit report" or transaction fails to broadcast** - Verify your consumer contract address is correct and deployed on the target chain - Check that you're using the correct chain selector for your target blockchain - Verify network connectivity and RPC endpoint availability **Transaction succeeds but `TxStatus` is `REVERTED`** - Check the `ErrorMessage` field for details about why the transaction reverted - Verify your consumer contract implements the `IReceiver` interface correctly (see [Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)) - Review your consumer contract's `onReport()` validation logic—it may be rejecting the report - Ensure the report data format matches what your consumer contract expects **"out of gas" error or transaction runs out of gas** - Increase the `GasLimit` in your `GasConfig` - Check if your consumer contract's `onReport()` function has unexpectedly complex logic - Review the transaction on a block explorer to see the actual gas used **`ReceiverContractExecutionStatus` indicates failure** - Your consumer contract's `onReport()` function executed but encountered an error - Review the contract's event logs and error messages on a block explorer - Check that your contract's validation logic (e.g., forwarder checks, workflow ID checks) is correctly configured - Verify the decoded data in your contract matches the expected struct/value format **"invalid receiver address" or address-related errors** - Confirm the receiver address is a valid Ethereum address format - Verify the contract is deployed at that address on the target chain - Use `common.HexToAddress()` to properly convert address strings ## Learn more - **[Onchain Write Overview](/cre/guides/workflow/using-evm-client/onchain-write)**: Understand all onchain write approaches - **[Using WriteReportFrom Helpers](/cre/guides/workflow/using-evm-client/onchain-write/using-write-report-helpers)**: Use the simpler all-in-one helper for struct submission - **[Generating Reports: Single Values](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values)**: Generate reports for single primitive values - **[Generating Reports: Structs](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-structs)**: Generate reports for struct data - **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)**: Create contracts that can receive reports - **[EVM Client Reference](/cre/reference/sdk/evm-client)**: Complete API documentation --- # API Interactions Source: https://docs.chain.link/cre/guides/workflow/using-http-client Last Updated: 2025-11-04 The CRE SDK provides an HTTP client that allows your workflows to interact with external APIs. Use it to fetch offchain data, send results to other systems, or trigger external events. These guides will walk you through the common use cases for the HTTP client. 
## Guides

- **[Making GET Requests](/cre/guides/workflow/using-http-client/get-request)**: Learn how to fetch data from a public API using a `GET` request.
- **[Making POST Requests](/cre/guides/workflow/using-http-client/post-request)**: Learn how to send data to an external endpoint using a `POST` request.
- **[Submitting Reports via HTTP](/cre/guides/workflow/using-http-client/submitting-reports-http)**: Learn how to submit cryptographically signed reports to an external HTTP endpoint.

---

# Managing Secrets

Source: https://docs.chain.link/cre/guides/workflow/secrets
Last Updated: 2025-11-04

Secrets are sensitive values like API keys, private keys, database URLs, and authentication tokens that your workflow needs to access at runtime. CRE provides different approaches for managing secrets depending on whether you're developing locally or running workflows in production. This guide helps you choose the right approach for your use case.

## Which guide do I need?

Your workflow environment determines how you manage secrets:

### 1. Local development and simulation

**When to use:** You're testing and debugging workflows on your local machine using `cre workflow simulate`.

**How it works:**

- Secrets declared in `secrets.yaml`
- Values provided via `.env` file or environment variables
- Secrets injected locally by the CLI
- **No Vault DON required**

**→ Follow this guide:** [Using Secrets in Simulation](/cre/guides/workflow/secrets/using-secrets-simulation)

### 2. Deployed workflows

**When to use:** Your workflow is deployed to the Workflow DON.

**How it works:**

- Secrets stored in the **Vault DON** (decentralized secret storage)
- Managed via `cre secrets` CLI commands (`create`, `update`, `delete`, `list`)
- Your workflow retrieves secrets from the Vault at runtime
- **Vault DON required**

**→ Follow this guide:** [Using Secrets with Deployed Workflows](/cre/guides/workflow/secrets/using-secrets-deployed)

### 3. Secure secret management (Best practice)

**When to use:** Any environment where you want to avoid storing secrets in plaintext `.env` files.

**How it works:**

- Use **1Password CLI** to store and inject secrets
- Secrets never stored in plaintext on your filesystem
- Works for both simulation and production

**→ Follow this guide:** [Managing Secrets with 1Password CLI](/cre/guides/workflow/secrets/managing-secrets-1password)

## Quick comparison

| Aspect             | Local Simulation                     | Deployed Workflows                 |
| ------------------ | ------------------------------------ | ---------------------------------- |
| **Environment**    | Your local machine                   | Workflow DON                       |
| **Secret storage** | `.env` file or environment variables | Vault DON                          |
| **CLI commands**   | None (automatic via simulation)      | `cre secrets create/update/delete` |
| **Workflow code**  | `runtime.GetSecret()`                | `runtime.GetSecret()` (same API)   |
| **Authentication** | Not required                         | `cre login` required               |
| **Use case**       | Development and testing              | Deployed workflows                 |

## How secrets work in your workflow

Regardless of where secrets are stored (locally or in the Vault), your workflow code uses the same API to access them: `runtime.GetSecret()`. The CRE runtime automatically handles retrieving the secret from the appropriate source based on your environment.

## Getting started

1. **For local development:** Start with [Using Secrets in Simulation](/cre/guides/workflow/secrets/using-secrets-simulation) to learn the basics
2.
**For deployed workflows:** Once your workflow is ready to deploy, follow [Using Secrets with Deployed Workflows](/cre/guides/workflow/secrets/using-secrets-deployed) 3. **For enhanced security:** Implement [1Password CLI integration](/cre/guides/workflow/secrets/managing-secrets-1password) to eliminate plaintext secrets ## Reference For detailed CLI command documentation, see: - [Secrets Management CLI Reference](/cre/reference/cli/secrets) — Complete documentation for `cre secrets` commands --- # Using Secrets with Deployed Workflows Source: https://docs.chain.link/cre/guides/workflow/secrets/using-secrets-deployed Last Updated: 2025-11-04 When your workflow is deployed, it cannot access your local `.env` file or environment variables. Instead, secrets must be stored in the **Vault DON**—a decentralized, secure secret storage system that your deployed workflows can access at runtime. This guide explains how to manage secrets for deployed workflows using the `cre secrets` CLI commands. ## Prerequisites Before managing secrets for deployed workflows, ensure you have: 1. **CRE CLI installed**: See the [Installation Guide](/cre/getting-started/cli-installation/macos-linux) 2. **Authentication**: You must be logged in with `cre login` 3. **Owner address configured**: Your `workflow-owner-address` must be set in your project configuration ## How secrets work with deployed workflows The workflow is similar to local development, but with a critical difference in where secrets are stored: 1. **Declare**: Define secret identifiers in a YAML file 2. **Store**: Push secrets to the Vault DON using `cre secrets create` 3. **Use**: Your deployed workflow accesses secrets from the Vault using `runtime.GetSecret()` **Key difference from simulation:** - **Local simulation**: Secrets read from your environment variables or `.env` file on your machine - **Deployed workflows**: Secrets retrieved from Vault DON by the workflow ## Step-by-step guide ### Step 1: Create a secrets YAML file Create a YAML file at the root of your project that declares the secrets you want to store. **Example `production-secrets.yaml`:** ```yaml secretsNames: API_KEY: - API_KEY_VALUE DATABASE_URL: - DATABASE_URL_VALUE ``` **Structure:** - `secretsNames` — Top-level key containing all secrets - Each secret has: - **Key** (e.g., `API_KEY`) — The identifier your workflow code will use - **Value** — An array containing the environment variable name that holds the actual value ### Step 2: Provide secret values as environment variables Set the actual secret values as environment variables. These can be provided in two ways: **Option A: Export in your shell** ```bash export API_KEY_VALUE="your-actual-api-key" export DATABASE_URL_VALUE="postgresql://user:pass@host:5432/db" ``` **Option B: Use a `.env` file** Create a `.env` file (or add to your existing one): ```bash # .env API_KEY_VALUE=your-actual-api-key DATABASE_URL_VALUE=postgresql://user:pass@host:5432/db ``` The `cre` CLI will automatically load variables from `.env` when you run the commands. ### Step 3: Upload secrets to the Vault DON Use the `cre secrets create` command to upload your secrets to the Vault: ```bash cre secrets create production-secrets.yaml --target production-settings ``` **What happens:** 1. The CLI reads your YAML file and environment variables 2. It registers the request onchain (for authorization) 3. It submits the secrets to the Vault DON 4. 
The secrets are stored securely and associated with your owner address **Example output:** ```bash {"level":"info","owner":"","digest":"041eb7a8...","time":"2025-10-22T00:14:56+02:00","message":"IsRequestAllowlisted query succeeded"} {"level":"info","digest":"041eb7a8...","deadline":"2025-10-23T22:14:56Z","time":"2025-10-22T00:14:59+02:00","message":"AllowlistRequest submitted"} Digest allowlisted; proceeding to gateway POST Secret created: secret_id=API_KEY, owner=, namespace=main Secret created: secret_id=DATABASE_URL, owner=, namespace=main ``` ### Step 4: Use secrets in your workflow code Your workflow code uses the same API to access secrets, whether running in local simulation or deployed to a workflow DON. The CRE runtime automatically retrieves secrets from the appropriate source. **Important:** - The secret identifier (`"API_KEY"`) must match what you declared in your YAML file - Secrets are fetched at runtime from the Vault DON - The namespace parameter is optional—defaults to `"main"` if omitted - The same code works for both simulation (reads from `.env`) and production (reads from Vault) ### Step 5: Verify secrets are stored You can list all secrets stored in the Vault for your owner address: ```bash cre secrets list --target production-settings ``` **Example output:** ``` {"level":"info","owner":"","digest":"225d8b6f...","time":"2025-10-22T19:10:12-05:00","message":"IsRequestAllowlisted query succeeded"} {"level":"info","digest":"225d8b6f...","deadline":"2025-10-25T00:10:12Z","time":"2025-10-22T19:10:16-05:00","message":"AllowlistRequest submitted"} Digest allowlisted; proceeding to gateway POST: owner=, requestID=f9148fcb-3e4e-45bf-bbde-2124ddd577e4, digest=0x225d8b6f... Secret identifier: secret_id=API_KEY, owner=, namespace=main Secret identifier: secret_id=DATABASE_URL, owner=, namespace=main ``` ## Managing secrets lifecycle ### Updating secrets To update existing secrets, use the `cre secrets update` command: ```bash # Update your environment variable with the new value export API_KEY_VALUE="new-api-key-value" # Update the secret in the Vault cre secrets update production-secrets.yaml --target production-settings ``` **Example output:** ``` {"level":"info","owner":"","digest":"10854ac2...","time":"2025-10-22T19:12:32-05:00","message":"IsRequestAllowlisted query succeeded"} {"level":"info","digest":"10854ac2...","deadline":"2025-10-25T00:12:32Z","time":"2025-10-22T19:12:40-05:00","message":"AllowlistRequest submitted"} Digest allowlisted; proceeding to gateway POST: owner=, requestID=7433514f-4008-46dd-822a-633732b64ec9, digest=0x10854ac2... Secret updated: secret_id=API_KEY, owner=, namespace=main Secret updated: secret_id=DATABASE_URL, owner=, namespace=main ``` ### Deleting secrets To remove secrets from the Vault: **Step 1: Create a deletion YAML file** (`secrets-to-delete.yaml`): ```yaml secretsNames: - API_KEY - DATABASE_URL ``` **Step 2: Run the delete command:** ```bash cre secrets delete secrets-to-delete.yaml --target production-settings ``` ## About namespaces When you look at CLI outputs, you'll notice secrets are organized by **namespaces**. A namespace is simply a way to group related secrets together. ## Using with multi-sig wallets All `cre secrets` commands support the `--unsigned` flag for multi-sig wallet operations. This generates raw transaction data instead of sending transactions directly. For complete multi-sig setup and usage, see [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets). 
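To make Step 4 concrete, here is a minimal sketch of reading a secret inside a Go callback. `runtime.GetSecret()` is the documented entry point; the `SecretRequest` shape, the `.Await()` call, and the `.Value` field are assumptions about the SDK's promise-based API and may differ from the actual signatures.

```go
// Sketch only: runtime.GetSecret is the documented call, but the request and
// response shapes below are assumptions and may differ in the SDK.
secret, err := runtime.GetSecret(&cre.SecretRequest{
	Id:        "API_KEY", // must match the key declared in your secrets YAML
	Namespace: "main",    // optional; defaults to "main" when omitted
}).Await()
if err != nil {
	return struct{}{}, err
}
apiKey := secret.Value // e.g., attach as an Authorization header on an HTTP call
_ = apiKey
```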
## Troubleshooting ### "Secret not found" error in deployed workflow **Problem:** Your workflow throws a "secret not found" error when calling `runtime.GetSecret()`. **Solution:** 1. Verify the secret exists: `cre secrets list --target production-settings` 2. Check that the secret ID in your code matches exactly 3. Recreate the secret if necessary: `cre secrets create ...` ### "Timeout expired" error **Problem:** The CLI returns a timeout error when creating/updating secrets. **Solution:** The onchain authorization has expired. Re-run the command to create a new authorization. ### Different secrets for simulation vs. production **Problem:** You want different secret values when simulating vs. running in production. **Solution:** - For simulation: Store values in your local `.env` file - For production: Use `cre secrets create` with different values - The secret IDs stay the same—only the values differ ## Learn more - **[Secrets CLI Reference](/cre/reference/cli/secrets)** — Complete CLI command documentation - **[Using Secrets in Simulation](/cre/guides/workflow/secrets/using-secrets-simulation)** — For local development - **[Managing Secrets with 1Password](/cre/guides/workflow/secrets/managing-secrets-1password)** — Best practice for secure secret management - **[Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets)** — For multi-sig secret operations --- # Managing Secrets with 1Password CLI Source: https://docs.chain.link/cre/guides/workflow/secrets/managing-secrets-1password Last Updated: 2025-11-04 While using a `.env` file or exporting environment variables is convenient for initial testing, the recommended best practice for managing sensitive data like private keys and API tokens is to use a dedicated secrets manager. This guide explains how to use **1Password CLI** to securely inject secrets into your workflow's environment at runtime, ensuring your secrets are never stored in plaintext on your filesystem. ## Prerequisites Before you begin, ensure you have: 1. **Installed 1Password CLI:** Follow the [1Password CLI installation guide](https://developer.1password.com/docs/cli/get-started/). 2. **Stored Your Secret in 1Password:** Save the secret you need (e.g., your `CRE_ETH_PRIVATE_KEY`) in a vault that your 1Password CLI is configured to access. ## Step 1: Get the secret reference A secret reference is a unique URI that points to a specific field in an item in your 1Password vault. 1. Open the 1Password desktop app. 2. Find the item containing your secret. 3. Right-click on the specific field (e.g., the `private key` field). 4. Select **Copy Secret Reference**. Your clipboard will now contain a reference, which is a safe, non-secret string that looks like this: `op://<vault>/<item>/<field>` ## Step 2: Use the secret reference in your `.env` file Open your project's `.env` file and replace the plaintext secret with the secret reference you just copied. **Before:** ```bash # .env CRE_ETH_PRIVATE_KEY=0x123...abc ``` **After:** ```bash # .env CRE_ETH_PRIVATE_KEY="op://Private/Sepolia-Dev-Key/private key" ``` ## Step 3: Run commands with `op run` The `op run` command is a utility that loads the secrets from your references into the environment and then executes your command, ensuring the secrets only exist in memory for the duration of the process.
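Both environments below use the same invocation shape, generically:

```bash
# Resolve any op:// references in .env, inject the real values into a
# temporary environment, run the command, then discard the values on exit
op run --env-file .env -- <command>
```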
### For local simulation To run your workflow simulation, prefix your command with `op run --env-file ../.env --`: ```bash op run --env-file ../.env -- cre workflow simulate my-workflow --target staging-settings ``` ### For deployed workflows To upload secrets to the Vault DON, use the same pattern: ```bash op run --env-file .env -- cre secrets create production-secrets.yaml --target production-settings ``` **What's happening here?** - `op run` scans the `.env` file for any `op://` references. - It securely authenticates with 1Password to fetch the real secret values. - It injects these values as environment variables into a new, temporary sub-shell. - It then executes your `cre` command within that secure sub-shell. - When the command finishes, the sub-shell is destroyed, and the secrets vanish from the environment. By following this pattern, you can manage your secrets securely without ever exposing them in plaintext. For more advanced use cases, see the official [1Password CLI documentation](https://developer.1password.com/docs/cli/secret-references). --- # Simulating Workflows Source: https://docs.chain.link/cre/guides/operations/simulating-workflows Last Updated: 2025-11-04 Workflow simulation is a local execution environment that **compiles your workflow to WebAssembly (WASM)** and runs it on **your machine**. It allows you to test and debug your workflow logic before deploying it. The simulator makes real calls to public testnets and live HTTP endpoints, giving you high confidence that your code will work as expected when deployed. ## When to use simulation Use workflow simulation to: - **Test workflow logic during development**: Validate that your code behaves correctly before deploying. - **Debug errors in a controlled environment**: Catch and fix issues locally without deploying to a live network. - **Test different trigger types**: Manually select and test how your workflow responds to [cron](/cre/guides/workflow/using-triggers/cron-trigger), [HTTP](/cre/guides/workflow/using-triggers/http-trigger), or [EVM log](/cre/guides/workflow/using-triggers/evm-log-trigger) triggers. - **Verify onchain interactions**: Test [read](/cre/guides/workflow/using-evm-client/onchain-read) and [write](/cre/guides/workflow/using-evm-client/onchain-write/overview) operations against real testnets. ## Basic usage The `cre workflow simulate` command compiles your workflow and executes it locally. **Basic syntax:** ```bash cre workflow simulate <workflow-folder> [flags] ``` **Example:** ```bash cre workflow simulate my-workflow --target staging-settings ``` ### What happens during simulation 1. **Compilation**: The CLI compiles your workflow to WebAssembly (WASM). 2. **Trigger selection**: You're prompted to select which trigger to test (cron, HTTP, or EVM log). 3. **Execution**: The workflow runs locally, making real calls to configured RPCs and HTTP endpoints. 4. **Output**: The simulator displays logs from your workflow and the final execution result. ### Prerequisites Before running a simulation: - **CRE account & authentication**: You must have a CRE account and be logged in with the CLI. See [Create your account](/cre/account/creating-account) and [Log in with the CLI](/cre/account/cli-login) for instructions. - **CRE CLI installed**: You must have the CRE CLI installed on your machine. See [CLI Installation](/cre/getting-started/cli-installation) for instructions. - **Project configuration**: You must run the command from your project root directory.
- **Valid `workflow.yaml`**: Your workflow directory must contain a `workflow.yaml` file with correct paths to your workflow code, config, and secrets (optional). - **RPC URLs configured**: If your workflow interacts with blockchains, configure RPC endpoints in your `project.yaml` for the target you're using. Without this, the simulator cannot register the EVM capability and your workflow will fail. See [Project Configuration](/cre/reference/project-configuration) for setup instructions. - **Private key**: Set `CRE_ETH_PRIVATE_KEY` in your `.env` file if your workflow performs onchain writes. ## Interactive vs non-interactive modes ### Interactive mode (default) In interactive mode, the simulator prompts you to select a trigger and provide necessary inputs. **Example:** ```bash cre workflow simulate my-workflow --target staging-settings ``` **What you'll see:** ``` Workflow compiled 🚀 Workflow simulation ready. Please select a trigger: 1. cron-trigger@1.0.0 Trigger 2. http-trigger@1.0.0-alpha Trigger 3. evm:ChainSelector:16015286601757825753@1.0.0 LogTrigger Enter your choice (1-3): ``` Select a trigger by entering its number, and follow any additional prompts for trigger-specific inputs. ### Non-interactive mode Non-interactive mode allows you to run simulations without prompts, making it ideal for CI/CD pipelines or automated testing. **Requirements:** - Use the `--non-interactive` flag - Specify `--trigger-index` (0-based index of the trigger to run) - Provide trigger-specific flags as needed (see [Trigger-specific configuration](#trigger-specific-configuration)) **Example:** ```bash cre workflow simulate my-workflow --non-interactive --trigger-index 0 --target staging-settings ``` ## The `--broadcast` flag By default, the simulator performs a **dry run** for onchain write operations. It prepares the transaction but does not broadcast it to the blockchain. To actually broadcast transactions during simulation, use the `--broadcast` flag: ```bash cre workflow simulate my-workflow --broadcast --target staging-settings ``` **Use case:** Use `--broadcast` when you want to test the complete end-to-end flow, including actual onchain state changes, on a testnet. ## Trigger-specific configuration Different trigger types require different inputs for simulation. ### Cron trigger [Cron triggers](/cre/guides/workflow/using-triggers/cron-trigger) do not require additional configuration. When selected, they execute immediately. **Interactive example:** ```bash cre workflow simulate my-workflow --target staging-settings ``` Select the cron trigger when prompted (if multiple triggers are defined) **Non-interactive example:** ```bash # Assuming the cron trigger is the first trigger defined in your workflow (index 0) cre workflow simulate my-workflow --non-interactive --trigger-index 0 --target staging-settings ``` ### HTTP trigger [HTTP triggers](/cre/guides/workflow/using-triggers/http-trigger) require a JSON payload. **Interactive mode:** When you select an HTTP trigger, the simulator prompts you to provide JSON input. 
You can: - Enter the JSON directly - Provide a file path (e.g., `./payload.json`) **Non-interactive mode:** Use the `--http-payload` flag with: - A JSON string: `--http-payload '{"key":"value"}'` - A file path: `--http-payload @./payload.json` (with or without `@` prefix) **Example:** ```bash cre workflow simulate my-workflow --non-interactive --trigger-index 1 --http-payload @./http_trigger_payload.json --target staging-settings ``` ### EVM log trigger [EVM log triggers](/cre/guides/workflow/using-triggers/evm-log-trigger) require a transaction hash and event index to fetch a specific log event from the blockchain. **Interactive mode:** When you select an EVM log trigger, the simulator prompts you for: 1. **Transaction hash** (e.g., `0x420721d7d00130a03c5b525b2dbfd42550906ddb3075e8377f9bb5d1a5992f8e`) 2. **Event index** (0-based index of the log in the transaction, e.g., `0`) The simulator fetches the log from the configured RPC and passes it to your workflow. **Non-interactive mode:** Use the `--evm-tx-hash` and `--evm-event-index` flags: ```bash cre workflow simulate my-workflow \ --non-interactive \ --trigger-index 2 \ --evm-tx-hash 0x420721d7d00130a03c5b525b2dbfd42550906ddb3075e8377f9bb5d1a5992f8e \ --evm-event-index 0 \ --target staging-settings ``` ## Additional flags ### `--engine-logs` (`-g`) Enables detailed engine logging for debugging purposes. This shows internal logs from the workflow execution engine. ```bash cre workflow simulate my-workflow --engine-logs --target staging-settings ``` ### `--target` (`-T`) Specifies which target environment to use from your configuration files. This determines which RPC URLs, settings, and secrets are loaded. ```bash cre workflow simulate my-workflow --target staging-settings ``` ### `--verbose` (`-v`) Enables debug-level logging for the CLI itself (not the workflow). Useful for troubleshooting CLI issues. ```bash cre workflow simulate my-workflow --verbose --target staging-settings ``` ## Understanding the output When you run a simulation, you'll see the following output: ### 1. Compilation confirmation ``` Workflow compiled ``` This indicates your workflow was successfully compiled to WASM. ### 2. Trigger selection menu (interactive mode only) If your workflow has multiple triggers, you'll see a menu: ``` 🚀 Workflow simulation ready. Please select a trigger: 1. cron-trigger@1.0.0 Trigger 2. http-trigger@1.0.0-alpha Trigger 3. evm:ChainSelector:16015286601757825753@1.0.0 LogTrigger Enter your choice (1-3): ``` If your workflow has only one trigger, it will run automatically without this prompt. ### 3. User logs Logs from your workflow code (e.g., `logger.Info()` calls) appear with timestamps: ``` 2025-10-24T19:07:27Z [USER LOG] Running CronTrigger 2025-10-24T19:07:27Z [USER LOG] fetching por url https://api.example.com 2025-10-24T19:07:27Z [USER LOG] ReserveInfo { "totalReserve": 494515082.75 } ``` ### 4. Final execution result The simulator displays the value returned by your workflow: ``` Workflow Simulation Result: { "result": 47 } ``` ### 5. Transaction details (if your workflow writes onchain) If your workflow performs onchain writes, the simulator will show transaction information: **Without `--broadcast` (dry run):** The transaction is prepared but not sent. 
You'll see a zero address (`0x0000...`) as the transaction hash: ``` 2025-10-24T23:01:50Z [USER LOG] Write report transaction succeeded: 0x0000000000000000000000000000000000000000000000000000000000000000 ``` **With `--broadcast`:** The transaction is actually sent to the blockchain. You'll see a real transaction hash: ``` 2025-10-24T17:55:48Z [USER LOG] Write report transaction succeeded: 0x1013abc0b6f345fad15b19a56cabbbaab2a2aa94f81eb3a709058adf18a4f23f ``` ## Limitations While simulation provides high confidence in your workflow's behavior, it has some limitations: - **Single-node execution**: Simulation runs on a single node (your local machine) rather than across a DON. There is no actual consensus or quorum; consensus behavior is simulated. - **Manual trigger execution**: Time-based triggers (cron) execute immediately when selected, not on a schedule. You must manually initiate each simulation run. - **Simplified environment**: The simulation environment mimics production but is not identical. Some edge cases or network conditions may only appear in a deployed environment. Despite these limitations, simulation is an essential tool for catching bugs, validating logic, and testing integrations before deploying to production. ## Next steps - **Deploy your workflow**: Once you're confident your workflow works correctly, see [Deploying Workflows](/cre/guides/operations/deploying-workflows). - **CLI reference**: For a complete list of flags and options, see the [CLI Workflow Commands reference](/cre/reference/cli/workflow/). --- # Deploying Workflows Source: https://docs.chain.link/cre/guides/operations/deploying-workflows Last Updated: 2025-11-04 When you deploy a workflow, you take your locally tested code and register it with the onchain Workflow Registry contract. This makes your workflow "live" so it can activate and respond to triggers across a [Decentralized Oracle Network (DON)](/cre/key-terms#decentralized-oracle-network-don). ## Prerequisites Before you can deploy a workflow, you must have: - **Early Access approval**: Workflow deployment is currently in Early Access. Request Early Access if you haven't already. - **[Logged in](/cre/reference/cli/authentication#cre-login)**: Authenticated with the platform by running `cre login`. To check if you are logged in, run `cre whoami`. - **[Linked your key](/cre/reference/cli/account#cre-account-link-key)**: Linked your EOA or multi-sig wallet to your account by running `cre account link-key`. - **A funded wallet**: The account you are deploying from must be funded with ETH on Ethereum Mainnet to pay the gas fees for the onchain registration transaction to the Workflow Registry contract. ## The deployment process The `cre workflow deploy` command handles the entire end-to-end process for you: 1. **Compiles** your workflow to a WASM binary. 2. **Uploads** the compiled binary and any associated configuration files (like your config file or `secrets.yaml`) to the CRE Storage Service. 3. **Registers** the workflow onchain by submitting a transaction to the Workflow Registry contract. This transaction contains the metadata for your workflow, including its name, owner, and the URL of its artifacts in the storage service. ### Step 1: Ensure your configuration is correct Before deploying, ensure your `workflow.yaml` file is correctly configured. The `workflow-name` is required under the `user-workflow` section for your target environment. If you are deploying from a multi-sig wallet, specify your multi-sig address in the `workflow-owner-address` field.
If you are deploying from a standard EOA, you can leave this field unchanged—the owner will be automatically derived from the `CRE_ETH_PRIVATE_KEY` in your `.env` file. For more details on configuration, see the [Project Configuration](/cre/reference/project-configuration) reference. ### Step 2: Run the deploy command **From your project root directory**, run the `deploy` command with the path to your workflow folder. ```bash cre workflow deploy <workflow-folder> [flags] ``` Example command to target the `production-settings` environment: ```bash cre workflow deploy my-workflow --target production-settings ``` **Available flags:** | Flag | Description | | ---------------- | --------------------------------------------------------------------------------------- | | `--target` | Sets the target environment from your configuration files (e.g., `production-settings`) | | `--auto-start` | Activate the workflow immediately after deployment (default: `true`) | | `--output` | The output file for the compiled WASM binary (default: `"./binary.wasm.br.b64"`) | | `--unsigned` | Return the raw transaction instead of broadcasting it to the network | | `--yes` | Skip confirmation prompts and proceed with the operation | | `--project-root` | Path to the project root directory | | `--env` | Path to your `.env` file (default: `".env"`) | | `--verbose` | Enable verbose logging to print `DEBUG` level logs | ### Step 3: Monitor the output The CLI will provide detailed logs of the deployment process, including the compilation, upload to the CRE Storage Service, and the final onchain transaction. ```bash > cre workflow deploy my-workflow --target production-settings Deploying Workflow : my-workflow Target : production-settings Owner Address : Compiling workflow... Workflow compiled successfully Verifying ownership... Workflow owner link status: owner=, linked=true Key ownership verified Uploading files... ✔ Loaded binary from: ./binary.wasm.br.b64 ✔ Uploaded binary to: https://storage.cre.example.com/artifacts//binary.wasm ✔ Loaded config from: ./config.json ✔ Uploaded config to: https://storage.cre.example.com/artifacts//config Preparing deployment transaction... Preparing transaction for workflowID: Transaction details: Chain Name: ethereum-testnet-sepolia To: 0xF3f93fc4dc177748E7557568b5354cB009e3818a Function: UpsertWorkflow Inputs: [0]: my-workflow [1]: my-workflow [2]: [3]: 0 [4]: zone-a [5]: https://storage.cre.example.com/artifacts//binary.wasm [6]: https://storage.cre.example.com/artifacts//config [7]: 0x [8]: false Data: b377bfc50000000000000000000000000000000000... Estimated Cost: Gas Price: 0.00100001 gwei Total Cost: 0.00000079 ETH ? Do you want to execute this transaction?: ▸ Yes No Transaction confirmed View on explorer: https://sepolia.etherscan.io/tx/0x58599f6...d916b [OK] Workflow deployed successfully Details: Contract address: 0xF3f93fc4dc177748E7557568b5354cB009e3818a Transaction hash: 0x58599f6...d916b Workflow Name: my-workflow Workflow ID: Binary URL: https://storage.cre.example.com/artifacts//binary.wasm Config URL: https://storage.cre.example.com/artifacts//config ``` ## Verifying your deployment After a successful deployment, you can verify that your workflow was registered correctly by checking the Workflow Registry contract on a block explorer. The CLI output will provide the transaction hash for the registration. The `WorkflowRegistry` contract for the `production-settings` environment is deployed on **Ethereum Sepolia** at the address `0xF3f93fc4dc177748E7557568b5354cB009e3818a`.
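A common variation on Step 2, using the `--auto-start` flag from the table above, is to register a workflow without activating it immediately:

```bash
# Deploy but leave the workflow paused; turn it on later with
# `cre workflow activate`
cre workflow deploy my-workflow --auto-start=false --target production-settings
```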
## Using multi-sig wallets The `deploy` command supports multi-sig wallets through the `--unsigned` flag. When using this flag, the CLI generates raw transaction data that you can submit through your multi-sig wallet interface instead of sending the transaction directly. For complete setup instructions, configuration requirements, and step-by-step guidance, see [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets). ## Next steps - [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows): Learn how to control workflow execution - [Monitoring Workflows](/cre/guides/operations/monitoring-workflows): Track your workflow's execution and performance - [Updating Deployed Workflows](/cre/guides/operations/updating-deployed-workflows): Deploy new versions of your workflow --- # Activating & Pausing Workflows Source: https://docs.chain.link/cre/guides/operations/activating-pausing-workflows Last Updated: 2025-11-04 After deploying a workflow, you can control its operational state using the `cre workflow activate` and `cre workflow pause` commands. These commands modify the workflow's status in the Workflow Registry contract, determining whether it can respond to triggers. **Workflow states:** - **Active** — The workflow can respond to its configured triggers and execute - **Paused** — The workflow cannot respond to triggers and will not execute ## Prerequisites Before activating or pausing workflows, ensure you have: - **A [deployed workflow](/cre/guides/operations/deploying-workflows)**: You must have a workflow that has been successfully deployed to the Workflow Registry. - **Workflow ownership**: You must be the owner of the workflow (the account that originally deployed it). Only the workflow owner can activate or pause it. - **Local workflow folder**: You must run these commands from your project directory. The CLI reads the workflow name and configuration from your `workflow.yaml` file to identify which workflow to activate or pause. - **[Logged in](/cre/reference/cli/authentication#cre-login)**: Authenticated with the platform by running `cre login`. To check your authentication status, run `cre whoami`. - **A funded wallet**: The account you are using must be funded with ETH on Ethereum Mainnet to pay the gas fees for the onchain transaction to the Workflow Registry contract. ## Activating a workflow The `cre workflow activate` command changes a paused workflow's status to active, allowing its triggers to fire and the workflow to execute. ### When to activate You typically use `activate` in these scenarios: - **After pausing a workflow**: To resume execution after maintenance or debugging - **Manual deployment**: When you deployed with `--auto-start=false` ### Usage Run the command from your project root: ```bash cre workflow activate my-workflow --target production-settings ``` The CLI identifies which workflow to activate based on: - `workflow-name` from your `workflow.yaml` file - `workflow-owner-address` (either from `workflow.yaml` or derived from your private key in `.env`) ### What happens during activation 1. The CLI fetches the workflow matching your workflow name and owner address 2. It validates that the workflow is currently paused 3. 
If valid, it sends an onchain transaction to change the status to active ### Example output ```bash > cre workflow activate my-workflow --target production-settings Activating Workflow : my-workflow Target : production-settings Owner Address : Activating workflow: Name=my-workflow, Owner=, WorkflowID= Transaction details: Chain Name: ethereum-testnet-sepolia To: 0xF3f93fc4dc177748E7557568b5354cB009e3818a Function: ActivateWorkflow Inputs: [0]: [1]: zone-a Data: 530979d6000000000000000000000000... Estimated Cost: Gas Price: 0.00100000 gwei Total Cost: 0.00000038 ETH ? Do you want to execute this transaction?: ▸ Yes No Transaction confirmed: 0xd5b94bd...87498b View on explorer: https://sepolia.etherscan.io/tx/0xd5b94bd...87498b [OK] Workflow activated successfully Contract address: 0xF3f93fc4dc177748E7557568b5354cB009e3818a Transaction hash: 0xd5b94bd...87498b Workflow Name: my-workflow Workflow ID: ``` ## Pausing a workflow The `cre workflow pause` command changes an active workflow's status to paused, preventing its triggers from firing and stopping execution. ### When to pause Pause workflows when you need to: - **Perform maintenance**: Temporarily stop execution while updating dependencies or configuration - **Debug issues**: Halt execution to investigate errors or unexpected behavior - **Temporarily halt operations**: Stop workflow execution without permanently deleting it ### Usage Run the command from your project root: ```bash cre workflow pause my-workflow --target production-settings ``` ### What happens during pausing 1. The CLI fetches the workflow matching your workflow name and owner address 2. It validates that the workflow is currently active 3. If valid, it sends an onchain transaction to change the status to paused ### Example output ```bash > cre workflow pause my-workflow --target production-settings Pausing Workflow : my-workflow Target : production-settings Owner Address : Fetching workflows to pause... Name=my-workflow, Owner= Processing batch pause... count=1 Transaction details: Chain Name: ethereum-testnet-sepolia To: 0xF3f93fc4dc177748E7557568b5354cB009e3818a Function: BatchPauseWorkflows Inputs: [0]: [] Data: d8b80738000000000000000000000000... Estimated Cost: Gas Price: 0.00100000 gwei Total Cost: 0.00000021 ETH ? Do you want to execute this transaction?: ▸ Yes No Transaction confirmed View on explorer: https://sepolia.etherscan.io/tx/0x2e09a66...db66e [OK] Workflows paused successfully Details: Contract address: 0xF3f93fc4dc177748E7557568b5354cB009e3818a Transaction hash: 0x2e09a66...db66e Workflow Name: my-workflow Workflow ID: ``` ## Using multi-sig wallets Both `activate` and `pause` commands support multi-sig wallets through the `--unsigned` flag. When using this flag, the CLI generates raw transaction data that you can submit through your multi-sig wallet interface instead of sending the transaction directly. For complete setup instructions, configuration requirements, and step-by-step guidance, see [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets). 
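Putting `pause` and `activate` together, a typical maintenance window for a deployed workflow looks like this:

```bash
# Stop triggers from firing while you make changes...
cre workflow pause my-workflow --target production-settings

# ...update configuration or dependencies, then resume execution
cre workflow activate my-workflow --target production-settings
```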
## Learn more - [Deploying Workflows](/cre/guides/operations/deploying-workflows) — Learn how to deploy workflows to the registry - [Updating Deployed Workflows](/cre/guides/operations/updating-deployed-workflows) — Update your workflow code and configuration - [Deleting Workflows](/cre/guides/operations/deleting-workflows) — Permanently remove workflows from the registry - [CLI Reference: Workflow Commands](/cre/reference/cli/workflow) — Complete command reference with all flags and options --- # Updating Deployed Workflows Source: https://docs.chain.link/cre/guides/operations/updating-deployed-workflows Last Updated: 2025-11-04 When you update a deployed workflow, you redeploy it with the same workflow name. The new deployment replaces the previous version in the Workflow Registry contract. Currently, CRE does not maintain version history—each deployment overwrites the previous one. ## Prerequisites Before updating a deployed workflow, ensure you have: - **A [deployed workflow](/cre/guides/operations/deploying-workflows)**: The workflow must already exist in the Workflow Registry. - **Workflow ownership**: You must be the owner of the workflow (the account that originally deployed it). Only the workflow owner can update it. - **Local workflow folder**: You must run this command from your project directory. The CLI reads the workflow name and configuration from your `workflow.yaml` file to identify which workflow to update. - **[Logged in](/cre/reference/cli/authentication#cre-login)**: Authenticated with the platform by running `cre login`. To check if you are logged in, run `cre whoami`. - **A funded wallet**: The account must be funded with ETH on Ethereum Mainnet to pay the gas fees for the onchain transaction to the Workflow Registry contract. ## Updating a workflow To update a workflow, simply redeploy it using the same workflow name: ```bash cre workflow deploy my-workflow --target production-settings ``` ### What happens during an update 1. **Compilation**: Your updated workflow code is compiled to WASM 2. **Upload**: The new binary and configuration files are uploaded to the CRE Storage Service 3. **Registration**: A new registration transaction is sent to the Workflow Registry contract 4. **Replacement**: The previous version is replaced with the new deployment ### Auto-start behavior By default, `cre workflow deploy` uses `--auto-start=true`, which means the updated workflow is automatically activated after deployment. If your workflow was previously paused and you want it to remain paused after the update, use `--auto-start=false`: ```bash cre workflow deploy my-workflow --auto-start=false --target production-settings ``` ## Best practices for updates 1. **Test locally first**: Always test your changes using `cre workflow simulate` before deploying to production 2. **Pause before updating** (optional): If you want to ensure no triggers fire during the update, pause the workflow first using `cre workflow pause` 3. **Monitor after deployment**: Check that the updated workflow executes correctly after deployment 4. **Keep track of changes**: Maintain your own version control (e.g., Git tags) to track workflow versions ## Using multi-sig wallets The `deploy` command supports multi-sig wallets through the `--unsigned` flag. When using this flag, the CLI generates raw transaction data that you can submit through your multi-sig wallet interface instead of sending the transaction directly. 
For complete setup instructions, configuration requirements, and step-by-step guidance, see [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets). ## Learn more - [Deploying Workflows](/cre/guides/operations/deploying-workflows) — Learn about the initial deployment process - [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows) — Control workflow execution state - [Deleting Workflows](/cre/guides/operations/deleting-workflows) — Remove workflows from the registry --- # Deleting Workflows Source: https://docs.chain.link/cre/guides/operations/deleting-workflows Last Updated: 2025-11-04 Deleting a workflow permanently removes it from the Workflow Registry contract. This action cannot be undone, and the workflow will no longer be able to respond to triggers. ## Prerequisites Before deleting a workflow, ensure you have: - **A [deployed workflow](/cre/guides/operations/deploying-workflows)**: The workflow must exist in the Workflow Registry. - **Workflow ownership**: You must be the owner of the workflow (the account that originally deployed it). Only the workflow owner can delete it. - **Local workflow folder**: You must run this command from your project directory. The CLI reads the workflow name and configuration from your `workflow.yaml` file to identify which workflow to delete. - **[Logged in](/cre/reference/cli/authentication#cre-login)**: Authenticated with the platform by running `cre login`. To check if you are logged in, run `cre whoami`. - **A funded wallet**: The account must be funded with ETH on Ethereum Mainnet to pay the gas fees for the onchain transaction to the Workflow Registry contract. ## Deleting a workflow To delete a workflow, run the `cre workflow delete` command from your project root: ```bash cre workflow delete my-workflow --target production-settings ``` The CLI identifies which workflow to delete based on: - `workflow-name` from your `workflow.yaml` file - `workflow-owner-address` (either from `workflow.yaml` or derived from your private key in `.env`) ### What happens during deletion 1. The CLI fetches all workflows matching your workflow name and owner address 2. It displays details about the workflow(s) to be deleted 3. It prompts you to confirm by typing the workflow name 4. Once confirmed, it sends an onchain transaction to delete the workflow from the Workflow Registry ### Example output ```bash > cre workflow delete my-workflow --target production-settings Deleting Workflow : my-workflow Target : production-settings Owner Address : Found 1 workflow(s) to delete for name: my-workflow 1. Workflow ID: 00f0379a2df46ad2c5af070f5871da89f589f8bff8af76ff6a44bb59bec88bf4 Owner: DON Family: zone-a Tag: my-workflow Binary URL: https://storage.cre.example.com/artifacts/00f0379a.../binary.wasm Workflow Status: PAUSED Are you sure you want to delete the workflow 'my-workflow'? This action cannot be undone. To confirm, type the workflow name: my-workflow: my-workflow Deleting 1 workflow(s)... Transaction details: Chain Name: ethereum-testnet-sepolia To: 0xF3f93fc4dc177748E7557568b5354cB009e3818a Function: DeleteWorkflow Inputs: [0]: 0x00f0379a2df46ad2c5af070f5871da89f589f8bff8af76ff6a44bb59bec88bf4 Data: 695e134000f0379a2df46ad2c5af070f5871da89f589f8bff8af76ff6a44bb59bec88bf4 Estimated Cost: Gas Price: 0.00100001 gwei Total Cost: 0.00000015 ETH ? 
Do you want to execute this transaction?: ▸ Yes No Transaction confirmed View on explorer: https://sepolia.etherscan.io/tx/0xf059c32...fec7d [OK] Deleted workflow ID: 00f0379a2df46ad2c5af070f5871da89f589f8bff8af76ff6a44bb59bec88bf4 Workflows deleted successfully. ``` ### Skipping the confirmation prompt If you want to skip the interactive confirmation prompt (e.g., in automated scripts), use the `--yes` flag: ```bash cre workflow delete my-workflow --yes --target production-settings ``` ## Using multi-sig wallets The `delete` command supports multi-sig wallets through the `--unsigned` flag. When using this flag, the CLI generates raw transaction data that you can submit through your multi-sig wallet interface instead of sending the transaction directly. For complete setup instructions, configuration requirements, and step-by-step guidance, see [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets). ## Learn more - [Deploying Workflows](/cre/guides/operations/deploying-workflows) — Deploy new workflows to the registry - [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows) — Control workflow execution state - [Updating Deployed Workflows](/cre/guides/operations/updating-deployed-workflows) — Update existing workflows --- # Using Multi-sig Wallets Source: https://docs.chain.link/cre/guides/operations/using-multisig-wallets Last Updated: 2025-11-04 This guide explains how to use multi-sig wallets with CRE CLI commands for deploying, activating, pausing, updating, and deleting workflows. ## How multi-sig works with CRE CLI When managing workflows with a multi-sig wallet, the CRE CLI can generate raw transaction data that you submit through your multi-sig wallet interface. Instead of the CLI signing and sending the transaction directly, it prepares the transaction data for you to sign offline through your multi-sig wallet. **The workflow:** 1. Run a CRE CLI command with the `--unsigned` flag 2. The CLI generates the raw transaction data 3. You submit this data to your multi-sig wallet interface 4. Signers approve the transaction 5. Once enough signatures are collected, execute the transaction onchain ## Prerequisites Before using multi-sig wallets with CRE CLI commands, ensure you have: ### 1. Authenticated with the CLI You must be logged in to use any CRE CLI commands. Run `cre whoami` in your terminal to verify you're logged in, or run `cre login` to authenticate. See [Logging in with the CLI](/cre/account/cli-login) for detailed instructions. ### 2. Configure your multi-sig address Add your multi-sig wallet address to your `project.yaml` or `workflow.yaml` under the target you're using: ```yaml production-settings: user-workflow: workflow-owner-address: "" workflow-name: "my-workflow" ``` ### 3. 
Keep your private key in `.env` Even when using `--unsigned`, the CLI still requires `CRE_ETH_PRIVATE_KEY` in your `.env` file: ```bash CRE_ETH_PRIVATE_KEY=your-private-key-here ``` ## Using the `--unsigned` flag Add the `--unsigned` flag to any workflow management command: - **Deploy**: ```bash cre workflow deploy my-workflow --unsigned --target production-settings ``` - **Activate**: ```bash cre workflow activate my-workflow --unsigned --target production-settings ``` - **Pause**: ```bash cre workflow pause my-workflow --unsigned --target production-settings ``` - **Delete**: ```bash cre workflow delete my-workflow --unsigned --target production-settings ``` ## Example output When you run a command with `--unsigned`, the CLI generates transaction data instead of sending the transaction: ```bash > cre workflow activate my-workflow --unsigned --target production-settings Activating Workflow : my-workflow Target : production-settings Owner Address : Activating workflow: Name=my-workflow, Owner=, WorkflowID= --unsigned flag detected: transaction not sent on-chain. Generating call data for offline signing and submission in your preferred tool: MSIG workflow activation transaction prepared! To Activate my-workflow with workflowID: Next steps: 1. Submit the following transaction on the target chain: Chain: ethereum-testnet-sepolia Contract Address: 0xF3f93fc4dc177748E7557568b5354cB009e3818a 2. Use the following transaction data: 530979d600f0379a2df46ad2c5af070f5871da89f589f8bff8af76ff6a44bb59bec88bf4000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000067a6f6e652d610000000000000000000000000000000000000000000000000000 ``` ## Submitting the transaction to your multi-sig wallet Once you have the transaction data, follow these steps: ### 1. Open your multi-sig wallet interface Access your multi-sig wallet (e.g., Gnosis Safe) and navigate to the transaction creation page. ### 2. Create a new transaction Enter the transaction details from the CLI output: - **To address**: Use the contract address from the output (e.g., `0xF3f93fc4dc177748E7557568b5354cB009e3818a`) - **Value**: `0` (no ETH is being sent) - **Data**: Paste the full transaction data from the CLI output ### 3. Submit for signatures Submit the transaction for approval. The transaction will require the configured number of signatures from your multi-sig signers. ### 4. Execute the transaction Once enough signatures are collected, execute the transaction onchain. The multi-sig wallet will broadcast the signed transaction to the blockchain. ## Troubleshooting ### Error: "WorkflowOwner must be a valid Ethereum address" This error occurs when: - The `workflow-owner-address` is not set in your configuration - The `workflow-owner-address` contains a placeholder value like `"(optional) Multi-signature contract address"` **Solution:** Update your `project.yaml` or `workflow.yaml` with your actual multi-sig address: ```yaml production-settings: user-workflow: workflow-owner-address: "0x123..." # Your actual multi-sig address ``` ### Error: "failed to read keys: invalid length, need 256 bits" This error occurs when `CRE_ETH_PRIVATE_KEY` is missing or empty in your `.env` file. **Solution:** Ensure your `.env` file contains a valid private key: ```bash CRE_ETH_PRIVATE_KEY=0x1234567890abcdef... ``` Remember, this key is only used for blockchain client initialization, not for signing the multi-sig transaction. 
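As a compact recap of the flow above, pausing a workflow from a multi-sig looks like this end to end:

```bash
# 1. Generate the raw calldata without broadcasting anything
cre workflow pause my-workflow --unsigned --target production-settings

# 2. In your multi-sig interface, create a transaction with:
#      To:    the contract address from the CLI output
#      Value: 0
#      Data:  the hex transaction data from the CLI output

# 3. Collect the required signatures, then execute the transaction onchain
```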
## Learn more - [Deploying Workflows](/cre/guides/operations/deploying-workflows) — Deploy workflows to the registry - [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows) — Control workflow execution state - [Updating Deployed Workflows](/cre/guides/operations/updating-deployed-workflows) — Update workflow code and configuration - [Deleting Workflows](/cre/guides/operations/deleting-workflows) — Remove workflows from the registry --- # Monitoring & Debugging Workflows Source: https://docs.chain.link/cre/guides/operations/monitoring-workflows Last Updated: 2025-11-04 After deploying a workflow, you can monitor its execution history, performance metrics, and logs through the CRE web interface. This guide walks you through the monitoring dashboard and debugging tools available for your deployed workflows. ## Prerequisites - **Deployed workflow**: You must have at least one workflow deployed to your organization. See [Deploying Workflows](/cre/guides/operations/deploying-workflows) for instructions. ## Accessing the workflows dashboard 1. **Log in to the CRE UI** at cre.chain.link 2. **Navigate to the Workflows section** by clicking **Workflows** in the left sidebar, or visit cre.chain.link/workflows directly The Workflows dashboard displays three main sections: - **Recent executions**: Shows the most recent workflow runs across all your workflows - **Activity chart**: Visualizes successful and unsuccessful executions over the selected time period - **All workflows table**: Lists all deployed workflows with their status and execution history ## Understanding the workflows dashboard ### Recent executions The top section displays the most recent workflow executions across your organization: - **Execution ID**: A unique identifier for each workflow run (shortened for display) - **Workflow name**: The name of the workflow that executed - **Status**: Success or Failure indicator - **Timestamp**: When the execution occurred Click on any execution ID to view detailed logs and events for that specific run. ### Activity chart The performance chart shows execution trends over the selected time period: - **Green bars**: Successful executions - **Red bars**: Unsuccessful executions - **Height**: Total number of executions for that day Use the dropdown menu to adjust the time range. ### All workflows table The main table displays all workflows in your organization: | Column | Description | | -------------------------------- | --------------------------------------------------------------------------------------------------- | | **Name** | The workflow name as defined in your `workflow.yaml` | | **Status** | Current workflow state: `Pending` (active and ready to execute) | | **Last Execution** | Timestamp of the most recent execution, or `N/A` if not yet triggered | | **Execution Results (last 24h)** | Visual bar showing successful (green) vs. failed (red) executions in the past 24 hours, with counts | Click on any workflow row to view its detailed performance and execution history. ## Viewing workflow details When you click on a workflow, you'll see a dedicated page with comprehensive monitoring data. ### Performance section The Performance chart shows execution trends for this specific workflow over the selected time period. This helps you identify patterns, track reliability, and spot anomalies. 
### Overview section The Overview panel displays key workflow metadata: | Field | Description | | ------------------------ | ----------------------------------------------------------------------------------------------- | | **Status** | Current workflow state (e.g., `PENDING`, `PAUSED`) | | **Workflow ID** | Unique identifier in the Workflow Registry (truncated, click to copy full ID) | | **Workflow Owner** | The wallet address that deployed and owns this workflow (truncated, click to copy full address) | | **Total Runs** | Cumulative number of executions since deployment | | **Registered On** | Timestamp when the workflow was first deployed | | **Total Workflow Spend** | Cumulative credits consumed by all executions | ### Execution history The **Execution** tab shows a detailed table of all workflow runs: | Column | Description | | ------------------ | ----------------------------------------------------------------------------------- | | **Execution ID** | Unique identifier for this specific run (click to view details) | | **Workflow ID** | The workflow that executed (useful if viewing executions across multiple workflows) | | **Status** | Execution result: Success, Failure, or In Progress | | **Time Triggered** | When the workflow execution started | | **Credits Used** | Cost of this execution in CRE credits | Use the **ALL** dropdown filter to view all executions, only successful ones, or only failures. ### Deployments tab The **Deployments** tab shows the history of workflow deployments and updates. ## Debugging individual executions Click on any **Execution ID** to view detailed debugging information for that specific run. ### Events tab The **Events** tab displays the events triggered by this execution in sequential order: - **Event**: The type of event (e.g., `trigger`, `evm:ChainSelector`, `consensus`, etc.) - **Status**: Whether the event succeeded, failed, or is in progress (`Success`, `Failure`, `In progress`) - **Time Triggered**: When the event occurred **What you'll see:** - Trigger activation - Capability execution steps (HTTP requests, EVM calls, etc.) - Consensus operations ### Logs tab The **Logs** tab shows only user-emitted logs from your workflow code: - Each log entry displays a timestamp, log level (e.g., `INFO`), and your custom message - This is where you'll see output from `runtime.log()` (TypeScript) or `logger.Info()` (Go) - System messages and internal events are excluded for clarity - Logs appear in chronological order, showing the complete execution flow of your workflow **Example log output:** ``` time=2025-10-26T13:19:57.055Z level=INFO msg="Successfully fetched offchain value" result=4 time=2025-10-26T13:19:57.055Z level=INFO msg="Successfully read onchain value" result=22 time=2025-10-26T13:19:57.055Z level=INFO msg="Final calculated result" result=26 time=2025-10-26T13:19:57.055Z level=INFO msg="Updating calculator result" consumerAddress=0x00307d6d1f88... time=2025-10-26T13:19:57.055Z level=INFO msg="Writing report to consumer contract" offchainValue=4 onchainValue=22 time=2025-10-26T13:19:57.055Z level=INFO msg="Waiting for write report response" ``` ## Monitoring best practices 1. **Check the dashboard regularly**: Review the Activity chart to spot trends in failures or performance degradation 2. **Investigate failures immediately**: When you see red bars in the chart or failed executions, click through to view logs and identify the root cause 3. 
**Use descriptive log messages**: Include context in your log statements to make debugging easier: ```typescript // TypeScript runtime.log(`Successfully fetched offchain value: ${offchainValue}`) runtime.log(`Final calculated result: ${finalResult}`) ``` ```go // Go logger.Info("Successfully fetched offchain value", "result", offchainValue) logger.Info("Final calculated result", "result", finalResult) ``` 4. **Monitor execution frequency**: If a cron-triggered workflow shows fewer executions than expected, verify your cron schedule and workflow status 5. **Track credit usage**: Monitor the Total Workflow Spend to understand your workflow's cost over time ## Common debugging scenarios ### Workflow not executing **Symptoms:** No recent executions in the dashboard **Possible causes:** - Workflow is paused (check the Status field) - Trigger is not firing (e.g., invalid cron schedule, no matching onchain events) - Workflow was recently deployed and hasn't been triggered yet **Resolution:** - Verify the workflow Status is `PENDING` (active) - Check your trigger configuration in the code - Use [Workflow Simulation](/cre/guides/operations/simulating-workflows) to test locally *** ### Execution failures **Symptoms:** Red bars in Activity chart, failed executions in table **Possible causes:** - API request failures (HTTP errors, timeouts) - Onchain reverts (contract calls failing) - Invalid configuration or missing secrets - Logic errors in workflow code **Resolution:** 1. Click on the failed Execution ID 2. Check the **Events** tab to see which step failed 3. Review the **Logs** tab for error messages from your code 4. Fix the issue in your code and [update the workflow](/cre/guides/operations/updating-deployed-workflows) ## Related guides - **[Deploying Workflows](/cre/guides/operations/deploying-workflows)** - Deploy your first workflow - **[Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows)** - Control workflow execution - **[Updating Deployed Workflows](/cre/guides/operations/updating-deployed-workflows)** - Fix issues and deploy updates - **[Simulating Workflows](/cre/guides/operations/simulating-workflows)** - Test workflows locally before deploying --- # Account Source: https://docs.chain.link/cre/account Last Updated: 2025-11-04 Your CRE account is required to use the CRE CLI. You must be logged in to run any CLI commands, including simulating workflows, deploying workflows, and managing deployed workflows. This section covers everything you need to know about creating and managing your account. ## What you'll need To use CRE, you need: 1. **A CRE account** - Created through the web interface at cre.chain.link (to create a new organization) or via an invitation link from an organization Owner (to join an existing organization) 2. **Two-factor authentication** - Set up during account creation for security 3. 
**CLI authentication** - Connect your CLI to your account using `cre login` ## Account guides - **[Creating Your Account](/cre/account/creating-account)** - Step-by-step guide to creating a new CRE account through the CRE UI - **[Logging in with the CLI](/cre/account/cli-login)** - Authenticate your CLI with your CRE account to run commands - **[Managing Authentication](/cre/account/managing-auth)** - Check your login status, handle session expiration, and log out ## Security features Your CRE account includes several security features: - **Two-factor authentication (2FA)** - Required during login for an additional layer of security - **Recovery codes** - Provided during account setup to regain access if you lose your authenticator device - **Session management** - CLI sessions automatically expire after a period of inactivity ## Related guides Once you have your account set up and authenticated: - **[Understanding Organizations](/cre/organization/understanding-organizations)** - Learn about CRE organizations, roles, and permissions - **[Linking Wallet Keys](/cre/organization/linking-keys)** - Connect your wallet to deploy workflows --- # Creating Your Account Source: https://docs.chain.link/cre/account/creating-account Last Updated: 2025-11-04 Before you can use CRE, you need to create an account. An account is required to log in with the CRE CLI and run any CLI commands, including [simulating](/cre/guides/operations/simulating-workflows) workflows. There are two ways to create an account: 1. **Create a new organization**: Sign up directly on the CRE UI. You'll become the *Owner* of a new organization. 2. **Join an existing organization**: Accept an invitation from an existing organization Owner. You'll become a *Member* of that organization automatically after account creation. This guide walks you through the account creation process for both scenarios. ## Prerequisites - A valid email address - Access to your email inbox to receive verification codes (and invitation email, if joining an existing organization) ## Step 1: Navigate to the CRE UI There are two ways to begin the account creation process: ### Option A: Create a new organization In this option, you'll create a new organization and become the *Owner* of that organization. Go to cre.chain.link and click the **"Create an account"** button. ### Option B: Join an existing organization In this option, you'll join an existing organization and become a *Member* of that organization. If you've received an invitation email from an organization Owner, click the **"Accept Invitation"** button in the email. This will redirect you to the account creation page. After choosing either option, continue with the following steps to complete your account creation. ## Step 2: Enter your information Fill in the required information: 1. **Email address**: Enter a valid email address (if not already pre-filled) 2. **Country**: Select your country from the dropdown 3. **Terms and policies**: Review and accept the Terms of Service and Privacy Policy Click **"Continue"** to proceed. ## Step 3: Verify your email Check your email inbox for a message from Chainlink containing a 6-digit verification code. Enter this code in the verification screen and click **Continue**. ## Step 4: Set your password Create a secure password for your account. Your password must meet the security requirements displayed on the screen. ## Step 5: Set up two-factor authentication (2FA) To secure your account, you'll need to set up two-factor authentication. 
You'll be presented with two authentication method options: 1. **Fingerprint or Face Recognition** - Use biometric authentication on your device 2. **Google Authenticator or similar** - Use an authenticator app ### Using an authenticator app If you choose the authenticator app option: 1. Click **"Google Authenticator or similar"** 2. Open your preferred authenticator app (such as Google Authenticator, Authy, or 1Password) 3. Scan the QR code displayed on the screen 4. Enter the 6-digit one-time code generated by your authenticator app 5. Click **"Continue"** ## Step 6: Save your recovery code Your recovery code is essential for regaining access to your account if you lose access to your authenticator device. 1. Copy the recovery code displayed on the screen 2. Store it securely in a password manager or offline location 3. Check the box "I have safely recorded this code" 4. Click **"Continue"** to complete account creation After completing these steps, you'll be redirected to your CRE dashboard. ## What's next? Once your account is created: 1. **[Log in to the CRE CLI](/cre/account/cli-login)** - Authenticate your CLI session 2. **If you created a new organization (Owner)**: [Invite team members](/cre/organization/inviting-members) to collaborate on workflows --- # Logging in with the CLI Source: https://docs.chain.link/cre/account/cli-login Last Updated: 2025-11-04 To deploy and manage workflows with the CRE CLI, you need to authenticate your CLI session with your CRE account. This guide walks you through the login process. ## Prerequisites - [CRE CLI installed](/cre/getting-started/cli-installation) on your machine - [CRE account created](/cre/account/creating-account) - Access to your authenticator app for two-factor authentication ## Login process ### Step 1: Initiate login from the terminal Open your terminal and run the login command: ```bash cre login ``` This command will automatically open your default web browser to begin the authentication process. ### Step 2: Enter your email address In the browser window that opens, enter the email address associated with your CRE account and click **"Continue"**. ### Step 3: Enter your password Enter your account password and click **"Continue"**. ### Step 4: Complete two-factor authentication You'll be prompted to complete two-factor authentication based on the method you configured during account creation: - **If you set up an authenticator app**: Open your authenticator app and enter the 6-digit one-time code it generates for your CRE account. - **If you set up biometric authentication**: Use your fingerprint or face recognition as prompted by your device. ### Step 5: Confirm successful login Once authenticated, you'll see a confirmation message in your browser. You can now close the browser window and return to your terminal. Your terminal will display: ```bash Login completed successfully ``` Your CLI session is authenticated and ready to use. --- # Managing Authentication Source: https://docs.chain.link/cre/account/managing-auth Last Updated: 2025-11-04 This guide covers how to manage your CLI authentication session, including logging in, checking your status, handling session expiration, and logging out. ## Logging in To authenticate your CLI with your CRE account, use the `cre login` command. This opens a browser window where you'll enter your credentials and complete two-factor authentication.
For detailed login instructions, see the [Logging in with the CLI](/cre/account/cli-login) guide. ## Session expiration Your CLI session remains authenticated until you explicitly log out or until your session expires. When your session expires, you'll need to log in again. If you attempt to run a command with an expired session, you'll see an error: ```bash Error: failed to attach credentials: failed to load credentials: you are not logged in, try running cre login ``` To resolve this, simply run `cre login` again to re-authenticate. ## Checking authentication status To verify that you're logged in and view your account details, use the `cre whoami` command: ```bash cre whoami ``` This command displays your account information: ```bash Account details retrieved: Email: email@domain.com Organization ID: org_mEMRknbVURM9DWsB ``` If you're not logged in, you'll receive an error message prompting you to run `cre login`. ## Logging out To explicitly end your CLI session and remove your stored credentials, use the `cre logout` command: ```bash cre logout ``` After logging out, you'll need to run `cre login` again to authenticate future CLI commands. --- # Organization Source: https://docs.chain.link/cre/organization Last Updated: 2025-11-04 CRE organizations enable teams to collaborate on workflow development and deployment. An organization provides a shared workspace where multiple members can deploy workflows, link wallet addresses, and monitor execution activity together. ## Organization guides - **[Understanding Organizations](/cre/organization/understanding-organizations)** - Learn how organizations work, including roles, permissions, and shared resources - **[Linking Wallet Keys](/cre/organization/linking-keys)** - Connect your wallet address to deploy and manage workflows - **[Inviting Team Members](/cre/organization/inviting-members)** - Add colleagues to your organization (Owner only) --- # Understanding Organizations Source: https://docs.chain.link/cre/organization/understanding-organizations Last Updated: 2025-11-04 ## What is an organization? A CRE organization is a collaborative workspace that allows multiple team members to deploy, manage, and monitor workflows together. When you create a CRE account, you either start a new organization (becoming the *Owner*) or join an existing one (becoming a *Member*). An organization serves as a container for: - **Multiple team members** with different roles (Owner and Members) - **Linked wallet addresses** from all team members - **Deployed workflows** visible to all organization members - **Shared monitoring** of workflow executions and activity Organizations enable teams to collaborate on CRE workflows while maintaining individual control over their own wallet addresses and secrets. ## Organization structure ### Single Owner model Every organization has exactly **one Owner**—the person who first created the organization account. The Owner role: - Cannot be transferred or changed - Has full administrative control - Can invite new Members to join - Can link wallet addresses and deploy workflows ### Multiple Members Members are users invited by the Owner to join the organization. Each member: - Joins automatically after accepting the invitation and creating their account - Can link their own wallet addresses to the organization - Can deploy and manage their own workflows - Views all organization workflows in the shared dashboard ### Shared visibility All organization members can: 1. 
**View linked wallet keys** - Use `cre account list-key` to see all addresses linked to the organization by any member 2. **Monitor all workflows** - Access cre.chain.link/workflows to view: - All deployed workflows across the organization - Recent execution history - Workflow status (Active, Paused, Pending) - Execution success/failure metrics - Activity graphs and trends ## Organization roles CRE uses a simple role-based access control system with two roles:

| Role | Description | Permissions | How to obtain |
| ---------- | ----------------------------------------------------- | ------------------------------------------------------------ | ----------------------------------------------------------------------------------------- |
| **Owner** | The person who first created the organization account | • Full access to all organization resources and workflows<br/>• Invite new members to the organization<br/>• Manage organization settings<br/>• Deploy, activate, pause, update, and delete workflows | Automatically assigned when you create a new organization by signing up directly |
| **Member** | Users invited to join an existing organization | • View all organization workflows<br/>• Link wallet addresses to the organization<br/>• Deploy and manage workflows under their linked addresses<br/>• Access workflow execution data | Invited by the Owner through the [invitation process](/cre/organization/inviting-members) |

**Key points:** - There is only one Owner per organization - Owner permissions cannot be transferred - Only Owners can invite new Members - Members automatically join after accepting the invitation and creating an account ## Learn more - **[Inviting Team Members](/cre/organization/inviting-members)** - How Owners can add Members to the organization - **[Linking Wallet Keys](/cre/organization/linking-keys)** - How to link your wallet address to deploy workflows - **[Creating Your Account](/cre/account/creating-account)** - How to create an account and join or create an organization - **[Deploying Workflows](/cre/guides/operations/deploying-workflows)** - Deploy your first workflow after linking a key --- # Linking Wallet Keys Source: https://docs.chain.link/cre/organization/linking-keys Last Updated: 2025-11-04 Before you can deploy workflows, you must link a public key address to your CRE organization. This process registers your wallet address onchain in the Workflow Registry contract—the smart contract on Ethereum Mainnet that stores and manages all CRE workflows—associating it with your organization and allowing you to deploy and manage workflows. ## What is key linking? Key linking is the process of connecting a blockchain wallet address to your CRE organization. Once linked, this address becomes a **workflow owner address** that can deploy, update, and delete workflows in the Workflow Registry. **Key benefits:** - Multiple team members can link their own addresses to the same organization - Each linked address can independently deploy and manage workflows - Addresses are labeled for easy identification (e.g., "Production Wallet", "Dev Wallet") - All linked addresses are visible to organization members via `cre account list-key` **Important constraint:** - **One organization per address**: Each wallet address can only be linked to one CRE organization at a time. If you need to use the same address with a different organization, you must first [unlink it](#unlinking-a-key) from the current organization. However, an organization can have multiple wallet addresses linked to it, allowing team members to use their own addresses or enabling separation between development, staging, and production environments. ## Prerequisites Before linking a key, ensure you have: - **CRE CLI installed and authenticated**: See [CLI Installation](/cre/getting-started/cli-installation) and [Logging in with the CLI](/cre/account/cli-login) - **A CRE project directory**: You must run the command from a project directory that contains a `project.yaml` file - **Private key in `.env`**: Set `CRE_ETH_PRIVATE_KEY=` (without `0x` prefix) in your `.env` file - **Funded wallet**: Your wallet must have ETH on Ethereum Mainnet to pay for gas fees (the Workflow Registry contract is deployed on Ethereum Mainnet) - **Unlinked address**: The wallet address must not already be linked to another CRE organization. Each address can only be associated with one organization at a time. ## Linking your first key The easiest way to link a key is to let the deployment process handle it automatically. When you first try to [deploy a workflow](/cre/guides/operations/deploying-workflows), the CLI will detect that your address isn't linked and prompt you to link it. ### Automatic linking during deployment 1. Navigate to your project directory (where your `.env` file is located) 2.
Attempt to deploy a workflow: ```bash cre workflow deploy my-workflow --target production-settings ``` 3. The CLI will detect that your address isn't linked and prompt you: ```bash Verifying ownership... Workflow owner link status: owner=, linked=false Owner not linked. Attempting auto-link: owner= Linking web3 key to your CRE organization Target : production-settings ✔ Using Address : ✔ Provide a label for your owner address: █ ``` 4. Enter a descriptive label for your address 5. Review the transaction details and confirm The CLI will submit the transaction and continue with the deployment once the key is linked. ### Manual linking You can also link a key manually before attempting to deploy: ```bash cre account link-key --target production-settings ``` **Interactive flow:** 1. The CLI derives your public address from the private key in `.env` 2. You're prompted to provide a label 3. The CLI checks if the address is already linked 4. Transaction details are displayed (chain, contract address, estimated gas cost) 5. You confirm to execute the transaction 6. The transaction is submitted and you receive a block explorer link **Example output:** ```bash Linking web3 key to your CRE organization Target : production-settings ✔ Using Address : Provide a label for your owner address: Checking existing registrations... ✓ No existing link found for this address Starting linking: owner=, label= Contract address validation passed Transaction details: Chain Name: ethereum-mainnet To: 0x4Ac54353FA4Fa961AfcC5ec4B118596d3305E7e5 # Workflow Registry contract address Function: LinkOwner ... Estimated Cost: Gas Price: 0.12450327 gwei Total Cost: 0.00001606 ETH ? Do you want to execute this transaction?: ▸ Yes No ``` After confirming, you'll see: ```bash Transaction confirmed View on explorer: https://etherscan.io/tx/ [OK] web3 address linked to your CRE organization successfully → You can now deploy workflows using this address ``` ## Viewing linked keys To see all addresses linked to your organization: ```bash cre account list-key ``` **Example output:** ```bash Workflow owners retrieved successfully: Linked Owners: 1. JohnProd Owner Address: Status: VERIFICATION_STATUS_SUCCESSFULL Verified At: 2025-10-21T17:22:24.394249Z Chain Selector: 5009297550715157269 # Chain selector for Ethereum Mainnet Contract Address: 0x4Ac54353FA4Fa961AfcC5ec4B118596d3305E7e5 # Workflow Registry contract address 2. JaneProd Owner Address: Status: VERIFICATION_STATUS_SUCCESSFULL Verified At: 2025-10-21T17:22:24.394249Z Chain Selector: 5009297550715157269 # Chain selector for Ethereum Mainnet Contract Address: 0x4Ac54353FA4Fa961AfcC5ec4B118596d3305E7e5 # Workflow Registry contract address ``` **Understanding the output:** - **Label**: The friendly name you provided (e.g., "JohnProd", "JaneProd") - **Owner Address**: The public address linked to your organization - **Status**: `VERIFICATION_STATUS_SUCCESSFULL` (linked and verified) - **Verified At**: Timestamp when the link was confirmed onchain - **Chain Selector**: The chain identifier where the Workflow Registry contract is deployed - **Contract Address**: The Workflow Registry contract address ## Linking multiple addresses Your organization can have multiple wallet addresses linked to it simultaneously, but remember that each individual address can only be linked to one organization at a time. 
This is useful for: - **Separation of concerns**: Different addresses for development, staging, and production - **Team collaboration**: Each team member uses their own address - **Multi-sig wallets**: Link a multi-sig address alongside individual addresses To link another address: 1. Update your `.env` file with the new private key 2. Run `cre account link-key --target <target>` again 3. Provide a unique label to distinguish this address ## Unlinking a key If you need to remove a linked address from your organization, you can use the `cre account unlink-key` command. This is useful when: - Rotating addresses for security reasons - Removing addresses that are no longer in use - Cleaning up test or development addresses To unlink a key: 1. Ensure your `.env` file contains the private key of the address you want to unlink 2. Run the unlink command: ```bash cre account unlink-key --target production-settings ``` 3. Confirm the operation when prompted The CLI will submit an onchain transaction to remove the address from the Workflow Registry. After the transaction is confirmed, the address and all its associated workflows will be deleted. ## Non-interactive mode For automation or CI/CD pipelines, use the `--yes` flag to skip confirmation prompts: ```bash cre account link-key --owner-label "CI Pipeline Wallet" --yes --target production-settings ``` ## Using multi-sig wallets If you're using a multi-sig wallet, you'll need to use the `--unsigned` flag to generate raw transaction data that you can then submit through your multi-sig interface (such as Safe). ### Prerequisites for multi-sig 1. Configure your multi-sig address in `project.yaml` under the `account` section: ```yaml production-settings: account: workflow-owner-address: "" # ... other settings ``` 2. Ensure your `.env` file contains the private key of any signer from the multi-sig wallet (used only for signature generation, not for sending transactions) ### Linking a multi-sig address Run the `link-key` command with the `--unsigned` flag: ```bash cre account link-key --owner-label "SafeWallet" --target production-settings --unsigned ``` **Example output:** ```bash Linking web3 key to your CRE organization Target : production-settings ✔ Using Address : Checking existing registrations... ✓ No existing link found for this address Starting linking: owner=, label=SafeWallet Contract address validation passed --unsigned flag detected: transaction not sent on-chain. Generating call data for offline signing and submission in your preferred tool: Ownership linking initialized successfully! Next steps: 1. Submit the following transaction on the target chain: Chain: ethereum-mainnet Contract Address: 0x4Ac54353FA4Fa961AfcC5ec4B118596d3305E7e5 2. Use the following transaction data: dc1019690000000000000000000000000000000000000000000000000000000068fd2f9465259a804e880ee30de0fcc2b81ee25d598ee1601e13ace2c2ec10202869706800000000000000000000000000000000000000000000000000000000000000600000000000000000000000000000000000000000000000000000000000000041bd0f40824a1fdce10ee1091703833fb3d4497b3f681f6edee6b159d217326185407ce16eb1c668c90786421b053d4d25401f422aa90d156c35659d7c3e2e13221b00000000000000000000000000000000000000000000000000000000000000 Linked successfully ``` ### Submitting through your multi-sig interface 1. **Copy the transaction data** provided in the CLI output 2. **Open your multi-sig interface** (e.g., Safe app at [https://app.safe.global](https://app.safe.global)) 3.
**Create a new transaction** with: - **To address**: 0x4Ac54353FA4Fa961AfcC5ec4B118596d3305E7e5 (Workflow Registry contract) - **Value**: 0 (no ETH transfer) - **Data**: Paste the transaction data from the CLI output (add the `0x` prefix if required by your multi-sig interface) 4. **Submit and collect signatures** from the required number of signers 5. **Execute the transaction** once you have enough signatures **Note**: If your multi-sig interface requires the Workflow Registry contract ABI, you can copy it from Etherscan. ### Verifying the multi-sig link After the multi-sig transaction is executed onchain, you can verify the link status: ```bash cre account list-key ``` Initially, you'll see the address with a `VERIFICATION_STATUS_PENDING` status: ```bash Workflow owners retrieved successfully: Linked Owners: 1. SafeWallet Owner Address: Status: VERIFICATION_STATUS_PENDING Verified At: Chain Selector: 5009297550715157269 # Chain selector for Ethereum Mainnet Contract Address: 0x4Ac54353FA4Fa961AfcC5ec4B118596d3305E7e5 # Workflow Registry contract address ``` Once the transaction is confirmed onchain, the status will change to `VERIFICATION_STATUS_SUCCESSFULL` and the `Verified At` timestamp will be populated. ## Learn more - **[Understanding Organizations](/cre/organization/understanding-organizations)** - Learn about organization structure and shared resources - **[Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets)** - Advanced guide for multi-sig wallet workflows - **[Account Management CLI Reference](/cre/reference/cli/account)** - Complete reference for `cre account` commands - **[Deploying Workflows](/cre/guides/operations/deploying-workflows)** - Deploy your first workflow after linking a key --- # Inviting Team Members Source: https://docs.chain.link/cre/organization/inviting-members Last Updated: 2025-11-04 To collaborate with your team on CRE workflows, you can invite members to your organization. This guide walks you through the invitation process. ## Prerequisites - You must be logged in to cre.chain.link - You must be the Owner of your organization - The email addresses you're inviting must belong to whitelisted domains ## Step 1: Navigate to Organization settings In the left sidebar, click on **"Organization"**. ## Step 2: Go to the Members tab Click on the **"Members"** tab to view your organization's members. ## Step 3: Start the invitation process Click the **"Invite"** button in the top right corner. ## Step 4: Add team member details Enter the **name** and **email address** of the person you want to invite. ### Inviting multiple members at once To invite more than one person: 1. Click the **"Add person"** button 2. Enter the name and email for each additional person 3. Repeat as needed ## Step 5: Send invitations Once you've added all the people you want to invite, click the **"Submit"** button in the bottom right corner. The invited members will receive an email invitation to join your organization. ## What happens next? After you send invitations: 1. **Email notification**: Each invited person receives an email with instructions to join your organization 2. **Account creation**: If they don't have a CRE account yet, they'll be invited to create one 3. **Automatic membership**: Once they complete account creation, they'll automatically become members of your organization—no additional acceptance step required 4.
**Access**: They'll immediately have access to your organization's workflows and resources ## Related guides - [Understanding Organizations](/cre/organization/understanding-organizations) - Learn about organization roles and permissions - [Linking Wallet Keys](/cre/organization/linking-keys) - Help new members connect their wallets --- # Capabilities Overview Source: https://docs.chain.link/cre/capabilities Last Updated: 2025-11-04 At the core of the Chainlink Runtime Environment (CRE) is the concept of **Capabilities**. A capability is a modular, decentralized service that performs a specific task. Think of capabilities as the individual "bricks" you can use to build custom workflows. Each capability is powered by its own independent Decentralized Oracle Network (DON), which is optimized for that specific task, ensuring security and reliable performance. ## Invoking Capabilities via the SDK As a developer, you do not interact with these capability DONs directly. Instead, you invoke them through the developer-friendly interfaces provided by the **CRE SDKs** ([Go](/cre/reference/sdk/core-go) and [TypeScript](/cre/reference/sdk/core-ts)), such as the [`evm.Client`](/cre/reference/sdk/evm-client) or the [`http.Client`](/cre/reference/sdk/http-client). The SDK handles the low-level complexity of communicating with the correct DON and processing the consensus-verified result, allowing you to focus on your business logic. ## Available Capabilities This section provides a high-level, conceptual overview of the capabilities currently available in CRE. - **[Triggers](/cre/capabilities/triggers)**: Event sources that start your workflow executions. - **[HTTP](/cre/capabilities/http)**: Fetch and post data from external APIs with decentralized consensus. - **[EVM Read & Write](/cre/capabilities/evm-read-write)**: Interact with smart contracts on EVM-compatible blockchains with decentralized consensus. All execution capabilities (HTTP, EVM) automatically use [built-in consensus](/cre/concepts/consensus-computing) to validate results across multiple nodes, ensuring security and reliability. --- # The Trigger Capability Source: https://docs.chain.link/cre/capabilities/triggers Last Updated: 2025-11-04 **Triggers** are a special type of capability that initiates the execution of your workflow. They are event-driven services that constantly watch for a specific condition to be met. When the condition occurs, the trigger fires and instructs CRE to run the callback function you have registered for that event. Learn more about the [trigger-and-callback model](/cre/#the-trigger-and-callback-model). ## Trigger types CRE provides several types of triggers to start your workflows: - **Time-based:** The `Cron` trigger fires at a specific time or on a recurring schedule (e.g., "every 5 minutes"). - **Request-based:** The `HTTP` trigger fires when an external system makes an HTTP request to your workflow's endpoint. When deployed, HTTP triggers require authorization keys so that only authorized addresses can trigger your workflow. - **Onchain Events:** The `EVM Log` trigger fires when a specific event is emitted by a smart contract on a supported blockchain. A minimal registration sketch appears at the end of this page. ## Learn more - **[Using Triggers Guides](/cre/guides/workflow/using-triggers/overview)**: Learn how to use the SDK to register handlers for the different trigger types. - **[Triggers SDK Reference](/cre/reference/sdk/triggers/overview)**: See the detailed API reference for trigger configurations and payloads.
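To make the registration pattern concrete, here is a minimal sketch in Go. It assumes the `Config` type and workflow scaffolding used in the Go SDK examples elsewhere in these docs, and shows only the cron trigger since that is the constructor those examples document; `onTick` is a hypothetical callback name:

```go
// A sketch of trigger registration: two handlers, one shared callback.
// Assumes the cre, cron, and slog imports from the SDK examples
// elsewhere in these docs; onTick is a hypothetical callback.
func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
	return cre.Workflow[*Config]{
		// Fires every 5 minutes.
		cre.Handler(cron.Trigger(&cron.Config{Schedule: "0 */5 * * * *"}), onTick),
		// Fires at the top of every hour; reuses the same callback.
		cre.Handler(cron.Trigger(&cron.Config{Schedule: "0 0 * * * *"}), onTick),
	}, nil
}
```

Each handler pairs one trigger with one callback, and registering several handlers lets one workflow respond to multiple schedules or event sources.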
--- # The HTTP Capability Source: https://docs.chain.link/cre/capabilities/http Last Updated: 2025-11-04 The **HTTP** capability is a decentralized service that allows your workflow to securely interact with any external, offchain API. It can be used to both **fetch data from** and **send data to** other systems. ## Why use a Capability for HTTP Requests? When your workflow needs to interact with an external API, the HTTP capability provides decentralized execution across multiple independent nodes. When you use the SDK's `http.Client` to make a request (like a `GET` or a `POST`), you are invoking this capability. CRE instructs each node in a dedicated DON to make the same API request. The nodes' individual responses are then **validated through a consensus protocol**, which ensures they agree on the result before returning it to your workflow. This provides cryptographically verified, tamper-proof execution for your offchain data operations. ## Learn more - **[API Interactions Guide](/cre/guides/workflow/using-http-client)**: Learn how to use the SDK to invoke the HTTP capability. - **[HTTP Client SDK Reference](/cre/reference/sdk/http-client)**: See the detailed API reference for the `http.Client`. --- # The EVM Read & Write Capabilities Source: https://docs.chain.link/cre/capabilities/evm-read-write Last Updated: 2025-11-04 The **EVM Read & Write** capabilities provide a secure and reliable service for your workflow to interact with smart contracts on any EVM-compatible blockchain. - **EVM Read:** Allows your workflow to call `view` and `pure` functions on a smart contract to read its state. - **EVM Write:** Allows your workflow to call state-changing functions on a smart contract to write data to the blockchain. ## How it works When you use the SDK's [`evm.Client`](/cre/reference/sdk/evm-client) to interact with a contract, you are invoking these underlying capabilities. This provides a simpler and more reliable developer experience compared to crafting raw RPC calls. The SDKs simplify contract interactions with type-safe ABI handling (Go uses generated bindings, TypeScript uses viem), while the underlying CRE infrastructure manages consensus and transaction submission. ### Key features - **Type-safe interactions**: SDKs provide type safety for contract calls (Go bindings offer compile-time safety, TypeScript uses viem for runtime type checking) - **Decentralized execution**: Read and write operations are executed across multiple nodes in a DON with cryptographic verification - **Decentralized consensus**: Multiple DON nodes independently verify read results and validate write operations before onchain submission - **Chain selector support**: Target specific blockchains using Chainlink's chain selector system - **Block number flexibility**: Read from finalized, latest, or a specific block number - **Error handling**: Comprehensive error reporting for failed calls and transactions ### Understanding EVM Write operations For **EVM Write** operations, your workflow doesn't write directly to your contract. Instead, the data follows a secure multi-step flow: 1. **Your workflow** generates a cryptographically signed report with your ABI-encoded data 2. **The EVM Write capability** submits this report to a Chainlink-managed `KeystoneForwarder` contract 3. **The forwarder** validates the report's cryptographic signatures 4. 
**The forwarder** calls your consumer contract's `onReport(bytes metadata, bytes report)` function to deliver the data. This architecture ensures decentralized consensus, cryptographic verification, and accountability. Your consumer contract must implement the `IReceiver` interface to receive data from the forwarder. Learn more in the [Onchain Write guide](/cre/guides/workflow/using-evm-client/onchain-write/overview-ts). ## Learn more - **[EVM Chain Interactions Guide](/cre/guides/workflow/using-evm-client/overview)**: Learn how to read from and write to smart contracts - **[EVM Client SDK Reference](/cre/reference/sdk/evm-client)**: Detailed API reference for the `evm.Client` --- # Consensus Computing in CRE Source: https://docs.chain.link/cre/concepts/consensus-computing Last Updated: 2025-11-04 **Consensus computing** is the foundational computing paradigm that makes CRE secure and reliable. It ensures that every operation your workflow performs—whether fetching data from an API or reading from a blockchain—is verified by multiple independent nodes before producing a final result. ## What is consensus computing? Consensus computing is a paradigm in which a decentralized network of nodes must reach consensus as part of executing code and storing information. Unlike traditional computing, where you trust a single server or service, consensus computing provides unique guarantees: - **Tamper-resistance**: No single node can manipulate results - **High availability**: The network continues operating even if individual nodes fail - **Trust minimization**: You don't need to trust any single entity - **Verifiability**: All results are cryptographically verified Blockchains pioneered consensus computing for maintaining asset ledgers and executing smart contracts. CRE extends this paradigm to **any offchain operation**—API calls, computations, and more. ## How CRE uses consensus In CRE, **every execution capability automatically includes decentralized consensus**. Here's how it works: 1. **Independent execution**: When your workflow invokes a capability (like `http.Client` or `evm.Client`), each node in the dedicated capability DON performs the operation independently 2. **Result collection**: Each node produces its own result based on what it observed 3. **Consensus protocol**: The DON applies a Byzantine Fault Tolerant (BFT) consensus protocol to validate and aggregate the individual results 4. **Verified output**: A single, consensus-verified result is returned to your workflow This process happens automatically for every capability call. You don't need to write any special code—consensus is built into the CRE runtime environment. ## Why this matters for your workflows ### Protection against node failures and manipulation CRE's consensus model protects against individual node failures and malicious behavior. When multiple independent nodes execute the same operation: - **Node-level resilience**: If some nodes fail or go offline, the network continues operating - **Byzantine Fault Tolerance**: Even if some nodes are compromised and return incorrect results, the honest majority ensures the correct outcome - **Execution consistency**: All nodes must execute your workflow logic identically, preventing manipulation by individual operators ### Validated and verified results Every result your workflow receives has been cryptographically verified and validated across multiple nodes.
This provides strong guarantees that: - The operation was executed correctly - The result matches what independent observers agreed upon - No single node operator can manipulate your workflow's execution or outputs ### Unified security model With CRE, your **entire institutional-grade smart contract**—not just the onchain parts—benefits from consensus computing. This means: - **API responses** are validated across multiple nodes before your workflow uses them - **Blockchain reads** are verified by multiple nodes - **Blockchain writes** are validated by multiple nodes before being submitted onchain - **Computation results** within your workflow are executed consistently across all nodes Your workflow inherits the same security and reliability guarantees as blockchain transactions, but for any offchain operation. ## Consensus in practice ### HTTP capability When your workflow makes an API request using `http.Client`, the HTTP capability DON executes your request across multiple nodes. Their responses are validated through a consensus protocol before returning a result to your workflow. This ensures: - Execution consistency across all nodes - Protection against individual node compromise or failure - Detection of inconsistent API responses (e.g., due to load balancing or timing) See the [HTTP Capability](/cre/capabilities/http) page for details. ### EVM Read & Write capability When your workflow reads from or writes to a blockchain using `evm.Client`, the EVM capability DON performs the operation across multiple nodes: - **For reads**: Multiple nodes independently query the blockchain, and consensus validates that their responses match - **For writes**: Multiple nodes agree on the transaction data before submitting it onchain This ensures: - Execution consistency across all nodes - Protection against individual node compromise or failure - Validated blockchain data before use in your workflow See the [EVM Read & Write Capability](/cre/capabilities/evm-read-write) page for details. ## Learn more Learn more about how to use CRE capabilities with built-in consensus: - **[Capabilities Overview](/cre/capabilities)**: Explore all available capabilities - **[API Interactions](/cre/guides/workflow/using-http-client)**: Learn how to use the HTTP capability with built-in consensus - **[EVM Chain Interactions](/cre/guides/workflow/using-evm-client/overview)**: Learn how to use the EVM capability with built-in consensus --- # Time in CRE Source: https://docs.chain.link/cre/concepts/time-in-cre Last Updated: 2025-11-04 ## The problem: Why time needs consensus Workflows often rely on time for decisions (market-hours checks), scheduling (retries/backoffs), and observability (log timestamps). In a decentralized network, nodes do not share an identical clock—clock drift, resource contention, and OS scheduling can skew each node's local time. If each node consults its own clock: - Different nodes may take **different branches** of your logic (e.g., one thinks the market is open, another does not). - Logs across nodes become **hard to correlate**. - Data fetched using time (e.g., "fetch price at timestamp N") can be **inconsistent**. **DON Time** removes these divergences by making time **deterministic in the DON**. ## The solution: DON Time **DON Time** is a timestamp computed by an OCR (Off-Chain Reporting) plugin and agreed upon by the nodes participating in CRE. You access it through the SDK's runtime call, `runtime.Now()`, not via an OS/system clock.
The `runtime.Now()` function returns a standard Go `time.Time` object. **Key properties:** - **Deterministic across nodes**: all nodes see the same timestamp. - **Sequenced per workflow**: time responses are associated with a **time-call sequence number** inside each workflow execution (1st call, 2nd call, …). Node execution timing might be slightly off, but a given call will resolve to the **same DON timestamp**. - **Low latency**: the plugin runs continuously with **delta round = 0**, and each node **transmits** results back to outstanding requests at the end of every round. - **Tamper-resistant**: workflows don't expose host machine time, reducing timing-attack surface. ## How it works: A high-level view 1. Your workflow calls **`runtime.Now()`**. 2. **The Chainlink network takes this request**: The Workflow Engine's **TimeProvider** assigns that call a **sequence number** and enqueues it in the **DON Time Store**. 3. **All the nodes agree on a single time (the DON Time)**: The **OCR Time Plugin** on each node reaches consensus on a new DON timestamp (the median of observed times). 4. Each node **returns** the newest DON timestamp to every pending request and updates its **last observed DON time** cache. 5. The result is written back into the WebAssembly execution, and your workflow continues. Because requests are sequenced, *Call 1* for a workflow instance will always return the same DON timestamp on every node. If Node A hits *Call 2* before Node B, A will block until the DON timestamp for *Call 2* is produced; when B reaches *Call 2*, it immediately reuses that value. ## Execution modes: DON mode vs. Node mode ### DON mode (default for workflows) - Time is **consensus-based** and **deterministic**. - Use for **any** logic where different outcomes across nodes would be a bug. Examples: - Market-hours gates - Time-windowed queries ("last 15 minutes") - Retry/backoff logic that must align across nodes - Timestamps used for cross-node correlation (logging, audit trails) ### Node mode (advanced / special cases) - Workflow authors handle consensus themselves. - `runtime.Now()` in Node mode is a non-blocking call that returns the **last generated DON timestamp** from the local node's cache. This is the same mechanism used by standard Go `time.Now()` calls within the Wasm environment. - Useful in situations where you already expect non-determinism (e.g., inherently variable HTTP responses). ## Best practices: Avoiding non-determinism in DON mode When running in DON mode, you get determinism **if and only if** you base time-dependent logic on DON Time. **Avoid** these patterns: - **Reading host/system time** (`time.Now()`, etc.). Always use `runtime.Now()` from the CRE SDK. - **Mixing time sources** in the same control path. - **Per-node "sleeps" based on local time** that gate deterministic decisions. **Deterministic patterns:** - ✅ Gate behavior with: ```go now := runtime.Now() if market.IsOpenAt(now) { /* proceed */ } ``` - ✅ Compute windows from DON Time: ```go now := runtime.Now() windowStart := now.Add(-15 * time.Minute) fetchData(windowStart, now) ``` ## FAQ **Is DON Time "real UTC time"?** It's the **median of node observations** per round. It closely tracks real time but prioritizes **consistency** over absolute accuracy. **What is the resolution?** New DON timestamps are produced continuously (multiple per second). Treat it as coarse-grained real time suitable for gating and logging, not sub-millisecond measurement.
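Putting the pieces above together, here is a minimal callback sketch that bases every time-dependent decision on DON Time. It assumes the standard cron-trigger scaffolding shown in the Go SDK examples (including the `time` import); `isMarketOpen` and `fetchWindow` are hypothetical helpers, not SDK functions:

```go
func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (struct{}, error) {
	logger := runtime.Logger()

	// Time call #1: every node receives the same DON timestamp for this
	// sequence number, so the gate below branches identically everywhere.
	now := runtime.Now()
	if !isMarketOpen(now) { // hypothetical market-hours check
		logger.Info("Market closed; skipping this run", "don_time", now)
		return struct{}{}, nil
	}

	// Both window endpoints derive from the same DON timestamp, so every
	// node queries exactly the same 15-minute range.
	windowStart := now.Add(-15 * time.Minute)
	logger.Info("Fetching window", "from", windowStart, "to", now)
	fetchWindow(windowStart, now) // hypothetical data fetch

	return struct{}{}, nil
}
```

If this callback called `runtime.Now()` a second time, that call would be sequenced as call 2 and resolve to a newer, but again node-identical, DON timestamp.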
--- # Random in CRE Source: https://docs.chain.link/cre/concepts/random-in-cre Last Updated: 2025-11-04 ## The problem: Why randomness needs special handling Workflows often need randomness for various purposes: generating nonces, selecting winners from a list, or creating unpredictable values. However, in a decentralized network, naive use of random number generators creates a critical problem: **If each node generates different random values, they cannot reach consensus on the workflow's output.** For example, if your workflow selects a lottery winner using each node's local random generator, different nodes would select different winners, making it impossible to agree on a single result to write onchain. ## The solution: Consensus-safe randomness CRE provides randomness through the `runtime.Rand()` method, which returns a standard Go `*rand.Rand` object. This random generator is managed by the CRE platform to ensure all nodes generate the same sequence of random values, enabling consensus while still providing unpredictability across different workflow executions. ### Usage ```go // Get the random generator from the runtime rnd, err := runtime.Rand() if err != nil { return err } // Use it with standard Go rand methods randomInt := rnd.Intn(100) // Random int in [0, 100) randomBigInt := new(big.Int).Rand(rnd, big.NewInt(1000)) // Random big.Int ``` ## Common use cases - Selecting a winner from a lottery or pool - Generating nonces for transactions - Creating random identifiers or values - Any random selection that needs to be agreed upon by all nodes ## Working with big.Int random values For Solidity `uint256` types, you often need random `*big.Int` values: ```go rnd, err := runtime.Rand() if err != nil { return err } // Generate a random number in the range [0, max) max := new(big.Int) max.SetString("1000000000000000000", 10) // 1 ETH in wei randomAmount := new(big.Int).Rand(rnd, max) // randomAmount is a random value between 0 and 1 ETH ``` ## Complete example: Random lottery Here's a complete example that demonstrates using DON mode randomness to select a lottery winner and generate a prize amount: ```go //go:build wasip1 package main import ( "fmt" "log/slog" "math/big" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/cre/wasm" ) type Config struct { Schedule string `json:"schedule"` } type MyResult struct { WinnerIndex int Winner string RandomBigInt string } func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { return cre.Workflow[*Config]{ cre.Handler(cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger), }, nil } func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() logger.Info("Running random lottery") // Define participants participants := []string{"Alice", "Bob", "Charlie", "Diana", "Eve"} logger.Info("Participants in lottery", "count", len(participants), "names", participants) // Get the DON mode random generator rnd, err := runtime.Rand() if err != nil { return nil, fmt.Errorf("failed to get random generator: %w", err) } // Select a random winner (index in range [0, 5)) winnerIndex := rnd.Intn(len(participants)) winner := participants[winnerIndex] logger.Info("Selected winner", "index", winnerIndex, "winner", winner) // Generate a random prize amount up to 1,000,000 wei maxPrize := big.NewInt(1000000) randomPrize := 
new(big.Int).Rand(rnd, maxPrize) logger.Info("Generated random prize", "amount", randomPrize.String()) // Return the results result := &MyResult{ WinnerIndex: winnerIndex, Winner: winner, RandomBigInt: randomPrize.String(), } logger.Info("Random lottery complete!", "result", result) return result, nil } func main() { wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow) } ``` **What this example demonstrates:** 1. **DON mode context**: The randomness is called directly in the trigger callback (DON mode), ensuring all nodes in the network would select the same winner and prize amount. 2. **Random selection**: Uses `rnd.Intn(len(participants))` to select a random index from the participant list. The `Intn(n)` method returns a value in the range `[0, n)`. 3. **Random big.Int for Solidity**: Generates a `*big.Int` value suitable for use with Solidity `uint256` types. 4. **Error handling**: Properly checks for errors when calling `runtime.Rand()`. When you run this workflow multiple times, each execution will select different winners and prize amounts (because each execution gets a different seed), but within a single execution, all nodes in the DON would arrive at the same winner. ## Best practices ### Do: - **Always use `runtime.Rand()`** for randomness in your workflows - **Check for errors** when calling `runtime.Rand()` ```go rnd, err := runtime.Rand() if err != nil { return fmt.Errorf("failed to get random generator: %w", err) } ``` ### Don't: - **Don't use Go's global `rand` package** directly. Always get your random generator from `runtime.Rand()` first. ## Mode-aware behavior The randomness provided by `runtime.Rand()` is **mode-aware**. The examples above demonstrate DON mode (the default execution mode for workflows). There is also a Node mode with different random behavior, used in advanced scenarios. Each mode provides a different type of randomness. ### DON mode (default) The examples above all use DON mode. In this mode: - All nodes generate the **same** random sequence - Enables consensus on random values - This is the mode your main workflow callback runs in ### Node mode When using `cre.RunInNodeMode`, you can access Node mode randomness: - Each node generates **different** random values - Useful for scenarios where per-node variability is accepted - Access via `nodeRuntime.Rand()` inside the Node mode function **Example:** ```go resultPromise := cre.RunInNodeMode(config, runtime, func(config *Config, nodeRuntime cre.NodeRuntime) (int, error) { rnd, err := nodeRuntime.Rand() if err != nil { return 0, err } // Each node generates a different value return rnd.Intn(100), nil }, cre.ConsensusMedianAggregation[int](), ) ``` ### Important: Mode isolation Random generators are tied to the mode they were created in. **Do not** attempt to use a random generator from one mode in another mode—it will cause a panic and crash your workflow. ## FAQ **Is the randomness cryptographically secure?** The randomness is sourced from the host environment's secure random generator, but the standard Go `*rand.Rand` object is **not** intended for cryptographic purposes. For cryptographic operations, use dedicated crypto libraries. **What happens if I try to use randomness in the wrong mode?** The SDK will panic with the error: `"random cannot be used outside the mode it was created in"`. This is intentional—it prevents subtle consensus bugs. **Can I use the same random generator across multiple calls?** Yes. 
Once you call `runtime.Rand()` and get a `*rand.Rand` object, you can reuse it within the same execution mode. Each call to methods like `Intn()` will produce the next value in the deterministic sequence. --- # CLI Reference Source: https://docs.chain.link/cre/reference/cli Last Updated: 2025-11-04 The CRE Command Line Interface (CLI) is your primary tool for developing, testing, deploying, and managing workflows. It handles project setup, contract binding generation (Go workflows only), local simulation, and workflow lifecycle management. ## Global flags These flags can be used with any `cre` command. | Flag | Description | | -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `-h, --help` | Displays help information for any command | | `-e, --env` | Specifies the path to your `.env` file (default: `".env"`) | | `-T, --target` | Sets the target environment from your configuration files | | `-R, --project-root` | Specifies the path to the project root directory. By default, the CLI automatically finds the project root by searching for `project.yaml` in the current directory and parent directories | | `-v, --verbose` | Enables verbose logging to print `DEBUG` level logs | ## Commands overview ### Authentication Manage your authentication and account credentials. - **`cre login`** — Authenticate with the CRE UI and save credentials locally - **`cre logout`** — Revoke authentication tokens and remove local credentials - **`cre whoami`** — Show your current account details [View authentication commands →](/cre/reference/cli/authentication) *** ### Project Setup Initialize projects and generate contract bindings (Go only). - **`cre init`** — Initialize a new CRE project with an interactive setup guide - **`cre generate-bindings`** (Go only) — Generate Go bindings from contract ABI files for type-safe contract interactions [View project setup commands →](/cre/reference/cli/project-setup) *** ### Account Management Manage your linked public key addresses for workflow operations. - **`cre account link-key`** — Link a public key address to your account - **`cre account list-key`** — List workflow owners linked to your organization - **`cre account unlink-key`** — Unlink a public key address from your account [View account management commands →](/cre/reference/cli/account) *** ### Workflow Commands Manage workflows throughout their entire lifecycle. - **`cre workflow simulate`** — Compile and execute workflows in a local simulation environment - **`cre workflow deploy`** — Deploy a workflow to the Workflow Registry contract - **`cre workflow activate`** — Activate a workflow on the Workflow Registry contract - **`cre workflow pause`** — Pause a workflow on the Workflow Registry contract - **`cre workflow delete`** — Delete all versions of a workflow from the Workflow Registry [View workflow commands →](/cre/reference/cli/workflow) *** ### Secrets Management Manage secrets stored in the Vault DON for use in your workflows. - **`cre secrets create`** — Create new secrets from a YAML file - **`cre secrets update`** — Update existing secrets - **`cre secrets delete`** — Delete secrets - **`cre secrets list`** — List secret identifiers in a namespace [View secrets management commands →](/cre/reference/cli/secrets) *** ### Utilities Additional utility commands. 
- **`cre update`** — Update the CRE CLI to the latest version - **`cre version`** — Print the current version of the CRE CLI [View utility commands →](/cre/reference/cli/utilities) ## Typical development workflow The typical workflow development process uses these commands in sequence: 1. **`cre init`** — Initialize your project 2. **`cre generate-bindings`** — Generate contract bindings from ABIs (Go workflows only, if interacting with contracts) 3. **`cre workflow simulate`** — Test your workflow locally 4. **`cre workflow deploy`** — Deploy your workflow to the registry 5. **`cre workflow activate`** / **`cre workflow pause`** — Control workflow execution ## Learn more - **[CLI Installation](/cre/getting-started/cli-installation)** — How to install and set up the CRE CLI - **[Getting Started](/cre/getting-started)** — Step-by-step tutorials using the CLI - **[Project Configuration](/cre/reference/project-configuration)** — Understanding project structure and configuration files --- # Authentication Commands Source: https://docs.chain.link/cre/reference/cli/authentication Last Updated: 2025-11-04 The authentication commands manage your CRE credentials and account information. ## `cre login` Starts the authentication flow. This command opens your browser for user login and saves your credentials locally. **Usage:** ```bash cre login ``` **Authentication steps:** 1. The CLI opens your default web browser to the login page 2. Enter your account email address 3. Enter your account password 4. Enter your one-time password (OTP) from your authenticator app 5. The CLI automatically captures and saves your credentials locally ## `cre logout` Revokes authentication tokens and removes local credentials. This invalidates the current authentication tokens and deletes stored credentials from your machine. **Usage:** ```bash cre logout ``` ## `cre whoami` Shows your current account details. This command fetches and displays your account information, including your email and organization ID. **Usage:** ```bash cre whoami ``` ## Authentication workflow The typical authentication flow: 1. **`cre login`** — Authenticate with the CRE UI (browser-based) 2. **`cre whoami`** — Verify your authentication and account details 3. **Perform CLI operations** — Deploy, manage workflows, etc. 4. **`cre logout`** — (Optional) Revoke credentials when done ## Learn more - [Getting Started](/cre/getting-started) — Includes authentication setup in the initial steps - [Account Management](/cre/reference/cli/account) — Link keys after authentication - [Deploying Workflows](/cre/guides/operations/deploying-workflows) — Requires authentication --- # Account Management Source: https://docs.chain.link/cre/reference/cli/account Last Updated: 2025-11-04 The `cre account` commands manage your linked public key addresses for workflow operations. These commands allow you to link wallet addresses to your CRE account, list linked addresses, and unlink them when needed. ## `cre account link-key` Links a public key address to your account for workflow operations. This command reads your private key from the `.env` file (for EOA) or uses your configuration (for multi-sig), derives the public address, and registers it onchain in the Workflow Registry contract. For a complete step-by-step guide with examples, see [Linking Wallet Keys](/cre/organization/linking-keys). 
**Usage:** ```bash cre account link-key [flags] ``` **Flags:** | Flag | Description | | ------------------- | --------------------------------------------------------------- | | `-l, --owner-label` | Label for the workflow owner | | `--unsigned` | Return the raw transaction instead of sending it to the network | | `--yes` | Skip the confirmation prompt and proceed with the operation | **Interactive flow:** When you run this command, the CLI will: 1. Extract your public address from the private key in your `.env` file 2. Prompt you to provide a label for this address (e.g., "Production Wallet") 3. Check if the address is already linked 4. Display transaction details (chain, contract, estimated gas cost) 5. Ask for confirmation to execute the transaction 6. Submit the transaction and provide a block explorer link **Examples:** - Interactive mode (recommended) ```bash cre account link-key ``` - Non-interactive mode with label ```bash cre account link-key --owner-label "My Production Wallet" --yes ``` ## `cre account list-key` Lists all public key addresses linked to your organization. This shows the verification status, chain information, and contract addresses for each linked key. For a complete guide on linking and managing keys, see [Linking Wallet Keys](/cre/organization/linking-keys). **Usage:** ```bash cre account list-key ``` **Example output:** ```bash Workflow owners retrieved successfully: Linked Owners: 1. my-production-wallet Owner Address: Status: VERIFICATION_STATUS_SUCCESSFULL Verified At: Chain Selector: Contract Address: ``` ## `cre account unlink-key` Unlinks a previously linked public key address from your account. This is a destructive operation that removes the address from the Workflow Registry contract and deletes all workflows registered under that address. For a complete guide on linking and unlinking keys, see [Linking Wallet Keys](/cre/organization/linking-keys). **Usage:** ```bash cre account unlink-key [flags] ``` **Flags:** | Flag | Description | | ------------ | --------------------------------------------------------------- | | `--unsigned` | Return the raw transaction instead of sending it to the network | | `--yes` | Skip the confirmation prompt and proceed with the operation | **Interactive flow:** When you run this command, the CLI will: 1. Extract your public address from the private key in your `.env` file (for EOA) or use your configuration (for multi-sig) 2. **Display a destructive action warning** about deleting all workflows 3. Ask for first confirmation to proceed with unlinking 4. Display transaction details (chain, contract, estimated gas cost) 5. Ask for second confirmation to execute the transaction 6. Submit the transaction and provide a block explorer link --- # Workflow Commands Source: https://docs.chain.link/cre/reference/cli/workflow Last Updated: 2025-11-04 The `cre workflow` commands manage workflows throughout their entire lifecycle, from local testing to deployment and ongoing management. ## `cre workflow simulate` Compiles your workflow to WASM and executes it in a local simulation environment. This is the core command for testing and debugging your workflow. **Usage:** ```bash cre workflow simulate <workflow-folder> [flags] ``` **Arguments:** - `<workflow-folder>` — (Required) Workflow folder name (e.g., `my-workflow`) or path (e.g., `./my-workflow`). When run from the project root, you can use just the folder name. The CLI looks for a `workflow.yaml` file in the workflow directory.
**Flags:** | Flag | Description | | --------------------------- | -------------------------------------------------------------------------------------------------- | | `--broadcast` | Broadcast onchain write transactions (default: `false`). Without this flag, a dry run is performed | | `-g, --engine-logs` | Enable non-fatal engine logging | | `--non-interactive` | Run without prompts; requires `--trigger-index` and inputs for the selected trigger type | | `--trigger-index <index>` | Index of the trigger to run (0-based). Required when using `--non-interactive` | | `--http-payload <payload>` | HTTP trigger payload as JSON string or path to JSON file (with or without `@` prefix) | | `--evm-tx-hash <hash>` | EVM trigger transaction hash (`0x...`). For EVM log triggers | | `--evm-event-index <index>` | EVM trigger log index (0-based). For EVM log triggers | **Examples:** - Basic simulation ```bash cre workflow simulate ./my-workflow --target local-simulation ``` - Broadcast real onchain transactions ```bash cre workflow simulate ./my-workflow --broadcast --target local-simulation ``` ## `cre workflow deploy` Deploys a workflow to the Workflow Registry contract. This command compiles your workflow, uploads the artifacts to the CRE Storage Service, and registers the workflow onchain. **Usage:** ```bash cre workflow deploy <workflow-folder> [flags] ``` **Arguments:** - `<workflow-folder>` — (Required) Workflow folder name (e.g., `my-workflow`) or path (e.g., `./my-workflow`) **Flags:** | Flag | Description | | ------------------ | ---------------------------------------------------------------------------------------------- | | `-r, --auto-start` | Activate the workflow immediately after deployment (default: `true`) | | `-o, --output` | Output file for the compiled WASM binary encoded in base64 (default: `"./binary.wasm.br.b64"`) | | `--unsigned` | Return the raw transaction instead of sending it to the network | | `--yes` | Skip the confirmation prompt and proceed with the operation | **Examples:** - Deploy workflow with auto-start (default behavior) ```bash cre workflow deploy my-workflow --target production-settings ``` - Deploy without auto-starting ```bash cre workflow deploy my-workflow --auto-start=false --target production-settings ``` - Deploy and save the compiled binary to a custom location ```bash cre workflow deploy my-workflow --output ./dist/workflow.wasm.br.b64 ``` For more details, see [Deploying Workflows](/cre/guides/operations/deploying-workflows). ## `cre workflow activate` Changes the workflow status to active on the Workflow Registry contract. Active workflows can respond to their configured triggers. **Usage:** ```bash cre workflow activate <workflow-folder> [flags] ``` **Arguments:** - `<workflow-folder>` — (Required) Workflow folder name (e.g., `my-workflow`) or path (e.g., `./my-workflow`) **Flags:** | Flag | Description | | ------------ | --------------------------------------------------------------- | | `--unsigned` | Return the raw transaction instead of sending it to the network | | `--yes` | Skip the confirmation prompt and proceed with the operation | **Example:** ```bash cre workflow activate ./my-workflow --target production-settings ``` For more details, see [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows). ## `cre workflow pause` Changes the workflow status to paused on the Workflow Registry contract. Paused workflows will not respond to triggers.
**Usage:**

```bash
cre workflow pause <workflow-folder> [flags]
```

**Arguments:**

- `<workflow-folder>` — (Required) Workflow folder name (e.g., `my-workflow`) or path (e.g., `./my-workflow`)

**Flags:**

| Flag         | Description                                                      |
| ------------ | ---------------------------------------------------------------- |
| `--unsigned` | Return the raw transaction instead of sending it to the network  |
| `--yes`      | Skip the confirmation prompt and proceed with the operation      |

**Example:**

```bash
cre workflow pause ./my-workflow --target production-settings
```

For more details, see [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows).

## `cre workflow delete`

Deletes a workflow from the Workflow Registry.

**Usage:**

```bash
cre workflow delete <workflow-folder> [flags]
```

**Arguments:**

- `<workflow-folder>` — (Required) Workflow folder name (e.g., `my-workflow`) or path (e.g., `./my-workflow`)

**Flags:**

| Flag         | Description                                                      |
| ------------ | ---------------------------------------------------------------- |
| `--unsigned` | Return the raw transaction instead of sending it to the network  |
| `--yes`      | Skip the confirmation prompt and proceed with the operation      |

**Example:**

```bash
cre workflow delete ./my-workflow --target production-settings
```

For more details, see [Deleting Workflows](/cre/guides/operations/deleting-workflows).

## Workflow lifecycle

The typical workflow lifecycle uses these commands in sequence:

1. **Develop locally** — Write and iterate on your workflow code
2. **`cre workflow simulate`** — Test your workflow in a local simulation environment
3. **`cre workflow deploy`** — Deploy your workflow to the registry (auto-starts by default)
4. **`cre workflow pause`** / **`cre workflow activate`** — Control workflow execution as needed
5. **`cre workflow deploy`** (again) — Deploy updates (replaces the existing workflow)
6. **`cre workflow delete`** — Remove the workflow when no longer needed

## Learn more

- [Deploying Workflows](/cre/guides/operations/deploying-workflows) — Detailed deployment guide
- [Activating & Pausing Workflows](/cre/guides/operations/activating-pausing-workflows) — Managing workflow state
- [Updating Deployed Workflows](/cre/guides/operations/updating-deployed-workflows) — Version management
- [Deleting Workflows](/cre/guides/operations/deleting-workflows) — Cleanup and removal

---

# Secrets Management Commands

Source: https://docs.chain.link/cre/reference/cli/secrets
Last Updated: 2025-11-04

The `cre secrets` commands manage secrets stored in the Vault DON (Decentralized Oracle Network) for deployed workflows. These commands allow you to create, update, delete, and list secrets that your workflows can access at runtime.

## Namespaces

Secrets are organized into **namespaces**, which act as logical groupings (e.g., `"main"`, `"staging"`, `"production"`). All secrets are stored in the `"main"` namespace by default.

Currently, `create`, `update`, and `delete` commands only support the default namespace. Custom namespace support may be added in future CLI versions.

## cre secrets create

Creates new secrets in the Vault DON from a YAML file.

### Usage

```bash
cre secrets create [SECRETS_FILE_PATH] [flags]
```

### Arguments

- `SECRETS_FILE_PATH` — (Required) Path to a YAML file containing the secrets to create

### Flags

| Flag         | Type     | Default | Description                                                      |
| ------------ | -------- | ------- | ----------------------------------------------------------------- |
| `--timeout`  | duration | `48h`   | Timeout for the operation (e.g., `30m`, `2h`, `48h`). Max: `7d`   |
| `--unsigned` | boolean  | `false` | Generate raw transaction data for multi-sig wallets               |

### Input file format

YAML file with `secretsNames` structure:

```yaml
secretsNames:
  API_KEY:
    - API_KEY_VALUE
  DATABASE_URL:
    - DATABASE_URL_VALUE
```

- `secretsNames` — Top-level key containing all secrets
- Each secret key (e.g., `API_KEY`) maps to an array containing an environment variable name
- Secret values are read from environment variables or `.env` file

### Examples

- Create secrets from YAML file

  ```bash
  cre secrets create my-secrets.yaml --target production-settings
  ```

- Create secrets with custom timeout

  ```bash
  cre secrets create my-secrets.yaml --timeout 1h
  ```

- Create secrets for multi-sig wallets

  ```bash
  cre secrets create my-secrets.yaml --unsigned
  ```

## cre secrets update

Updates existing secrets in the Vault DON from a YAML file.

### Usage

```bash
cre secrets update [SECRETS_FILE_PATH] [flags]
```

### Arguments

- `SECRETS_FILE_PATH` — (Required) Path to a YAML file containing the secrets to update

### Flags

| Flag         | Type     | Default | Description                                                      |
| ------------ | -------- | ------- | ----------------------------------------------------------------- |
| `--timeout`  | duration | `48h`   | Timeout for the operation (e.g., `30m`, `2h`, `48h`). Max: `7d`   |
| `--unsigned` | boolean  | `false` | Generate raw transaction data for multi-sig wallets               |

### Input file format

Same YAML format as `create`.

### Examples

- Update secrets

  ```bash
  cre secrets update my-secrets.yaml --target production-settings
  ```

- Update secrets with custom timeout

  ```bash
  cre secrets update my-secrets.yaml --timeout 6h
  ```

## cre secrets delete

Deletes secrets from the Vault DON based on a YAML file.

### Usage

```bash
cre secrets delete [SECRETS_FILE_PATH] [flags]
```

### Arguments

- `SECRETS_FILE_PATH` — (Required) Path to a YAML file containing the secrets to delete

### Flags

| Flag         | Type     | Default | Description                                                      |
| ------------ | -------- | ------- | ----------------------------------------------------------------- |
| `--timeout`  | duration | `48h`   | Timeout for the operation (e.g., `30m`, `2h`, `48h`). Max: `7d`   |
| `--unsigned` | boolean  | `false` | Generate raw transaction data for multi-sig wallets               |

### Input file format

YAML file with a simple list of secret identifiers to delete:

```yaml
secretsNames:
  - API_KEY
  - OLD_SECRET
```

### Example

```bash
cre secrets delete secrets-to-delete.yaml --target production-settings
```

## cre secrets list

Lists all secret identifiers for your owner address in a specific namespace.

### Usage

```bash
cre secrets list [flags]
```

### Flags

| Flag          | Type     | Default  | Description                                                      |
| ------------- | -------- | -------- | ----------------------------------------------------------------- |
| `--namespace` | string   | `"main"` | Namespace to list secrets from                                    |
| `--timeout`   | duration | `48h`    | Timeout for the operation (e.g., `30m`, `2h`, `48h`). Max: `7d`   |
| `--unsigned`  | boolean  | `false`  | Generate raw transaction data for multi-sig wallets               |

### Examples

- List secrets in default namespace

  ```bash
  cre secrets list --target production-settings
  ```

- List secrets in specific namespace

  ```bash
  cre secrets list --namespace production
  ```

### Output

Returns secret identifiers (not values) for the specified namespace:

```
Secret identifiers in namespace 'main':
- API_KEY
- DATABASE_URL
- WEBHOOK_SECRET
```

## Using with multi-sig wallets

All commands support the `--unsigned` flag for multi-sig operations:

```bash
cre secrets create my-secrets.yaml --unsigned
```

When `--unsigned` is used:

1. CLI generates raw transaction data instead of broadcasting
2. Transaction payload is returned for submission through your multi-sig interface
3. After multi-sig confirmation, the secrets operation proceeds

For details, see [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets).

## Learn more

- [Managing Secrets](/cre/guides/workflow/secrets) — Overview and decision tree for secrets management
- [Using Secrets in Simulation](/cre/guides/workflow/secrets/using-secrets-simulation) — For local development
- [Using Secrets with Deployed Workflows](/cre/guides/workflow/secrets/using-secrets-deployed) — Complete guide with examples
- [Managing Secrets with 1Password](/cre/guides/workflow/secrets/managing-secrets-1password) — Best practice for secure management
- [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets) — Multi-sig configuration

---

# Utility Commands

Source: https://docs.chain.link/cre/reference/cli/utilities
Last Updated: 2025-11-04

Utility commands provide helpful information and troubleshooting capabilities.

## `cre update`

Updates the CRE CLI to the latest version. This command automatically downloads and installs the newest release, making it easy to stay up to date.

**Usage:**

```bash
cre update
```

**Behavior:**

- Checks for the latest available version on GitHub
- Compares it with your currently installed version
- Automatically downloads and installs the update if a newer version is available
- Downloads the appropriate binary for your operating system and architecture
- Replaces the existing CLI binary with the new version

## `cre version`

Prints the current version of the CRE CLI.

**Usage:**

```bash
cre version
```

**Example output:**

```bash
cre version v1.0.0
```

## Learn more

- [CLI Installation](/cre/getting-started/cli-installation) — How to install and update the CRE CLI
- [CLI Reference](/cre/reference/cli) — Complete CLI command reference

---

# Avoiding Non-Determinism in Workflows

Source: https://docs.chain.link/cre/concepts/non-determinism-go
Last Updated: 2025-11-04

## The problem: Why determinism matters

When your workflow runs in DON mode, multiple nodes execute the same code independently. These nodes must reach consensus on the results before proceeding.
**If nodes execute different code paths, they generate different request IDs for capability calls, and consensus fails.**

The failure pattern: Code diverges → Different request IDs → No quorum → Workflow fails

## Quick reference: Common pitfalls

| Don't Use                        | Use Instead                                   |
| -------------------------------- | --------------------------------------------- |
| Direct map iteration             | Sort keys first, then iterate                 |
| `encoding/json` v2               | `encoding/json` v1                            |
| Protocol Buffers `proto.Marshal` | `proto.MarshalOptions{Deterministic: true}`   |
| `select` with multiple channels  | Process channels in deterministic order       |
| `time.Now()` or `time` package   | `runtime.Now()`                               |
| Go's `rand` package              | `runtime.Rand()`                              |
| LLM free-text responses          | Structured output with field-level consensus  |

## 1. Map iteration

Go maps are **designed to iterate in random order** for security reasons. Each time you iterate over a map, the order may be different. This means different nodes will process items in different sequences, leading to divergent capability calls and consensus failure.

**The problem:** Direct map iteration produces unpredictable order across nodes.

**The solution:** Extract map keys, sort them, then iterate in the sorted order. This ensures all nodes process items in the same sequence (see the consolidated sketch after section 5).

## 2. JSON and data serialization

### JSON v2 non-determinism

The `encoding/json` v2 library uses random hashing for map key order, making serialization non-deterministic. The same data structure can serialize to different JSON strings on different nodes.

**The solution:** Use `encoding/json` v1, which provides deterministic field ordering.

### Protocol Buffers serialization

The default `proto.Marshal` function does not guarantee deterministic output. Fields may be serialized in different orders across nodes.

**The solution:** Use `proto.MarshalOptions{Deterministic: true}.Marshal()` to ensure a consistent serialization order across all nodes.

## 3. Concurrency and channel selection

Go's `select` statement with multiple channels introduces non-determinism. When multiple channels are ready, `select` picks one at random. Different nodes may select different channels, causing code paths to diverge.

**The problem:** `select` with multiple ready channels picks randomly, breaking consensus.

**The solution:** Process channels in a **fixed, deterministic order** instead of using `select`. Check channels sequentially in a consistent order across all nodes.

## 4. Time and dates

Never use Go's `time` package functions in DON mode. Nodes have different system clocks, causing divergence when calling `time.Now()` or similar functions.

**The problem:** Using `time.Now()` returns different values on each node.

**The solution:** Use `runtime.Now()` from the CRE SDK, which provides DON Time—a consensus-derived timestamp that all nodes agree on. See [Time in CRE](/cre/concepts/time-in-cre) for details.

## 5. Random number generation

Go's built-in `rand` package generates different random sequences on each node, making it impossible to reach consensus on values that depend on randomness.

**The problem:** Each node generates different random values, breaking consensus.

**The solution:** Use `runtime.Rand()` from the CRE SDK, which provides consensus-safe random number generation. All nodes generate the same sequence of random values, enabling consensus. See [Random in CRE](/cre/concepts/random-in-cre) for details.
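To make the map-iteration and Protocol Buffers fixes concrete, here is a minimal, self-contained Go sketch. It is plain Go with no CRE SDK dependency, and the `structpb.Struct` message is only a stand-in for whatever protobuf type your workflow actually serializes:

```go
package main

import (
	"fmt"
	"sort"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/structpb"
)

func main() {
	prices := map[string]int{"ETH": 3000, "BTC": 60000, "LINK": 20}

	// DON'T: iterate `prices` directly with `for symbol := range prices`;
	// the order differs on every run, so nodes would diverge.

	// DO: extract the keys, sort them, then iterate in sorted order so
	// every node processes the same entries in the same sequence.
	keys := make([]string, 0, len(prices))
	for k := range prices {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("%s=%d\n", k, prices[k])
	}

	// DO: marshal protobuf messages with Deterministic: true so map fields
	// are serialized in a stable order on every node.
	msg, err := structpb.NewStruct(map[string]any{"ETH": 3000, "BTC": 60000})
	if err != nil {
		panic(err)
	}
	payload, err := proto.MarshalOptions{Deterministic: true}.Marshal(msg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("deterministic payload: %x\n", payload)
}
```

For non-string keys, the same pattern applies: build a slice of keys and sort it with `sort.Slice` using a comparison that every node computes identically.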
## 6. Working with LLMs

Large Language Models (LLMs) generate different responses for the same prompt, even with temperature set to 0. This inherent non-determinism breaks consensus in workflows.

**The problem:** Free-text responses from LLMs will vary across nodes, making it impossible to reach agreement on the output.

**The solution:** Request **structured output** from the LLM (such as JSON with specific fields) rather than free-form text. Then use consensus aggregation on the structured fields. This approach allows nodes to agree on the key data points even if the exact text varies slightly.

## Best practices summary

### Do:

- Sort map keys before iteration
- Use `encoding/json` v1 for deterministic JSON serialization
- Use `proto.MarshalOptions{Deterministic: true}` for Protocol Buffers
- Process channels in a fixed, deterministic order
- Use `runtime.Now()` for all time operations
- Use `runtime.Rand()` for random number generation
- Request structured output from LLMs

### Don't:

- Iterate over maps directly without sorting keys
- Use `encoding/json` v2 (uses random hashing)
- Use `proto.Marshal` without deterministic options
- Use `select` with multiple channels for decision-making
- Use `time.Now()` or other `time` package functions
- Use Go's `rand` package directly
- Rely on free-text LLM responses

## Related concepts

- **[Time in CRE](/cre/concepts/time-in-cre)**: Learn about DON Time and why `runtime.Now()` is required
- **[Random in CRE](/cre/concepts/random-in-cre)**: Understand consensus-safe random number generation
- **[Consensus Computing](/cre/concepts/consensus-computing)**: Deep dive into how nodes reach agreement

---

# Part 1: Project Setup & Simulation

Source: https://docs.chain.link/cre/getting-started/part-1-project-setup-go
Last Updated: 2025-11-04

In this first part, you'll go from an empty directory to a fully initialized CRE project and [simulate](/cre/guides/operations/simulating-workflows) your first, minimal workflow. The goal is to get a quick "win" and familiarize yourself with the core project structure and development loop.

## What you'll do

- Initialize a new project using `cre init`.
- Explore the generated project structure and workflow code.
- Configure your workflow for simulation.
- Run your first local simulation with `cre workflow simulate`.

## Prerequisites

Before you begin, ensure you have the following:

- **CRE CLI**: See the [Installation Guide](/cre/getting-started/cli-installation/macos-linux) for details.
- **CRE account & authentication**: You must have a CRE account and be logged in with the CLI. See [Create your account](/cre/account/creating-account) and [Log in with the CLI](/cre/account/cli-login) for instructions.
- **Go**: You must have Go version 1.24.4 or higher installed. Check your version with `go version` (see the quick check below). See [Install Go](https://go.dev/doc/install) for instructions.
- **Funded Sepolia Account**: An account with Sepolia ETH to pay for transaction gas fees. Go to [faucets.chain.link](https://faucets.chain.link) to get some Sepolia ETH.
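If you want to confirm your toolchain before starting, both checks take a few seconds (the `cre version` command is covered in the [Utility Commands](/cre/reference/cli/utilities) reference):

```bash
cre version   # prints the installed CRE CLI version, e.g. "cre version v1.0.0"
go version    # must report go1.24.4 or higher
```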
## Step 1: Verify your authentication Before initializing your project, verify that you're logged in to the CRE CLI: ```bash cre whoami ``` **Expected output:** - If you're authenticated, you'll see your account details: ```bash Account details retrieved: Email: email@domain.com Organization ID: org_AbCdEfGhIjKlMnOp ``` - If you're not logged in, you'll receive an error message prompting you to run `cre login`: ```bash Error: failed to attach credentials: failed to load credentials: you are not logged in, try running cre login ``` Run the login command and follow the prompts: ```bash cre login ``` See [Logging in with the CLI](/cre/account/cli-login) for detailed instructions if you need help. ## Step 2: Initialize your project The CRE CLI provides an `init` command to scaffold a new project. It's an interactive process that will ask you for a project name, a workflow template, and a name for your first workflow. 1. **In your terminal, navigate to a parent directory where you want your new CRE project to live.** 2. **Run the `init` command.** The CLI will guide you through the setup process: ```bash cre init ``` 3. **Provide the following details when prompted:** - **Project name**: onchain-calculator - **Language**: Select `Golang` and press Enter. - **Pick a workflow template**: Use the arrow keys to select `Helloworld: A Golang Hello World example` and press Enter. We are starting from scratch to learn all the configuration steps. - **Workflow name**: my-calculator-workflow The CLI will then create a new `onchain-calculator` directory and initialize your first workflow within it. ## Step 3: Explore the generated files The `init` command creates a directory with a standard structure and generates your first workflow code. Let's explore what was created. ### Project structure Your new project has the following structure: ``` onchain-calculator/ ├── contracts/ │ └── evm/ │ └── src/ │ ├── abi/ │ └── keystone/ ├── my-calculator-workflow/ │ ├── config.production.json │ ├── config.staging.json │ ├── main.go │ ├── README.md │ └── workflow.yaml ├── .env ├── .gitignore ├── go.mod ├── go.sum ├── project.yaml └── secrets.yaml ``` - **Project**: The top-level directory (e.g., `onchain-calculator/`). - It contains project-wide files like `project.yaml`, which holds shared configurations for all workflows within the project. - The entire project is a single Go module. - A project can contain multiple workflows. - **Workflow**: A subdirectory (e.g., `my-calculator-workflow/`) that contains source code and configuration. It functions as a Go package within the main project-level Go module. Here are the key files and their roles: | File | Role | | ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `project.yaml` | The global configuration file. Contains shared settings like RPC URLs for different environments (called `targets`). | | `.env` | Stores secrets and environment variables, like your private key. Never commit this file to version control. | | `go.mod`/`go.sum` | Manages the dependencies for the entire project, including all workflows and contract bindings. | | `contracts/evm/src/abi/` | Directory where you place contract ABI files (`.abi`). 
These are used to generate bindings, which are type-safe Go packages that make it easy to interact with your smart contracts from your workflow. The generated bindings will be saved to a new `contracts/evm/src/generated/` directory. | | `contracts/evm/src/keystone/` | Directory for Keystone-related contract files. | | `secrets.yaml` | An empty secrets configuration file created by the CLI. You'll learn how to use this for managing secrets in more advanced guides. | | `my-calculator-workflow/` | A directory containing the source code and configuration for a single workflow. It is a package within the project's main Go module. | | `├── workflow.yaml` | Contains configurations specific to this workflow, such as its name and workflow artifacts (entry point path, config file path, secrets file path). The `workflow-artifacts` section tells the CLI where to find your workflow's files. | | `├── config.staging.json` | Contains parameters for your workflow when using the `staging-settings` target, which can be accessed in your code via the `Config` object. | | `├── config.production.json` | Contains parameters for your workflow when using the `production-settings` target, which can be accessed in your code via the `Config` object. | | `└── main.go` | The heart of your workflow where you'll write your Go logic. | You don't need to understand every file and directory right now—this guide is designed to introduce each concept when you actually need it. For now, let's look at the workflow code. ### The workflow code The `init` command created a `main.go` file with a minimal `main` function. Let's replace the contents of this file with the code for a basic "Hello World!" workflow. This code defines a `Config` struct to hold parameters from our config file. It then configures a [cron trigger](/cre/reference/sdk/triggers/cron-trigger) to run on the schedule provided in the config, and registers a simple handler that logs a message. Open `onchain-calculator/my-calculator-workflow/main.go` and replace its entire content with the following code: Code snippet for onchain-calculator/my-calculator-workflow/main.go: ```go //go:build wasip1 package main import ( "log/slog" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/cre/wasm" ) // Config struct defines the parameters that can be passed to the workflow. type Config struct { Schedule string `json:"schedule"` } // The result of our workflow, which is empty for now. type MyResult struct{} // onCronTrigger is the callback function that gets executed when the cron trigger fires. func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() logger.Info("Hello, Calculator! Workflow triggered.") return &MyResult{}, nil } // InitWorkflow is the required entry point for a CRE workflow. // The runner calls this function to initialize the workflow and register its handlers. func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { return cre.Workflow[*Config]{ cre.Handler( // Use the schedule from our config file. cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger, ), }, nil } // main is the entry point for the WASM binary. func main() { // The runner is initialized with our Config struct. // It will automatically parse the config.json file into this struct. 
wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow) } ``` ## Step 4: Configure your workflow Now that you've explored the generated files, let's configure your workflow for simulation. You'll need to adjust a few configuration files. ### Update config files The CLI generates separate config files for each target environment. Your workflow code can access the parameters from whichever config file corresponds to the target you're using. Inside the `my-calculator-workflow` directory, open `config.staging.json` and add the `schedule` parameter that our `main.go` code expects: ```json { "schedule": "0 */1 * * * *" } ``` Since we'll be using the `staging-settings` target for this guide, you only need to update `config.staging.json` for now. The `config.production.json` file can remain empty. ### Review `workflow.yaml` This file tells the CLI where to find your workflow files. The `cre init` command created this file with default values. Open `my-calculator-workflow/workflow.yaml` and you'll see: ```yaml # ========================================================================== staging-settings: user-workflow: workflow-name: "my-calculator-workflow-staging" workflow-artifacts: workflow-path: "." config-path: "./config.staging.json" secrets-path: "" # ========================================================================== production-settings: user-workflow: workflow-name: "my-calculator-workflow-production" workflow-artifacts: workflow-path: "." config-path: "./config.production.json" secrets-path: "" ``` **Understanding the sections:** - **Target names** (`staging-settings`, `production-settings`): These are environment configuration sets. The `cre init` command pre-populates your `workflow.yaml` with these two common targets as a starting point, but you can name targets whatever you want (e.g., `dev`, `test`, `prod`). When running CLI commands, you specify which target to use with the `--target` flag. - **`workflow-name`**: Each target has its own workflow name with a suffix (e.g., `-staging`, `-production`). This allows you to deploy the same workflow to different environments with distinct identities. - **`workflow-path: "."`**: The entry point for your Go code (`.` means the current directory) - **`config-path`**: Each target points to its own config file (`config.staging.json` or `config.production.json`) - **`secrets-path: ""`**: The location of your secrets file (empty for now; you'll learn about secrets in more advanced guides) You don't need to modify this file for now. For this guide, we'll use `staging-settings` for local simulation. When you run `cre workflow simulate my-calculator-workflow --target staging-settings`, the CLI reads the configuration from the `staging-settings` section of this file. ### Set up your private key The simulator requires a private key to initialize its environment, even for workflows that don't interact with the blockchain yet. This key will be used in later parts of this guide to read from and send transactions to the Sepolia testnet. 1. Open the `.env` file located in your `onchain-calculator/` project root directory. 2. Add your funded Sepolia account's private key: ```bash # Replace with your own private key for your funded Sepolia account CRE_ETH_PRIVATE_KEY=YOUR_64_CHARACTER_PRIVATE_KEY_HERE ``` ## Step 5: Run your first simulation Now that your workflow is configured and dependencies are installed, you can run the simulation. 
[Workflow simulation](/cre/guides/operations/simulating-workflows) is a local execution environment that compiles your code to WebAssembly and runs it on your machine, allowing you to test and debug before deploying to a live network. Run the `simulate` command from your project root directory (the `onchain-calculator/` folder): ```bash cre workflow simulate my-calculator-workflow --target staging-settings ``` This command compiles your Go code, uses the `staging-settings` target configuration from `workflow.yaml`, and spins up a local simulation environment. ## Step 6: Review the output After the workflow compiles, the simulator detects the single trigger you defined in your code and immediately runs the workflow. ```bash Workflow compiled 2025-11-03T22:34:11Z [SIMULATION] Simulator Initialized 2025-11-03T22:34:11Z [SIMULATION] Running trigger trigger=cron-trigger@1.0.0 2025-11-03T22:34:11Z [USER LOG] msg="Hello, Calculator! Workflow triggered." Workflow Simulation Result: {} 2025-11-03T22:34:11Z [SIMULATION] Execution finished signal received 2025-11-03T22:34:11Z [SIMULATION] Skipping WorkflowEngineV2 ``` - **`[USER LOG]`**: This is the output from your own code—in this case, the `logger.Info()` call. This is where you will look for your custom log messages. - **`[SIMULATION]`**: These are system-level messages from the simulator showing its internal state (initialization, trigger execution, completion). - **`Workflow Simulation Result: {}`**: This is the final return value of your workflow. It's currently an empty object, but you will populate it in the next part of this guide. Congratulations! You've built and simulated your first CRE workflow from scratch. ## Next steps In the next section, you'll build on this foundation by modifying the workflow to fetch real data from an external API. - **[Part 2: Fetching Offchain Data](/cre/getting-started/part-2-fetching-data)** --- # Part 2: Fetching Offchain Data Source: https://docs.chain.link/cre/getting-started/part-2-fetching-data-go Last Updated: 2025-11-04 In Part 1, you successfully built and ran a minimal workflow. Now, it's time to connect it to the outside world. In this section, you will modify your workflow to fetch data from a public API using the CRE SDK's [`http.Client`](/cre/reference/sdk/http-client). ## What you'll do - Add a new URL to your workflow's config file. - Learn about the `http.SendRequest` helper for offchain operations. - Write a new function to fetch data from the public [`api.mathjs.org`](https://api.mathjs.org/) API. - Integrate the offchain data into your main workflow logic. ## Step 1: Update your configuration First, you need to add the API endpoint to your workflow's configuration. This allows you to easily change the URL without modifying your Go code. Open the `config.staging.json` file in your `my-calculator-workflow` directory and add the `apiUrl` key. Your file should now look like this: ```json { "schedule": "0 */1 * * * *", "apiUrl": "https://api.mathjs.org/v4/?expr=randomInt(1,101)" } ``` This URL calls the public mathjs.org API and uses its `randomInt(min, max)` function to return a random integer between 1 and 100. Note that the upper bound is exclusive, so we use `101` to get values up to 100. The API returns the number as a raw string in the response body. ## Step 2: Understand the `http.SendRequest` pattern Many offchain data sources are **non-deterministic**, meaning different nodes calling the same API might get slightly different answers due to timing, load balancing, or other factors. 
The `api.mathjs.org` API with the `randomInt` function is a perfect example—each call will return a different random number. The CRE SDK solves this with `http.SendRequest`, a helper function that transforms these potentially varied results into a single, highly reliable result. It works like a "map-reduce" for the DON: 1. **Map**: You provide a function (e.g., `fetchMathResult`) that will be executed by every node in the DON independently. Each node "maps" the offchain world by fetching its own version of the data. 2. **Reduce**: You provide a consensus algorithm (e.g., [`ConsensusMedianAggregation`](/cre/reference/sdk/consensus#consensusmedianaggregationt)) that takes all the individual results and "reduces" them into a single, trusted outcome. This pattern is fundamental to securely and reliably bringing offchain data into your workflow. ## Step 3: Add the HTTP fetch logic Now, let's modify your `main.go` file. You will add a new function, `fetchMathResult`, that contains the logic for calling the API. You'll also update the `onCronTrigger` function to call the `http.SendRequest` helper. Replace the entire content of `onchain-calculator/my-calculator-workflow/main.go` with the following code. Code snippet for onchain-calculator/my-calculator-workflow/main.go: ```go //go:build wasip1 package main import ( "fmt" "log/slog" "math/big" "github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/cre/wasm" ) // Add the ApiUrl to your config struct type Config struct { Schedule string `json:"schedule"` ApiUrl string `json:"apiUrl"` } type MyResult struct { Result *big.Int } func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { return cre.Workflow[*Config]{ cre.Handler( cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger, ), }, nil } // fetchMathResult is the function passed to the http.SendRequest helper. // It contains the logic for making the request and parsing the response. func fetchMathResult(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*big.Int, error) { req := &http.Request{ Url: config.ApiUrl, Method: "GET", } // Send the request using the provided sendRequester resp, err := sendRequester.SendRequest(req).Await() if err != nil { return nil, fmt.Errorf("failed to get API response: %w", err) } // The mathjs.org API returns the result as a raw string in the body. // We need to parse it into a big.Int. val, ok := new(big.Int).SetString(string(resp.Body), 10) if !ok { return nil, fmt.Errorf("failed to parse API response into big.Int") } return val, nil } // onCronTrigger is our main DON-level callback. func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() logger.Info("Hello, Calculator! Workflow triggered.") client := &http.Client{} // Use the http.SendRequest helper to execute the offchain fetch. mathPromise := http.SendRequest(config, runtime, client, fetchMathResult, // The API returns a random number, so each node can get a different result. We use Median Aggregation to find a median value. cre.ConsensusMedianAggregation[*big.Int](), ) // Await the final, aggregated result. 
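	// Awaiting this promise yields a single consensus value: each node's
	// fetchMathResult response is aggregated with the median algorithm
	// configured above before execution continues here.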
result, err := mathPromise.Await() if err != nil { return nil, err } logger.Info("Successfully fetched and aggregated math result", "result", result) return &MyResult{ Result: result, }, nil } func main() { wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow) } ``` ## Step 4: Sync your dependencies 1. **Sync Dependencies**: Your new code imports a new package. Run the following `go get` command to add the required dependency at its specific version: ```bash go get github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http@v1.0.0-beta.0 ``` 2. **Clean up and organize your module files**: After fetching the new dependencies, run `go mod tidy` to clean up the `go.mod` and `go.sum` files. ```bash go mod tidy ``` ## Step 5: Run the simulation and review the output Run the `simulate` command from your project root directory (the `onchain-calculator/` folder). Because there is only one trigger, the simulator runs it automatically. ```bash cre workflow simulate my-calculator-workflow --target staging-settings ``` The output shows the new user logs from your workflow, followed by the final `Workflow Simulation Result`. ```bash Workflow compiled 2025-11-03T22:35:51Z [SIMULATION] Simulator Initialized 2025-11-03T22:35:51Z [SIMULATION] Running trigger trigger=cron-trigger@1.0.0 2025-11-03T22:35:51Z [USER LOG] msg="Hello, Calculator! Workflow triggered." 2025-11-03T22:35:52Z [USER LOG] msg="Successfully fetched and aggregated math result" result=50 Workflow Simulation Result: { "Result": 50 } 2025-11-03T22:35:52Z [SIMULATION] Execution finished signal received 2025-11-03T22:35:52Z [SIMULATION] Skipping WorkflowEngineV2 ``` - **`[USER LOG]`**: You can now see both of your `logger.Info()` calls in the output. The second log shows the fetched and aggregated value (`result=50`), confirming that the API call and consensus worked correctly. - **`[SIMULATION]`**: These are system-level messages from the simulator showing its internal state. - **`Workflow Simulation Result`**: This is the final return value of your workflow. The `Result` field now contains the aggregated value from the API as a number. Your workflow can now fetch and process data from an external source. ## Next Steps Next, you'll learn how to interact with a smart contract to read data from the blockchain and combine it with this offchain result. - **[Part 3: Reading an Onchain Value](/cre/getting-started/part-3-reading-onchain-value)** --- # Part 3: Reading an Onchain Value Source: https://docs.chain.link/cre/getting-started/part-3-reading-onchain-value-go Last Updated: 2025-11-04 In the previous part, you successfully fetched data from an offchain API. Now, you will complete the "Onchain Calculator" by reading a value from a smart contract and combining it with your offchain result. This part of the guide introduces the core pattern for all onchain interactions: **contract bindings**. ## What you'll do - Configure your project with a Sepolia RPC URL. - Create a new Go package for a contract binding. - Use a binding to read a value from a deployed smart contract. - Integrate the onchain value into your main workflow logic. ## Step 1: The smart contract For this guide, we will interact with a simple `Storage` contract that has already been deployed to the Sepolia testnet. All it does is store a single `uint256` value. 
Here is the Solidity source code for the contract: ```solidity // SPDX-License-Identifier: MIT pragma solidity ^0.8.19; contract Storage { uint256 public value; constructor(uint256 initialValue) { value = initialValue; } function get() public view returns (uint256) { return value; } } ``` A version of this contract has been deployed to Sepolia at `0xa17CF997C28FF154eDBae1422e6a50BeF23927F4` with an `initialValue` of `22`. ## Step 2: Configure your environment To interact with a contract on Sepolia, your workflow needs EVM chain details. 1. **Contract address and chain name**: Add the deployed contract's address and chain name to your `config.staging.json` file. We use an `evms` array to hold the configuration, which makes it easy to add more contracts (or chains) later. ```json { "schedule": "0 */1 * * * *", "apiUrl": "https://api.mathjs.org/v4/?expr=randomInt(1,101)", "evms": [ { "storageAddress": "0xa17CF997C28FF154eDBae1422e6a50BeF23927F4", "chainName": "ethereum-testnet-sepolia" } ] } ``` 2. **RPC URL**: For your workflow to interact with the blockchain, it needs an RPC endpoint. The `cre init` command has already configured a public Sepolia RPC URL in your `project.yaml` file for convenience. Let's take a look at what was generated: Open your `project.yaml` file at the root of your project. Your `staging-settings` target should look like this: ```yaml # in onchain-calculator/project.yaml staging-settings: rpcs: - chain-name: ethereum-testnet-sepolia url: https://ethereum-sepolia-rpc.publicnode.com ``` This public RPC endpoint is sufficient for testing and following this guide. However, for production use or higher reliability, you should consider using a dedicated RPC provider like [Alchemy](https://www.alchemy.com/) or [Infura](https://infura.io/). ## Step 3: Create the contract binding This is the core of onchain interaction. While you *could* call the generic [`evm.Client`](/cre/reference/sdk/evm-client) directly from your main workflow logic, this is not recommended. Doing so would require you to manually handle ABI encoding and decoding for every contract call, leading to code that is hard to read and prone to errors. The recommended pattern is to create a **binding**: a separate Go package that acts as a type-safe client for your specific smart contract. The binding encapsulates all the low-level encoding/decoding logic, allowing your main workflow to remain clean and focused. In this step, you will create a binding for the `Storage` contract. 1. **Add the contract ABI**: Create a new file called `Storage.abi` in the existing `abi` directory and add the contract's ABI JSON. From your project root (`onchain-calculator/`), run the following command: ```bash touch contracts/evm/src/abi/Storage.abi ``` Open `contracts/evm/src/abi/Storage.abi` and paste the following ABI: ```json [ { "inputs": [{ "internalType": "uint256", "name": "initialValue", "type": "uint256" }], "stateMutability": "nonpayable", "type": "constructor" }, { "inputs": [], "name": "get", "outputs": [{ "internalType": "uint256", "name": "", "type": "uint256" }], "stateMutability": "view", "type": "function" }, { "inputs": [], "name": "value", "outputs": [{ "internalType": "uint256", "name": "", "type": "uint256" }], "stateMutability": "view", "type": "function" } ] ``` 2. 
**Generate the Go binding**: Run the CRE binding generator from your project root (`onchain-calculator/`): ```bash cre generate-bindings evm ``` This command will automatically generate type-safe Go bindings for all ABI files in your `contracts/evm/src/abi/` directory. It also automatically adds the required `evm` capability dependency to your `go.mod` file. The generated bindings will be placed in `contracts/evm/src/generated/`. 3. **Verify the generated files**: After running the command, you should see two new files in `contracts/evm/src/generated/storage/`: - `Storage.go` — The main binding that provides a type-safe interface for interacting with your contract - `Storage_mock.go` — A mock implementation for testing workflows without deploying contracts ## Step 4: Update your workflow logic Now you can use your new binding in your `main.go` file to read the onchain value and complete the calculation. Replace the entire content of `onchain-calculator/my-calculator-workflow/main.go` with the version below. **Note:** Lines highlighted in green indicate new or modified code compared to Part 2. Code snippet for onchain-calculator/my-calculator-workflow/main.go: ```go //go:build wasip1 package main import ( "fmt" "log/slog" "math/big" "onchain-calculator/contracts/evm/src/generated/storage" "github.com/ethereum/go-ethereum/common" "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm" "github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/cre/wasm" ) // EvmConfig defines the configuration for a single EVM chain. type EvmConfig struct { StorageAddress string `json:"storageAddress"` ChainName string `json:"chainName"` } // Config struct now contains a list of EVM configurations. // This makes it consistent with the structure used in Part 4. type Config struct { Schedule string `json:"schedule"` ApiUrl string `json:"apiUrl"` Evms []EvmConfig `json:"evms"` } type MyResult struct { FinalResult *big.Int } func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { return cre.Workflow[*Config]{ cre.Handler(cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger), }, nil } func fetchMathResult(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*big.Int, error) { req := &http.Request{Url: config.ApiUrl, Method: "GET"} resp, err := sendRequester.SendRequest(req).Await() if err != nil { return nil, fmt.Errorf("failed to get API response: %w", err) } val, ok := new(big.Int).SetString(string(resp.Body), 10) if !ok { return nil, fmt.Errorf("failed to parse API response into big.Int") } return val, nil } func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() // Step 1: Fetch offchain data (from Part 2) client := &http.Client{} mathPromise := http.SendRequest(config, runtime, client, fetchMathResult, cre.ConsensusMedianAggregation[*big.Int]()) offchainValue, err := mathPromise.Await() if err != nil { return nil, err } logger.Info("Successfully fetched offchain value", "result", offchainValue) // Get the first EVM configuration from the list. 
evmConfig := config.Evms[0] // Step 2: Read onchain data using the binding // Convert the human-readable chain name to a numeric chain selector chainSelector, err := evm.ChainSelectorFromName(evmConfig.ChainName) if err != nil { return nil, fmt.Errorf("invalid chain name: %w", err) } evmClient := &evm.Client{ ChainSelector: chainSelector, } storageAddress := common.HexToAddress(evmConfig.StorageAddress) storageContract, err := storage.NewStorage(evmClient, storageAddress, nil) if err != nil { return nil, fmt.Errorf("failed to create contract instance: %w", err) } onchainValue, err := storageContract.Get(runtime, big.NewInt(-3)).Await() // -3 means finalized block if err != nil { return nil, fmt.Errorf("failed to read onchain value: %w", err) } logger.Info("Successfully read onchain value", "result", onchainValue) // Step 3: Combine the results finalResult := new(big.Int).Add(onchainValue, offchainValue) logger.Info("Final calculated result", "result", finalResult) return &MyResult{ FinalResult: finalResult, }, nil } func main() { wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow) } ``` ## Step 5: Sync your dependencies Now that your `main.go` file has been updated to import the new `storage` binding package, run `go mod tidy` to automatically update your project's `go.mod` and `go.sum` files. ```bash go mod tidy ``` ## Step 6: Run the simulation and review the output Run the simulation from your project root directory (the `onchain-calculator/` folder). Because there is only one trigger defined, the simulator runs it automatically. ```bash cre workflow simulate my-calculator-workflow --target staging-settings ``` The simulation logs will show the end-to-end execution of your workflow. ```bash Workflow compiled 2025-11-03T22:37:05Z [SIMULATION] Simulator Initialized 2025-11-03T22:37:05Z [SIMULATION] Running trigger trigger=cron-trigger@1.0.0 2025-11-03T22:37:05Z [USER LOG] msg="Successfully fetched offchain value" result=53 2025-11-03T22:37:05Z [USER LOG] msg="Successfully read onchain value" result=22 2025-11-03T22:37:05Z [USER LOG] msg="Final calculated result" result=75 Workflow Simulation Result: { "FinalResult": 75 } 2025-11-03T22:37:05Z [SIMULATION] Execution finished signal received 2025-11-03T22:37:05Z [SIMULATION] Skipping WorkflowEngineV2 ``` - **`[USER LOG]`**: You can now see all three of your `logger.Info()` calls, showing the offchain value (`result=53`), the onchain value (`result=22`), and the final combined result (`result=75`). - **`[SIMULATION]`**: These are system-level messages from the simulator showing its internal state. - **`Workflow Simulation Result`**: This is the final, JSON-formatted return value of your workflow. The `FinalResult` field contains the sum of the offchain and onchain values (53 + 22 = 75). You have successfully built a complete CRE workflow that combines offchain and onchain data. ## Next Steps You have successfully read a value from a smart contract and combined it with offchain data. The final step is to write this new result back to the blockchain. - **[Part 4: Writing Onchain](/cre/getting-started/part-4-writing-onchain)**: Learn how to execute an onchain write transaction from your workflow to complete the project. --- # Part 4: Writing Onchain Source: https://docs.chain.link/cre/getting-started/part-4-writing-onchain-go Last Updated: 2025-11-04 In the previous parts, you successfully fetched offchain data and read from a smart contract. 
Now, you'll complete the "Onchain Calculator" by writing your computed result back to the blockchain. ## What you'll do - Generate bindings for a pre-deployed `CalculatorConsumer` contract - Modify your workflow to write data to the blockchain using the EVM capability - Execute your first onchain write transaction through CRE - Verify your result on the blockchain ## Step 1: The consumer contract To write data onchain, your workflow needs a target smart contract (a "consumer contract"). For this guide, we have pre-deployed a simple `CalculatorConsumer` contract on the Sepolia testnet. This contract is designed to receive and store the calculation results from your workflow. Here is the source code for the contract so you can see how it works: ```solidity // SPDX-License-Identifier: MIT pragma solidity ^0.8.19; import { IReceiverTemplate } from "./keystone/IReceiverTemplate.sol"; /** * @title CalculatorConsumer (Testing Version) * @notice This contract receives reports from a CRE workflow and stores the results of a calculation onchain. * @dev This version uses IReceiverTemplate without configuring any security checks, making it compatible * with the mock Forwarder used during simulation. All permission fields remain at their default zero * values (disabled). */ contract CalculatorConsumer is IReceiverTemplate { // Struct to hold the data sent in a report from the workflow struct CalculatorResult { uint256 offchainValue; int256 onchainValue; uint256 finalResult; } // --- State Variables --- CalculatorResult public latestResult; uint256 public resultCount; mapping(uint256 => CalculatorResult) public results; // --- Events --- event ResultUpdated(uint256 indexed resultId, uint256 finalResult); /** * @dev The constructor doesn't set any security checks. * The IReceiverTemplate parent constructor will initialize all permission fields to zero (disabled). */ constructor() {} /** * @notice Implements the core business logic for processing reports. * @dev This is called automatically by IReceiverTemplate's onReport function after security checks. */ function _processReport(bytes calldata report) internal override { // Decode the report bytes into our CalculatorResult struct CalculatorResult memory calculatorResult = abi.decode(report, (CalculatorResult)); // --- Core Logic --- // Update contract state with the new result resultCount++; results[resultCount] = calculatorResult; latestResult = calculatorResult; emit ResultUpdated(resultCount, calculatorResult.finalResult); } // This function is a "dry-run" utility. It allows an offchain system to check // if a prospective result is an outlier before submitting it for a real onchain update. // It is also used to guide the binding generator to create a method that accepts the CalculatorResult struct. function isResultAnomalous(CalculatorResult memory _prospectiveResult) public view returns (bool) { // A result is not considered anomalous if it's the first one. if (resultCount == 0) { return false; } // Business logic: Define an anomaly as a new result that is more than double the previous result. // This is just one example of a validation rule you could implement. return _prospectiveResult.finalResult > (latestResult.finalResult * 2); } } ``` The contract is already deployed for you on Sepolia at the following address: `0xF3abEAa889e46c6C5b9A0bD818cE54Cc4eAF8A54`. You will use this address in your configuration file. 
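To see concretely what `_processReport` receives, the sketch below (illustrative only, not part of the tutorial code) uses go-ethereum's `accounts/abi` package with arbitrary example values to pack three values the way the `CalculatorResult` report payload is ABI-encoded:

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	// CalculatorResult contains only static types, so its ABI encoding is
	// simply the three 32-byte words concatenated in field order.
	uint256Ty, err := abi.NewType("uint256", "", nil)
	if err != nil {
		panic(err)
	}
	int256Ty, err := abi.NewType("int256", "", nil)
	if err != nil {
		panic(err)
	}
	args := abi.Arguments{
		{Type: uint256Ty}, // offchainValue
		{Type: int256Ty},  // onchainValue
		{Type: uint256Ty}, // finalResult
	}

	// Example values only; a real report carries your workflow's results.
	report, err := args.Pack(big.NewInt(56), big.NewInt(22), big.NewInt(78))
	if err != nil {
		panic(err)
	}
	fmt.Printf("report (%d bytes): 0x%x\n", len(report), report)
	// Onchain, the contract reverses this with:
	//   abi.decode(report, (CalculatorResult))
}
```

In your workflow you won't do this by hand: as Step 2 explains, the generated contract binding performs this encoding for you.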
## Step 2: Generate the consumer contract binding You need to create a binding for the consumer contract so your workflow can interact with it. 1. **Add the consumer contract ABI**: Create a new file for the consumer contract ABI. From your project root (`onchain-calculator/`), run: ```bash touch contracts/evm/src/abi/CalculatorConsumer.abi ``` 2. **Add the ABI content**: Open `contracts/evm/src/abi/CalculatorConsumer.abi` and paste the contract's ABI: ```json [{"inputs":[],"stateMutability":"nonpayable","type":"constructor"},{"inputs":[{"internalType":"address","name":"received","type":"address"},{"internalType":"address","name":"expected","type":"address"}],"name":"InvalidAuthor","type":"error"},{"inputs":[{"internalType":"address","name":"sender","type":"address"},{"internalType":"address","name":"expected","type":"address"}],"name":"InvalidSender","type":"error"},{"inputs":[{"internalType":"bytes32","name":"received","type":"bytes32"},{"internalType":"bytes32","name":"expected","type":"bytes32"}],"name":"InvalidWorkflowId","type":"error"},{"inputs":[{"internalType":"bytes10","name":"received","type":"bytes10"},{"internalType":"bytes10","name":"expected","type":"bytes10"}],"name":"InvalidWorkflowName","type":"error"},{"inputs":[{"internalType":"address","name":"owner","type":"address"}],"name":"OwnableInvalidOwner","type":"error"},{"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"OwnableUnauthorizedAccount","type":"error"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"previousOwner","type":"address"},{"indexed":true,"internalType":"address","name":"newOwner","type":"address"}],"name":"OwnershipTransferred","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"uint256","name":"resultId","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"finalResult","type":"uint256"}],"name":"ResultUpdated","type":"event"},{"inputs":[],"name":"expectedAuthor","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"expectedWorkflowId","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"expectedWorkflowName","outputs":[{"internalType":"bytes10","name":"","type":"bytes10"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"forwarderAddress","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[{"components":[{"internalType":"uint256","name":"offchainValue","type":"uint256"},{"internalType":"int256","name":"onchainValue","type":"int256"},{"internalType":"uint256","name":"finalResult","type":"uint256"}],"internalType":"struct 
CalculatorConsumer.CalculatorResult","name":"_prospectiveResult","type":"tuple"}],"name":"isResultAnomalous","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"latestResult","outputs":[{"internalType":"uint256","name":"offchainValue","type":"uint256"},{"internalType":"int256","name":"onchainValue","type":"int256"},{"internalType":"uint256","name":"finalResult","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"bytes","name":"metadata","type":"bytes"},{"internalType":"bytes","name":"report","type":"bytes"}],"name":"onReport","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"owner","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"renounceOwnership","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"resultCount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"","type":"uint256"}],"name":"results","outputs":[{"internalType":"uint256","name":"offchainValue","type":"uint256"},{"internalType":"int256","name":"onchainValue","type":"int256"},{"internalType":"uint256","name":"finalResult","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"_author","type":"address"}],"name":"setExpectedAuthor","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"bytes32","name":"_id","type":"bytes32"}],"name":"setExpectedWorkflowId","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"bytes10","name":"_name","type":"bytes10"}],"name":"setExpectedWorkflowName","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"_forwarder","type":"address"}],"name":"setForwarderAddress","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"bytes4","name":"interfaceId","type":"bytes4"}],"name":"supportsInterface","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"pure","type":"function"},{"inputs":[{"internalType":"address","name":"newOwner","type":"address"}],"name":"transferOwnership","outputs":[],"stateMutability":"nonpayable","type":"function"}] ``` 3. **Generate the bindings**: Run the binding generator to create Go bindings for all ABI files in your project. From your project root (`onchain-calculator/`), run: ```bash cre generate-bindings evm ``` This will generate bindings for both the `Storage` contract (from Part 3) and the new `CalculatorConsumer` contract. For each contract, you'll see two files: `.go` (main binding) and `_mock.go` (for testing). **Generated method**: The binding generator sees the `CalculatorResult` struct and creates a `WriteReportFromCalculatorResult` method in the `CalculatorConsumer` binding that automatically handles encoding the struct and submitting it onchain. 
## Step 3: Update your workflow configuration Add the `CalculatorConsumer` contract address to your `config.staging.json`: ```json { "schedule": "0 */1 * * * *", "apiUrl": "https://api.mathjs.org/v4/?expr=randomInt(1,101)", "evms": [ { "storageAddress": "0xa17CF997C28FF154eDBae1422e6a50BeF23927F4", "calculatorConsumerAddress": "0xF3abEAa889e46c6C5b9A0bD818cE54Cc4eAF8A54", "chainName": "ethereum-testnet-sepolia", "gasLimit": 500000 } ] } ``` ## Step 4: Update your workflow logic Now modify your workflow to write the final result to the contract. The binding generator creates a `WriteReportFrom` method that automatically handles encoding the `CalculatorResult` struct. Replace the entire content of `onchain-calculator/my-calculator-workflow/main.go` with this final version. **Note:** Lines highlighted in green indicate new or modified code compared to Part 3. Code snippet for onchain-calculator/my-calculator-workflow/main.go: ```go //go:build wasip1 package main import ( "fmt" "log/slog" "math/big" "onchain-calculator/contracts/evm/src/generated/calculator_consumer" "onchain-calculator/contracts/evm/src/generated/storage" "github.com/ethereum/go-ethereum/common" "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm" "github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/cre/wasm" ) // The EvmConfig is updated from Part 3 with new fields for the write operation. type EvmConfig struct { ChainName string `json:"chainName"` StorageAddress string `json:"storageAddress"` CalculatorConsumerAddress string `json:"calculatorConsumerAddress"` GasLimit uint64 `json:"gasLimit"` } type Config struct { Schedule string `json:"schedule"` ApiUrl string `json:"apiUrl"` Evms []EvmConfig `json:"evms"` } // MyResult struct now holds all the outputs of our workflow. 
type MyResult struct {
	OffchainValue *big.Int
	OnchainValue  *big.Int
	FinalResult   *big.Int
	TxHash        string
}

func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
	return cre.Workflow[*Config]{
		cre.Handler(cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger),
	}, nil
}

func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()
	evmConfig := config.Evms[0]

	// Convert the human-readable chain name to a numeric chain selector
	chainSelector, err := evm.ChainSelectorFromName(evmConfig.ChainName)
	if err != nil {
		return nil, fmt.Errorf("invalid chain name: %w", err)
	}

	// Step 1: Fetch offchain data
	client := &http.Client{}
	mathPromise := http.SendRequest(config, runtime, client, fetchMathResult, cre.ConsensusMedianAggregation[*big.Int]())
	offchainValue, err := mathPromise.Await()
	if err != nil {
		return nil, err
	}
	logger.Info("Successfully fetched offchain value", "result", offchainValue)

	// Step 2: Read onchain data using the binding for the Storage contract
	evmClient := &evm.Client{
		ChainSelector: chainSelector,
	}
	storageAddress := common.HexToAddress(evmConfig.StorageAddress)
	storageContract, err := storage.NewStorage(evmClient, storageAddress, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to create contract instance: %w", err)
	}
	onchainValue, err := storageContract.Get(runtime, big.NewInt(-3)).Await()
	if err != nil {
		return nil, fmt.Errorf("failed to read onchain value: %w", err)
	}
	logger.Info("Successfully read onchain value", "result", onchainValue)

	// Step 3: Calculate the final result
	finalResultInt := new(big.Int).Add(onchainValue, offchainValue)
	logger.Info("Final calculated result", "result", finalResultInt)

	// Step 4: Write the result to the consumer contract
	txHash, err := updateCalculatorResult(config, runtime, chainSelector, evmConfig, offchainValue, onchainValue, finalResultInt)
	if err != nil {
		return nil, fmt.Errorf("failed to update calculator result: %w", err)
	}

	// Step 5: Log and return the final, consolidated result.
	finalWorkflowResult := &MyResult{
		OffchainValue: offchainValue,
		OnchainValue:  onchainValue,
		FinalResult:   finalResultInt,
		TxHash:        txHash,
	}
	logger.Info("Workflow finished successfully!", "result", finalWorkflowResult)
	return finalWorkflowResult, nil
}

func fetchMathResult(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*big.Int, error) {
	req := &http.Request{Url: config.ApiUrl, Method: "GET"}
	resp, err := sendRequester.SendRequest(req).Await()
	if err != nil {
		return nil, fmt.Errorf("failed to get API response: %w", err)
	}

	// The mathjs.org API returns the result as a raw string in the body.
	// We need to parse it into a number.
	val, ok := new(big.Int).SetString(string(resp.Body), 10)
	if !ok {
		return nil, fmt.Errorf("failed to parse API response into big.Int")
	}
	return val, nil
}

// updateCalculatorResult handles the logic for writing data to the CalculatorConsumer contract.
func updateCalculatorResult(config *Config, runtime cre.Runtime, chainSelector uint64, evmConfig EvmConfig, offchainValue *big.Int, onchainValue *big.Int, finalResult *big.Int) (string, error) {
	logger := runtime.Logger()
	logger.Info("Updating calculator result", "consumerAddress", evmConfig.CalculatorConsumerAddress)

	evmClient := &evm.Client{
		ChainSelector: chainSelector,
	}

	// Create a contract binding instance pointed at the CalculatorConsumer address.
	consumerAddress := common.HexToAddress(evmConfig.CalculatorConsumerAddress)
	consumerContract, err := calculator_consumer.NewCalculatorConsumer(evmClient, consumerAddress, nil)
	if err != nil {
		return "", fmt.Errorf("failed to create consumer contract instance: %w", err)
	}

	gasConfig := &evm.GasConfig{
		GasLimit: evmConfig.GasLimit,
	}

	logger.Info("Writing report to consumer contract", "offchainValue", offchainValue, "onchainValue", onchainValue, "finalResult", finalResult)

	// Call the `WriteReportFromCalculatorResult` method on the binding. This sends a secure report to the consumer.
	writeReportPromise := consumerContract.WriteReportFromCalculatorResult(runtime, calculator_consumer.CalculatorResult{
		OffchainValue: offchainValue,
		OnchainValue:  onchainValue,
		FinalResult:   finalResult,
	}, gasConfig)

	logger.Info("Waiting for write report response")
	resp, err := writeReportPromise.Await()
	if err != nil {
		logger.Error("WriteReport await failed", "error", err)
		return "", fmt.Errorf("failed to await write report: %w", err)
	}

	txHash := fmt.Sprintf("0x%x", resp.TxHash)
	logger.Info("Write report transaction succeeded", "txHash", txHash)
	logger.Info("View transaction at", "url", fmt.Sprintf("https://sepolia.etherscan.io/tx/%s", txHash))

	return txHash, nil
}

func main() {
	wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```

## Step 5: Sync your dependencies

Because the `main.go` file has been updated to import new packages for the `CalculatorConsumer` binding, you must sync your dependencies. Run `go mod tidy` to automatically download the new dependencies and update your `go.mod` and `go.sum` files.

```bash
go mod tidy
```

## Step 6: Run the simulation and review the output

Run the simulation from your project root directory (the `onchain-calculator/` folder). Because there is only one trigger, the simulator runs it automatically.

```bash
cre workflow simulate my-calculator-workflow --target staging-settings --broadcast
```

Your workflow will now show the complete end-to-end execution, including the final log of the `MyResult` struct containing the transaction hash.

```bash
Workflow compiled
2025-11-03T22:48:41Z [SIMULATION] Simulator Initialized
2025-11-03T22:48:41Z [SIMULATION] Running trigger trigger=cron-trigger@1.0.0
2025-11-03T22:48:41Z [USER LOG] msg="Successfully fetched offchain value" result=56
2025-11-03T22:48:41Z [USER LOG] msg="Successfully read onchain value" result=22
2025-11-03T22:48:41Z [USER LOG] msg="Final calculated result" result=78
2025-11-03T22:48:41Z [USER LOG] msg="Updating calculator result" consumerAddress=0xF3abEAa889e46c6C5b9A0bD818cE54Cc4eAF8A54
2025-11-03T22:48:41Z [USER LOG] msg="Writing report to consumer contract" offchainValue=56 onchainValue=22 finalResult=78
2025-11-03T22:48:41Z [USER LOG] msg="Waiting for write report response"
2025-11-03T22:48:48Z [USER LOG] msg="Write report transaction succeeded" txHash=0x86a26f848c83f37b8eace8123ec275a0af9d21b23b1fbba9cc7664b7e474314f
2025-11-03T22:48:48Z [USER LOG] msg="View transaction at" url=https://sepolia.etherscan.io/tx/0x86a26f848c83f37b8eace8123ec275a0af9d21b23b1fbba9cc7664b7e474314f
2025-11-03T22:48:48Z [USER LOG] msg="Workflow finished successfully!" result="&{OffchainValue:+56 OnchainValue:+22 FinalResult:+78 TxHash:0x86a26f848c83f37b8eace8123ec275a0af9d21b23b1fbba9cc7664b7e474314f}"
result="&{OffchainValue:+56 OnchainValue:+22 FinalResult:+78 TxHash:0x86a26f848c83f37b8eace8123ec275a0af9d21b23b1fbba9cc7664b7e474314f}" Workflow Simulation Result: { "FinalResult": 78, "OffchainValue": 56, "OnchainValue": 22, "TxHash": "0x86a26f848c83f37b8eace8123ec275a0af9d21b23b1fbba9cc7664b7e474314f" } 2025-11-03T22:48:48Z [SIMULATION] Execution finished signal received 2025-11-03T22:48:48Z [SIMULATION] Skipping WorkflowEngineV2 ``` - **`[USER LOG]`**: You can see all of your `logger.Info()` calls showing the complete workflow execution, including the offchain value (`result=56`), onchain value (`result=22`), final calculation (`result=78`), and the transaction hash. - **`[SIMULATION]`**: These are system-level messages from the simulator showing its internal state. - **`Workflow Simulation Result`**: This is the final return value of your workflow. The `MyResult` struct contains all the values (56 + 22 = 78) and the transaction hash confirming the write operation succeeded. ## Step 7: Verify the result onchain ### **1. Check the Transaction** In your terminal output, you'll see a clickable URL to view the transaction on Sepolia Etherscan: ``` [USER LOG] msg="View transaction at" url=https://sepolia.etherscan.io/tx/0x... ``` Click the URL (or copy and paste it into your browser) to see the full details of the transaction your workflow submitted. **What are you seeing on a blockchain explorer?** You'll notice the transaction's `to` address is not the `CalculatorConsumer` contract you intended to call. Instead, it's to a **Forwarder** contract. Your workflow sends a secure report to the Forwarder, which then verifies the request and makes the final call to the `CalculatorConsumer` on your workflow's behalf. To learn more, see the [Onchain Write guide](/cre/guides/workflow/using-evm-client/onchain-write). ### **2. Check the contract state** While your wallet interacted with the Forwarder, the `CalculatorConsumer` contract's state was still updated. You can verify this change directly on Etherscan: - Navigate to the `CalculatorConsumer` contract address: `0xF3abEAa889e46c6C5b9A0bD818cE54Cc4eAF8A54`. - Expand the `latestResult` function and click **Query**. The values should match the `finalResult`, `offchainValue`, and `onchainValue` from your workflow logs. This completes the end-to-end loop: triggering a workflow, fetching data, reading onchain state, and verifiably writing the result back to a public blockchain. To learn more about implementing consumer contracts and the secure write process, see these guides: - **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)**: Learn how to create your own secure consumer contracts with proper validation. - **[Onchain Write Guide](/cre/guides/workflow/using-evm-client/onchain-write)**: Dive deeper into the write patterns. ## Next steps You've now mastered the complete CRE development workflow! - **[Conclusion & Next Steps](/cre/getting-started/conclusion)**: Review what you've learned and find resources for advanced topics. --- # Using Secrets in Simulation Source: https://docs.chain.link/cre/guides/workflow/secrets/using-secrets-simulation-go Last Updated: 2025-11-04 This guide explains how to use secrets during **local development and simulation**. When you're simulating a workflow on your local machine with `cre workflow simulate`, secrets are provided via environment variables or a `.env` file. At a high level, the process follows a simple, three-step pattern: 1. 
**Declare**: You declare the logical names of your secrets in a `secrets.yaml` file. 2. **Provide**: You provide the actual secret values in a `.env` file or as environment variables. 3. **Use**: You access the secrets in your workflow code using the SDK's secret management API. This separation of concerns ensures that your workflow code is portable and your secrets are never hard-coded. ## Step-by-step guide ### Step 1: Declare your secrets (`secrets.yaml`) The first step is to create a `secrets.yaml` file in the root of your project. This file acts as a manifest, defining the "logical names" or "IDs" for the secrets your workflow will use. In this file, you map a logical name (which you'll use in your workflow code) to one environment variable name that will hold the actual secret value. **Example `secrets.yaml`:** ```yaml # in project-root/secrets.yaml secretsNames: # This is the logical ID you will use in your workflow code SECRET_ADDRESS: # This is the environment variable the CLI will look for - SECRET_ADDRESS_ALL ``` ### Step 2: Provide the secret values Next, you need to provide the actual values for the secrets. The `cre` CLI can read these values in two primary ways. #### Method 1: Using shell environment variables (Recommended) You can provide secrets as standard environment variables directly in your shell. For example, in your terminal: ```bash export SECRET_ADDRESS_ALL="0x1234567890abcdef1234567890abcdef12345678" ``` When you run the `cre workflow simulate` command in the same terminal session, the CLI will have access to this variable. #### Method 2: Using a `.env` file Create a `.env` file in your project's root directory. The `cre` CLI automatically finds this file and loads the variables defined within it into the environment for your simulation. The variable names here must match those you declared in `secrets.yaml`. **Example `.env` file:** ```bash # in project-root/.env # The variable name matches the one in secrets.yaml SECRET_ADDRESS_ALL="0x1234567890abcdef1234567890abcdef12345678" ``` ### Step 3: Use the secret in your workflow Now you can access the secret in your workflow code. The SDK provides a method to retrieve secrets using the logical ID you defined in `secrets.yaml`. The following code shows a complete, runnable workflow that triggers on a schedule, fetches a secret, and logs its value. **Example workflow:** Code snippet for Fetching Single Secret (Go): ```go //go:build wasip1 package main import ( "log/slog" protos "github.com/smartcontractkit/chainlink-protos/cre/go/sdk" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/cre/wasm" ) // Config can be an empty struct if you don't need any parameters from config.json. type Config struct{} // MyResult can be an empty struct if your workflow doesn't need to return a result. type MyResult struct{} // Define the logical name of the secret as a constant for clarity. const SecretName = "SECRET_ADDRESS" // onCronTrigger is the callback function that gets executed when the cron trigger fires. // This is where you use the secret. func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() // Build the request with the secret's logical ID. secretReq := &protos.SecretRequest{ Id: SecretName, } // Call runtime.GetSecret and await the promise. 
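	// Note: if your workflow needs more than one secret, await each GetSecret
	// call before issuing the next one; see "Fetching multiple secrets" below.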
secret, err := runtime.GetSecret(secretReq).Await() if err != nil { logger.Error("Failed to get secret", "name", SecretName, "err", err) return nil, err } // Use the secret's value. secretAddress := secret.Value logger.Info("Successfully fetched a secret!", "address", secretAddress) // ... now you can use the secretAddress in your logic ... return &MyResult{}, nil } // InitWorkflow is the required entry point for a CRE workflow. func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { return cre.Workflow[*Config]{ cre.Handler( cron.Trigger(&cron.Config{Schedule: "0 */10 * * * *"}), onCronTrigger, ), }, nil } // main is the entry point for the WASM binary. func main() { wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow) } ``` ### Step 4: Configure secrets path in `workflow.yaml` Before simulating, you need to tell the CLI where to find your secrets file. This is configured in your `workflow.yaml` file under `workflow-artifacts.secrets-path`. Open your `workflow.yaml` file and set the `secrets-path`: ```yaml local-simulation: user-workflow: workflow-name: "my-workflow" workflow-artifacts: workflow-path: "./main.go" config-path: "./config.json" secrets-path: "../secrets.yaml" # Path to your secrets file ``` Notice the path `../secrets.yaml`. Because the workflow artifacts are relative to the workflow directory, you need to point to the `secrets.yaml` file located one level up in the project root. ### Step 5: Run the simulation Now you can simulate your workflow: ```bash cre workflow simulate my-workflow --target staging-settings ``` The CLI will automatically read the `secrets-path` from your `workflow.yaml` and load the secrets from your `.env` file or environment variables you provided in your terminal session. ## Fetching multiple secrets You can fetch multiple secrets by calling the secret retrieval method multiple times within your workflow. The following example builds on the previous one. First, update your `secrets.yaml` to declare two secrets: ```yaml secretsNames: SECRET_ADDRESS: - SECRET_ADDRESS_ALL API_KEY: - API_KEY_ALL ``` Then provide the values in your `.env` file or export them as environment variables in your terminal session: ```bash export SECRET_ADDRESS_ALL="0x1234567890abcdef1234567890abcdef12345678" export API_KEY_ALL="your-api-key-here" ``` Now you can fetch both secrets in your workflow code: Code snippet for Fetching Multiple Secrets (Go): ```go //go:build wasip1 package main import ( "log/slog" protos "github.com/smartcontractkit/chainlink-protos/cre/go/sdk" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/cre/wasm" ) // Config can be an empty struct if you don't need any parameters from config.json. type Config struct{} // MyResult can be an empty struct if your workflow doesn't need to return a result. type MyResult struct{} const ( SecretAddressName = "SECRET_ADDRESS" ApiKeyName = "API_KEY" ) func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() // Important: Fetch secrets sequentially, not in parallel. // The WASM host for CRE runtime does not support parallel runtime.GetSecret() requests. // Always call GetSecret(), then Await() before making the next GetSecret() call. // 1. 
Fetch the first secret
	addressPromise := runtime.GetSecret(&protos.SecretRequest{Id: SecretAddressName})
	secretAddress, err := addressPromise.Await()
	if err != nil {
		logger.Error("Failed to get SECRET_ADDRESS", "err", err)
		return nil, err
	}

	// 2. Fetch the second secret (only after the first is complete)
	apiKeyPromise := runtime.GetSecret(&protos.SecretRequest{Id: ApiKeyName})
	apiKey, err := apiKeyPromise.Await()
	if err != nil {
		logger.Error("Failed to get API_KEY", "err", err)
		return nil, err
	}

	// 3. Use your secrets
	logger.Info("Successfully fetched secrets!",
		"address", secretAddress.Value,
		"apiKey", apiKey.Value,
	)

	return &MyResult{}, nil
}

// InitWorkflow is the required entry point for a CRE workflow.
func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
	return cre.Workflow[*Config]{
		cre.Handler(
			cron.Trigger(&cron.Config{Schedule: "0 */10 * * * *"}),
			onCronTrigger,
		),
	}, nil
}

// main is the entry point for the WASM binary.
func main() {
	wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```

---

# Onchain Read

Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-read-go
Last Updated: 2025-11-04

This guide explains how to read data from a smart contract from within your CRE workflow. The process uses [generated bindings](/cre/guides/workflow/using-evm-client/generating-bindings) and the SDK's [`evm.Client`](/cre/reference/sdk/evm-client) to create a simple, type-safe developer experience.

## The read pattern

Reading from a contract follows a simple pattern:

1. **Prerequisite - Generate bindings**: You must first [generate Go bindings](/cre/guides/workflow/using-evm-client/generating-bindings) for your smart contracts using the CRE CLI. This creates type-safe Go methods that correspond to your contract's `view` and `pure` functions.
2. **Instantiate the binding**: In your workflow logic, create an instance of your generated binding.
3. **Call a read method**: Call the desired function on the binding instance, specifying a block number. This is an asynchronous call that immediately returns a [`Promise`](/cre/reference/sdk/core/#promise).
4. **Await the result**: Call `.Await()` on the returned promise to pause execution and wait for the consensus-verified result from the DON.

## Step-by-step example

Let's assume you have followed the [generating bindings guide](/cre/guides/workflow/using-evm-client/generating-bindings) and have created a binding for the Storage contract with a `get() view returns (uint256)` function.

### 1. The generated binding

After running `cre generate-bindings evm`, your binding will contain methods that wrap the onchain functions. For the `Storage` contract's `get` function, the generated method takes the `cre.Runtime` and a block number as arguments:

```go
// In contracts/evm/src/generated/storage/storage.go
func (c Storage) Get(runtime cre.Runtime, blockNumber *big.Int) cre.Promise[*big.Int] {
	// This method handles ABI encoding, calling the evm.Client,
	// and returns a promise for the decoded value.
}
```

### 2. The workflow logic

In your main workflow file (`main.go`), you can now use this binding to read from your contract.
```go
// In your workflow's main.go
import (
	"fmt"
	"math/big"

	"contracts/evm/src/generated/storage" // Import your generated binding

	"github.com/ethereum/go-ethereum/common"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" // Needed for the cron.Payload trigger type
	"github.com/smartcontractkit/cre-sdk-go/cre"
)

func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()

	// 1. Create the EVM client with chain selector
	evmClient := &evm.Client{
		ChainSelector: config.ChainSelector, // e.g., 16015286601757825753 for Sepolia
	}

	// 2. Instantiate the contract binding
	contractAddress := common.HexToAddress(config.ContractAddress)
	storageContract, err := storage.NewStorage(evmClient, contractAddress, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to create contract instance: %w", err)
	}

	// 3. Call the read method - it returns the decoded value directly
	// See the "Block number options" section below for details on block number parameters
	storedValue, err := storageContract.Get(runtime, big.NewInt(-3)).Await() // -3 = finalized block
	if err != nil {
		logger.Error("Failed to read storage value", "err", err)
		return nil, err
	}

	logger.Info("Successfully read storage value", "value", storedValue.String())

	return &MyResult{StoredValue: storedValue}, nil
}
```

## Understanding return types

Generated bindings are designed to be **self-documenting**. The method signature tells you exactly what type you'll receive, so you don't need to guess or look up the ABI—the Go type system provides this information directly.

### Reading method signatures

When you call a read method on a generated binding, its signature shows you the return type. For example, from the `IERC20` binding:

```go
// This method returns a *big.Int
func (c IERC20) TotalSupply(
	runtime cre.Runtime,
	blockNumber *big.Int,
) cre.Promise[*big.Int] // ← The return type is right here

// This method returns a bool
func (c IERC20) Approve(
	runtime cre.Runtime,
	args ApproveInput,
	blockNumber *big.Int,
) cre.Promise[bool] // ← Returns bool
```

### Solidity-to-Go type mappings

The binding generator follows standard Ethereum conventions:

| Solidity Type            | Go Type                                                                                   |
| ------------------------ | ----------------------------------------------------------------------------------------- |
| `uint8`, `uint256`, etc. | `*big.Int`                                                                                 |
| `int8`, `int256`, etc.   | `*big.Int`                                                                                 |
| `address`                | `common.Address`                                                                           |
| `bool`                   | `bool`                                                                                     |
| `string`                 | `string`                                                                                   |
| `bytes`, `bytes32`, etc. | `[]byte`                                                                                   |
| `struct`                 | Custom Go struct ([generated](/cre/guides/workflow/using-evm-client/generating-bindings)) |

### Using your IDE

Modern IDEs will show you the method signature when you hover over a function call or use autocomplete. This makes it easy to see exactly what type you're working with:

```go
// When you type this and hover over TotalSupply, your IDE shows:
value, err := token.TotalSupply(runtime, big.NewInt(-3)).Await()
//            ↑ IDE tooltip: "func TotalSupply(...) cre.Promise[*big.Int]"
// So you know `value` is a *big.Int and can use it directly
```

### Practical usage

Because the type is explicit, you can immediately use the value with confidence:

```go
totalSupply, err := token.TotalSupply(runtime, big.NewInt(-3)).Await()
if err != nil {
	return nil, err
}

// You know it's *big.Int, so you can use it in calculations:
doubled := new(big.Int).Mul(totalSupply, big.NewInt(2))
logger.Info("Supply doubled", "result", doubled.String())
```

## Block number options

When calling contract read methods, you must specify a block number.
There are two ways to do this:

### Using magic numbers

- **Finalized block**: Use `big.NewInt(-3)` to read from the latest finalized block
- **Latest block**: Use `big.NewInt(-2)` to read from the latest block
- **Specific block**: Use `big.NewInt(blockNumber)` to read from a specific block

### Using `rpc` constants (alternative)

You can also use constants from the `go-ethereum/rpc` package for better readability:

```go
import (
	"math/big"

	"github.com/ethereum/go-ethereum/rpc"
)

// For the latest block
latestBlock := big.NewInt(rpc.LatestBlockNumber.Int64())

// For the finalized block
finalizedBlock := big.NewInt(rpc.FinalizedBlockNumber.Int64())
```

Both approaches are equivalent; use whichever you find more readable in your code.

## Complete example

This example shows a full, runnable workflow that triggers on a cron schedule and reads a value from the Storage contract.

```go
//go:build wasip1

package main

import (
	"fmt"
	"log/slog"
	"math/big"

	"contracts/evm/src/generated/storage" // Generated Storage binding

	"github.com/ethereum/go-ethereum/common"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron"
	"github.com/smartcontractkit/cre-sdk-go/cre"
	"github.com/smartcontractkit/cre-sdk-go/cre/wasm"
)

type Config struct {
	ContractAddress string `json:"contractAddress"`
	ChainSelector   uint64 `json:"chainSelector"`
}

type MyResult struct {
	StoredValue *big.Int
}

func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()

	// Create EVM client
	evmClient := &evm.Client{
		ChainSelector: config.ChainSelector,
	}

	// Create contract instance
	contractAddress := common.HexToAddress(config.ContractAddress)
	storageContract, err := storage.NewStorage(evmClient, contractAddress, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to create contract instance: %w", err)
	}

	// Call contract method - it returns the decoded type directly
	storedValue, err := storageContract.Get(runtime, big.NewInt(-3)).Await()
	if err != nil {
		return nil, fmt.Errorf("failed to read storage value: %w", err)
	}

	logger.Info("Successfully read storage value", "value", storedValue.String())

	return &MyResult{StoredValue: storedValue}, nil
}

func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
	return cre.Workflow[*Config]{
		cre.Handler(
			cron.Trigger(&cron.Config{Schedule: "*/10 * * * * *"}),
			onCronTrigger,
		),
	}, nil
}

func main() {
	wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```

## Configuration

Your workflow configuration file (`config.json`) should include both the contract address and chain selector:

```json
{
  "contractAddress": "0xYourContractAddressHere",
  "chainSelector": 16015286601757825753
}
```

You pass this file to the simulator using the `--config` flag: `cre workflow simulate --config config.json main.go`

---

# Onchain Write

Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/onchain-write/overview-go
Last Updated: 2025-11-04

This guide explains how to write data from your CRE workflow to a smart contract on the blockchain.
**What you'll learn:** - How CRE's secure write mechanism works (and why it's different from traditional web3) - What a consumer contract is and why you need one - Which approach to use based on your specific use case - How to construct Solidity-compatible types in Go ## Understanding how CRE writes work Before diving into code, it's important to understand how CRE handles onchain writes differently than traditional web3 applications. ### Why CRE doesn't write directly to your contract In a traditional web3 app, you'd create a transaction and send it directly to your smart contract. **CRE uses a different, more secure approach** for three key reasons: 1. **Decentralization**: Multiple nodes in the Decentralized Oracle Network (DON) need to agree on what data to write 2. **Verification**: The blockchain needs cryptographic proof that the data came from a trusted Chainlink network 3. **Accountability**: There must be a verifiable trail showing which workflow and owner created the data ### The secure write flow (4 steps) Here's the journey your workflow's data takes to reach the blockchain: 1. **Report generation**: Your workflow generates a ***report***— your data is ABI-encoded and wrapped in a cryptographically signed "package" 2. **DON consensus**: The DON reaches consensus on the report's contents 3. **Forwarder submission**: A designated node submits the report to a Chainlink `KeystoneForwarder` contract 4. **Delivery to your contract**: The Forwarder validates the report's signatures and calls your consumer contract's `onReport()` function with the data Your workflow code handles this process using the [`evm.Client`](/cre/reference/sdk/evm-client), which manages the interaction with the Forwarder contract. Depending on your approach (covered below), this can be fully automated via generated binding helpers or done manually with direct client calls. ## What you need: A consumer contract Before you can write data onchain, you need a **consumer contract**. This is the smart contract that will receive your workflow's data. **What is a consumer contract?** A consumer contract is **your smart contract** that implements the `IReceiver` interface. This interface defines an `onReport()` function that the Chainlink Forwarder calls to deliver your workflow's data. Think of it as a mailbox that's designed to receive packages (reports) from Chainlink's secure delivery service (the Forwarder contract). **Key requirement:** Your contract must implement the `IReceiver` interface. This single requirement ensures your contract has the necessary `onReport(bytes metadata, bytes report)` function that the Chainlink Forwarder calls to deliver data. **Getting started:** - **Don't have a consumer contract yet?** Follow the [Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts) guide to create one. - **Already have one deployed?** Great! Make sure you have its address ready. Depending on which approach you choose (see below), you may also need the contract's ABI to generate bindings. ## Choosing your approach: Which guide should you follow? Now that you have a consumer contract, the next step depends on **what type of data you're sending** and **what's available in your contract's ABI**. This determines whether you can use the easy automated approach or need to encode data manually. 
Use this table to find the guide that matches your needs:

| Your scenario | What you have | Recommended approach | Where to go |
| --- | --- | --- | --- |
| **Write a struct onchain** | Struct is in the ABI(\*) | Use the `WriteReportFrom` binding helper | [Using WriteReportFrom Helpers](/cre/guides/workflow/using-evm-client/onchain-write/using-write-report-helpers) |
| **Write a struct onchain** | Struct is NOT in the ABI(\*) | Manual tuple encoding, report generation, and report submission | [Generating Reports: Structs](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-structs), then [Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain) |
| **Write a single value onchain** | Need to send one `uint256`, `address`, `bool`, etc. | Manual ABI encoding, report generation, and report submission | [Generating Reports: Single Values](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values), then [Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain) |
| **Already have a generated report and need to submit it onchain** | A report from `runtime.GenerateReport()` | Manual submission with `evm.Client` | [Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain) |

**(\*) When is a struct included in the ABI?**

Your contract's ABI includes a struct's definition if that struct is used anywhere in the signature (as a parameter or a return value) of a `public` or `external` function. For example, this contract's ABI **will** include the `CalculatorResult` struct:

```solidity
contract MyConsumerContract {
    struct CalculatorResult {
        uint256 offchainValue;
        int256 onchainValue;
        uint256 finalResult;
    }

    // The struct is used as a parameter in a public function - it WILL be in the ABI
    function isResultAnomalous(CalculatorResult memory _prospectiveResult) public view returns (bool) {
        // ...
    }

    // The struct is used as a return value in a public function - it WILL also be in the ABI
    function getSampleResult() public pure returns (CalculatorResult memory) {
        return CalculatorResult(1, 2, 3);
    }

    // ...
}
```

**Why does this matter?** When you compile your contract, only `public` and `external` functions and their signatures are included in the ABI file. If a struct is part of that signature, its definition is also included so that external applications know how to encode and decode it. The CRE [binding generator](/cre/guides/workflow/using-evm-client/generating-bindings) reads the ABI and creates helper methods for any structs it finds there.

**What if my struct is only used internally?** If your struct is only used in `internal`/`private` functions, or only used via `abi.decode` inside functions that take `bytes`, it won't be in the ABI. In that case, use the [Generating Reports: Structs](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-structs) guide for manual encoding.

## Working with Solidity input types

Before writing data to a contract, you often need to convert or construct values from your workflow's configuration and logic into the types that Solidity expects. This section explains the common type conversions you'll encounter when preparing your data.

### Converting strings to addresses

Contract addresses are typically stored as strings in your `config.json` file. To use them with generated bindings, convert them to `common.Address`:

```go
import "github.com/ethereum/go-ethereum/common"

// From a config string
contractAddress := common.HexToAddress(config.ProxyAddress)
// contractAddress is now a common.Address

// Use it directly with bindings
contract, err := my_contract.NewMyContract(evmClient, contractAddress, nil)
```

### Creating big.Int values

All Solidity integer types (`uint8`, `uint256`, `int8`, `int256`, etc.) map to Go's `*big.Int`.
Here are the common ways to create them: **From an integer literal:** ```go import "math/big" // For small values, use big.NewInt() gasLimit := big.NewInt(1000000) amount := big.NewInt(100) ``` **From a string (for large numbers):** ```go // For values too large for int64, parse from a string largeAmount := new(big.Int) largeAmount.SetString("1000000000000000000000000", 10) // Base 10 // Or in one line value, ok := new(big.Int).SetString("123456789", 10) if !ok { return fmt.Errorf("failed to parse big.Int") } ``` **From calculations:** ```go // Arithmetic with big.Int a := big.NewInt(100) b := big.NewInt(50) sum := new(big.Int).Add(a, b) product := new(big.Int).Mul(a, b) ``` **From random numbers:** ```go // Get the runtime's random generator rnd, err := runtime.Rand() if err != nil { return err } // Generate a random big.Int in range [0, max) max := big.NewInt(1000) randomValue := new(big.Int).Rand(rnd, max) ``` **Note**: For a complete understanding of how randomness works in CRE, including the difference between DON mode and Node mode randomness, see [Random in CRE](/cre/concepts/random-in-cre). ### Constructing input structs When your contract method takes parameters, you'll need to construct the input struct generated by the bindings. The binding generator creates a struct type for each method that has parameters. ```go // Example: For a method that takes (address owner, address spender) // The generator creates an AllowanceInput struct allowanceInput := ierc20.AllowanceInput{ Owner: common.HexToAddress("0xOwnerAddress"), Spender: common.HexToAddress("0xSpenderAddress"), } // This struct can now be passed to the corresponding method ``` ### Working with bytes Solidity types like `bytes` and `bytes32` map to `[]byte` in Go. ## Learn more - **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)**: How to create a compliant contract to receive data onchain - **[Generating Bindings](/cre/guides/workflow/using-evm-client/generating-bindings)**: How to create type-safe contract bindings - **[Using WriteReportFrom Helpers](/cre/guides/workflow/using-evm-client/onchain-write/using-write-report-helpers)**: Implementation guide for the most common approach - **[EVM Client Reference](/cre/reference/sdk/evm-client)**: Complete API documentation for the `evm.Client` - **[Onchain Read](/cre/guides/workflow/using-evm-client/onchain-read)**: Reading data from smart contracts --- # EVM Chain Interactions Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/overview-go Last Updated: 2025-11-04 The `evm.Client` is the Go SDK's interface for interacting with EVM-compatible blockchains. It provides a simple, powerful, and type-safe way to read data from and write data to your onchain contracts through the underlying **[EVM Read & Write Capabilities](/cre/capabilities/evm-read-write)**. ## How it works The Go SDK uses **auto-generated contract bindings** created by the CRE CLI. These bindings provide: - Type-safe Go structs for contract functions and events - Automatic ABI encoding/decoding - Promise-based async operations with `.Await()` - Built-in helper methods for common operations This approach gives you IDE autocompletion, compile-time type checking, and eliminates manual ABI handling. 
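To make this concrete, here is a minimal sketch of that flow, reusing the `Storage` binding and `evm.Client` from the guides linked below (it assumes bindings were already generated and that `chainSelector`, `contractAddress`, and `runtime` are in scope):

```go
// Minimal sketch of the binding flow (assumes generated bindings exist and
// chainSelector, contractAddress, and runtime are already in scope).
evmClient := &evm.Client{ChainSelector: chainSelector}

storageContract, err := storage.NewStorage(evmClient, contractAddress, nil)
if err != nil {
	return nil, fmt.Errorf("failed to create contract instance: %w", err)
}

// Reads are asynchronous: the call returns a Promise, and .Await() yields
// the consensus-verified, decoded value (-3 selects the finalized block).
value, err := storageContract.Get(runtime, big.NewInt(-3)).Await()
```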
## Guides - **[Generating Contract Bindings](/cre/guides/workflow/using-evm-client/generating-bindings)**: Learn how to generate type-safe Go bindings from contract ABIs using the CRE CLI - **[Onchain Read](/cre/guides/workflow/using-evm-client/onchain-read)**: Learn how to call `view` and `pure` functions on your smart contracts to read onchain state - **[Onchain Write](/cre/guides/workflow/using-evm-client/onchain-write)**: Learn how to call state-changing functions on your smart contracts to write data to the blockchain - **[Building Consumer Contracts](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts)**: Learn how to write your own consumer contracts that can receive reports from your workflow --- # Supported Networks Source: https://docs.chain.link/cre/guides/workflow/using-evm-client/supported-networks-go Last Updated: 2025-11-04 CRE workflows support EVM read and write operations on the following blockchain networks. Use the chain names listed below when [configuring your workflows in `project.yaml`](/cre/reference/project-configuration-go#31-global-configuration-projectyaml) or when creating EVM clients in your code. ## Mainnets | Network | Chain Name | Forwarder Address | | ---------------- | ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Arbitrum One | ethereum-mainnet-arbitrum-1 | 0xF8344CFd5c43616a4366C34E3EEE75af79a74482 | | Avalanche | avalanche-mainnet | 0x76c9cf548b4179F8901cda1f8623568b58215E62 | | Base | ethereum-mainnet-base-1 | 0xF8344CFd5c43616a4366C34E3EEE75af79a74482 | | BNB Smart Chain | binance_smart_chain-mainnet | 0x76c9cf548b4179F8901cda1f8623568b58215E62 | | Ethereum Mainnet | ethereum-mainnet | 0x0b93082D9b3C7C97fAcd250082899BAcf3af3885 | | OP Mainnet | ethereum-mainnet-optimism-1 | 0xF8344CFd5c43616a4366C34E3EEE75af79a74482 | | Polygon | polygon-mainnet | 0x76c9cf548b4179F8901cda1f8623568b58215E62 | ## Testnets | Network | Chain Name | Forwarder Address | | ---------------- | ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Arbitrum Sepolia | ethereum-testnet-sepolia-arbitrum-1 | 0x76c9cf548b4179F8901cda1f8623568b58215E62 | | Avalanche Fuji | avalanche-testnet-fuji | 0x76c9cf548b4179F8901cda1f8623568b58215E62 | | Base Sepolia | ethereum-testnet-sepolia-base-1 | 0xF8344CFd5c43616a4366C34E3EEE75af79a74482 | | BSC Testnet | binance_smart_chain-testnet | 0x76c9cf548b4179F8901cda1f8623568b58215E62 | | Ethereum Sepolia | ethereum-testnet-sepolia | 0xF8344CFd5c43616a4366C34E3EEE75af79a74482 | | OP Sepolia | ethereum-testnet-sepolia-optimism-1 | 0x76c9cf548b4179F8901cda1f8623568b58215E62 | | Polygon Amoy | polygon-testnet-amoy | 0x76c9cf548b4179F8901cda1f8623568b58215E62 | ## Using these networks in your workflows To use these networks in your workflows: - **Configure RPC endpoints**: Add the chain name to your `project.yaml` file. See [Project Configuration](/cre/reference/project-configuration) for details. - **In your workflow code**: Use the chain names with the EVM Client. See the [chain selector documentation](/cre/reference/sdk/evm-client-go#chain-selectors). 
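For example, a minimal sketch of converting one of the chain names above into the numeric chain selector the `evm.Client` expects, using the `evm.ChainSelectorFromName` helper shown in the Getting Started tutorial:

```go
// Convert a chain name from the tables above into a numeric chain selector.
chainSelector, err := evm.ChainSelectorFromName("ethereum-testnet-sepolia")
if err != nil {
	return nil, fmt.Errorf("invalid chain name: %w", err)
}

// The selector is what the EVM client takes.
evmClient := &evm.Client{ChainSelector: chainSelector}
```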
--- # Making GET Requests Source: https://docs.chain.link/cre/guides/workflow/using-http-client/get-request-go Last Updated: 2025-11-04 The `http.Client` is the SDK's interface for the underlying [HTTP Capability](/cre/capabilities/http). It allows your workflow to fetch data from any external API. All HTTP requests are wrapped in a consensus mechanism to provide a single, reliable result. The SDK provides two ways to do this: - **[`http.SendRequest`](#1-the-httpsendrequest-pattern-recommended):** (Recommended) A high-level helper function that simplifies making requests. - **[`cre.RunInNodeMode`](#2-the-cre-runinnodemode-pattern-low-level):** The lower-level pattern for more complex scenarios. ## Prerequisites This guide assumes you have a basic understanding of CRE. If you are new, we strongly recommend completing the [Getting Started tutorial](/cre/getting-started/overview) first. ## Choosing your approach ### Use `http.SendRequest` (Section 1) when: - Making a **single HTTP GET request** - Your logic is straightforward: make request → parse response → return result - You want **simple, clean code** with minimal boilerplate This is the recommended approach for most use cases. ### Use `cre.RunInNodeMode` (Section 2) when: - You need **multiple HTTP requests** with logic between them - You need **conditional execution** (if/else based on runtime conditions) - You need **custom retry logic** or complex error handling - You need **complex data transformation** (fetching from multiple APIs and combining results) If you're unsure, start with Section 1. You can always migrate to Section 2 later if your requirements become more complex. ## 1. The `http.SendRequest` Pattern (Recommended) The `http.SendRequest` helper function is the simplest and recommended way to make HTTP calls. It automatically handles the `cre.RunInNodeMode` pattern for you. ### How it works The pattern involves two key functions: 1. **A Fetching Function**: You create a function (e.g., `fetchAndParse`) that contains your core logic—making the request, parsing the response, and returning a clean data struct. This function receives a `sendRequester` object to make the actual API call. 2. **Your Main Handler**: Your main trigger callback (e.g., `onCronTrigger`) calls the `http.SendRequest` helper, passing it your fetching function and a consensus method. For a full list of supported tags and methods, see the [Consensus & Aggregation reference](/cre/reference/sdk/consensus). This separation keeps your code clean and focused. ### Step-by-step example This example shows a complete workflow that fetches the price of an asset, parses it into a struct with consensus tags, and aggregates the results. #### Step 1: Configure your workflow Add the API URL to your `config.json` file. ```json { "apiUrl": "https://some-price-api.com/price?ids=ethereum" } ``` #### Step 2: Define the response structs with tags Define Go structs for the API response and your internal data model. Add `consensus_aggregation` tags to the fields of your internal struct. ```go import ( "time" "github.com/shopspring/decimal" ) // PriceData is the clean, internal struct that our workflow will use. // The tags tell the DON how to aggregate the results from multiple nodes. type PriceData struct { Price decimal.Decimal `json:"price" consensus_aggregation:"median"` LastUpdated time.Time `json:"lastUpdated" consensus_aggregation:"median"` } // ExternalApiResponse is used to parse the nested JSON from the external API. 
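// For reference, an example response body this struct decodes (shape inferred
// from the fields below; the exact payload depends on your API):
//   {"ethereum": {"usd": 3000.12, "last_updated_at": 1700000000}}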
type ExternalApiResponse struct {
	Ethereum struct {
		USD           decimal.Decimal `json:"usd"`
		LastUpdatedAt int64           `json:"last_updated_at"`
	} `json:"ethereum"`
}
```

#### Step 3: Implement the fetch and parse logic

Create the function that will be passed to `http.SendRequest`. It takes the `config`, a `logger`, and the `sendRequester` as arguments.

```go
import (
	"encoding/json"
	"fmt"
	"log/slog"
	"time"

	"github.com/shopspring/decimal"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http"
)

func fetchAndParse(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*PriceData, error) {
	// 1. Construct the request
	req := &http.Request{
		Url:    config.ApiUrl,
		Method: "GET",
	}

	// 2. Send the request using the provided sendRequester
	resp, err := sendRequester.SendRequest(req).Await()
	if err != nil {
		return nil, fmt.Errorf("failed to get API response: %w", err)
	}

	// 3. Parse the raw JSON into our ExternalApiResponse struct
	var externalResp ExternalApiResponse
	if err = json.Unmarshal(resp.Body, &externalResp); err != nil {
		return nil, fmt.Errorf("failed to parse API response: %w", err)
	}

	// 4. Transform into our internal PriceData struct and return
	priceData := &PriceData{
		Price:       externalResp.Ethereum.USD,
		LastUpdated: time.Unix(externalResp.Ethereum.LastUpdatedAt, 0),
	}

	return priceData, nil
}
```

#### Step 4: Call `http.SendRequest` from your handler

In your main `onCronTrigger` handler, call the `http.SendRequest` helper, passing it your `fetchAndParse` function and `cre.ConsensusAggregationFromTags`.

```go
func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()

	client := &http.Client{}
	pricePromise := http.SendRequest(config, runtime, client, fetchAndParse,
		// The SDK inspects the PriceData struct tags to determine the aggregation strategy.
		cre.ConsensusAggregationFromTags[*PriceData](),
	)

	result, err := pricePromise.Await()
	if err != nil {
		return nil, err
	}

	logger.Info("Successfully fetched and aggregated price data", "price", result.Price, "timestamp", result.LastUpdated)

	return &MyResult{}, nil
}
```

## 2. The `cre.RunInNodeMode` Pattern (Low-Level)

For more complex scenarios, you can use the lower-level `cre.RunInNodeMode` function directly. This gives you more control but requires more boilerplate code.

The pattern works like a "map-reduce" for the DON:

1. **Map**: You provide a function (e.g., `fetchPriceData`) that executes on every node.
2. **Reduce**: You provide a consensus algorithm (e.g., `ConsensusAggregationFromTags`) to reduce the individual results into a single outcome. For a full list of supported tags and methods, see the [Consensus & Aggregation reference](/cre/reference/sdk/consensus).

The example below is functionally identical to the `http.SendRequest` example above, but implemented using the low-level pattern.

```go
// fetchPriceData is a function that runs on each individual node.
func fetchPriceData(config *Config, nodeRuntime cre.NodeRuntime) (*PriceData, error) {
	// 1. Fetch raw data from the API
	client := &http.Client{}
	req := &http.Request{
		Url:    config.ApiUrl,
		Method: "GET",
	}

	resp, err := client.SendRequest(nodeRuntime, req).Await()
	if err != nil {
		return nil, fmt.Errorf("failed to get API response: %w", err)
	}

	// 2. Parse and transform the response (same as before)
	// ...
return priceData, nil } func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() pricePromise := cre.RunInNodeMode(config, runtime, fetchPriceData, cre.ConsensusAggregationFromTags[*PriceData](), ) result, err := pricePromise.Await() if err != nil { return nil, err } logger.Info("Successfully fetched and aggregated price data", "price", result.Price, "timestamp", result.LastUpdated) return &MyResult{}, nil } ``` ## Customizing your requests The `http.Request` struct provides several fields to customize your request. See the [HTTP Client SDK Reference](/cre/reference/sdk/http-client) for a full list of options. --- # Making POST Requests Source: https://docs.chain.link/cre/guides/workflow/using-http-client/post-request-go Last Updated: 2025-11-04 This guide explains how to use the HTTP Client to send data to an external API using a `POST` request. Because POST requests typically create resources or trigger actions, this guide shows you how to ensure your request executes only once, even though multiple nodes in the DON run your workflow. All HTTP requests are wrapped in a consensus mechanism. The SDK provides two ways to do this: - **[`http.SendRequest`](#1-the-httpsendrequest-pattern-recommended):** A high-level helper function that simplifies making requests. This is the recommended approach for most use cases. - **[`cre.RunInNodeMode`](#2-the-cre-runinnodemode-pattern-low-level):** The lower-level pattern for more complex scenarios. ## Choosing your approach ### Use `http.SendRequest` (Section 1) when: - Making a **single HTTP POST request** - Your logic is straightforward: make request → parse response → return result - You want **simple, clean code** with minimal boilerplate - You need to use secrets (fetch them first and use closures—see [Using secrets with `http.SendRequest`](#using-secrets-with-httpsendrequest-optional)) This is the recommended approach for most use cases. ### Use `cre.RunInNodeMode` (Section 2) when: - You need **multiple HTTP requests** with logic between them - You need **conditional execution** (if/else based on runtime conditions) - You're **combining HTTP with other node-level operations** - You need **custom retry logic** or complex error handling If you're unsure, start with Section 1. You can always migrate to Section 2 later if your requirements become more complex. For this example, we will use **webhook.site**, a free service that provides a unique URL to which you can send requests and see the results in real-time. ## Prerequisites This guide assumes you have a basic understanding of CRE. If you are new, we strongly recommend completing the [Getting Started tutorial](/cre/getting-started/overview) first. ## 1. The `http.SendRequest` Pattern (recommended) The `http.SendRequest` helper function is the simplest and recommended way to make `POST` requests. It automatically handles the `cre.RunInNodeMode` pattern for you. ### Step 1: Generate your unique webhook URL 1. Go to **webhook.site**. 2. Copy the unique URL provided for use in your configuration. ### Step 2: Configure your workflow In your `config.json` file, add the webhook URL: ```json { "webhookUrl": "https://webhook.site/", "schedule": "*/30 * * * * *" } ``` ### Step 3: Implement the POST request logic #### 1. Understanding single-execution with `CacheSettings` Before writing code, it's important to understand how to prevent duplicate POST requests. When your workflow runs, **all nodes in the DON execute your code**. 
For POST requests that create resources or trigger actions, this would cause duplicates. **The solution**: Use `CacheSettings` in your HTTP request. This enables a shared cache across nodes: 1. The first node makes the HTTP request and stores the response in the cache 2. Other nodes check the cache first and reuse the cached response 3. Result: Only one actual HTTP call is made, while all nodes participate in consensus **Key configuration:** - `Store: true` — Enables caching of the response - `MaxAge` — How long to accept cached responses (as a `*durationpb.Duration`) Now let's implement this pattern. #### 2. Define your data structures In your `main.go`, define the structs for your configuration, the data to be sent, and the response you want to achieve consensus on. ```go type Config struct { WebhookUrl string `json:"webhookUrl"` Schedule string `json:"schedule"` } type MyData struct { Message string `json:"message"` Value int `json:"value"` } type PostResponse struct { StatusCode uint32 `json:"statusCode" consensus_aggregation:"identical"` } type MyResult struct{} ``` #### 3. Create the data posting function Create the function that will be passed to `http.SendRequest`. It prepares the data, serializes it to JSON, and uses the `sendRequester` to send the `POST` request **with `CacheSettings`** to ensure single execution. ```go func postData(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*PostResponse, error) { // 1. Prepare the data to be sent dataToSend := MyData{ Message: "Hello there!", Value: 77, } // 2. Serialize the data to JSON body, err := json.Marshal(dataToSend) if err != nil { return nil, fmt.Errorf("failed to marshal data: %w", err) } // 3. Construct the POST request with CacheSettings req := &http.Request{ Url: config.WebhookUrl, Method: "POST", Body: body, Headers: map[string]string{ "Content-Type": "application/json", }, CacheSettings: &http.CacheSettings{ Store: true, // Enable caching MaxAge: durationpb.New(60 * time.Second), // Accept cached responses up to 1 minute old }, } // 4. Send the request and wait for the response resp, err := sendRequester.SendRequest(req).Await() if err != nil { return nil, fmt.Errorf("failed to send POST request: %w", err) } logger.Info("HTTP Response", "statusCode", resp.StatusCode, "body", string(resp.Body)) return &PostResponse{StatusCode: resp.StatusCode}, nil } ``` #### 4. Call `http.SendRequest` from your handler In your main `onCronTrigger` handler, call the `http.SendRequest` helper, passing it your `postData` function. ```go func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() client := &http.Client{} postPromise := http.SendRequest(config, runtime, client, postData, cre.ConsensusAggregationFromTags[*PostResponse](), ) _, err := postPromise.Await() if err != nil { logger.Error("POST promise failed", "error", err) return nil, err } logger.Info("Successfully sent data to webhook.") return &MyResult{}, nil } ``` #### 5. Assemble the full workflow Finally, add the `InitWorkflow` and `main` functions. 
```go func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { return cre.Workflow[*Config]{ cre.Handler( cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger, ), }, nil } func main() { wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow) } ``` #### The complete workflow file ```go //go:build wasip1 package main import ( "encoding/json" "fmt" "log/slog" "time" "github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/cre/wasm" "google.golang.org/protobuf/types/known/durationpb" ) type Config struct { WebhookUrl string `json:"webhookUrl"` Schedule string `json:"schedule"` } type MyData struct { Message string `json:"message"` Value int `json:"value"` } type PostResponse struct { StatusCode uint32 `json:"statusCode" consensus_aggregation:"identical"` } type MyResult struct{} func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { return cre.Workflow[*Config]{ cre.Handler( cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger, ), }, nil } func postData(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*PostResponse, error) { // 1. Prepare the data to be sent dataToSend := MyData{ Message: "Hello there!", Value: 77, } // 2. Serialize the data to JSON body, err := json.Marshal(dataToSend) if err != nil { return nil, fmt.Errorf("failed to marshal data: %w", err) } // 3. Construct the POST request with CacheSettings req := &http.Request{ Url: config.WebhookUrl, Method: "POST", Body: body, Headers: map[string]string{ "Content-Type": "application/json", }, CacheSettings: &http.CacheSettings{ Store: true, // Enable caching MaxAge: durationpb.New(60 * time.Second), // Accept cached responses up to 1 minute old }, } // 4. Send the request and wait for the response resp, err := sendRequester.SendRequest(req).Await() if err != nil { return nil, fmt.Errorf("failed to send POST request: %w", err) } logger.Info("HTTP Response", "statusCode", resp.StatusCode, "body", string(resp.Body)) return &PostResponse{StatusCode: resp.StatusCode}, nil } func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() client := &http.Client{} postPromise := http.SendRequest(config, runtime, client, postData, cre.ConsensusAggregationFromTags[*PostResponse](), ) _, err := postPromise.Await() if err != nil { logger.Error("POST promise failed", "error", err) return nil, err } logger.Info("Successfully sent data to webhook.") return &MyResult{}, nil } func main() { wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow) } ``` ### Step 4: Sync your dependencies 1. **Sync Dependencies**: Your code imports the following packages. Run the following `go get` commands to add them to your Go module. ```bash go get github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http@v1.0.0-beta.0 go get github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron@v1.0.0-beta.0 go get github.com/smartcontractkit/cre-sdk-go@v1.0.0 ``` 2. **Clean up and organize your module files**: ```bash go mod tidy ``` ### Step 5: Run the simulation and verify 1. **Run the simulation**: ```bash cre workflow simulate my-workflow --target staging-settings ``` 2. 
**Check webhook.site**: Open the webhook.site page with your unique URL. You should see a new request appear. Click on it to inspect the details, and you will see the JSON payload you sent. ```json { "message": "Hello there!", "value": 77 } ``` *** ### Using secrets with `http.SendRequest` (optional) If your POST request requires authentication (e.g., an API key in the headers), you can use secrets with `http.SendRequest` by fetching the secret first and using a closure pattern. #### 1. Configure your secret Add your secret to `.env`: ```bash API_KEY=your-secret-api-key ``` Add the secret declaration to `secrets.yaml`: ```yaml secretsNames: API_KEY: - API_KEY ``` For more details on configuring secrets, see [Secrets](/cre/guides/workflow/secrets). #### 2. Create a wrapper function Create a function that returns a closure capturing the API key: ```go // ResponseFunc matches the signature expected by http.SendRequest type ResponseFunc func(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*PostResponse, error) // withAPIKey returns a function that has access to the API key via closure func withAPIKey(apiKey string) ResponseFunc { return func(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*PostResponse, error) { // Prepare the data to be sent dataToSend := MyData{ Message: "Hello there!", Value: 77, } // Serialize the data to JSON body, err := json.Marshal(dataToSend) if err != nil { return nil, fmt.Errorf("failed to marshal data: %w", err) } // Construct the POST request with API key in headers req := &http.Request{ Url: config.WebhookUrl, Method: "POST", Body: body, Headers: map[string]string{ "Content-Type": "application/json", "Authorization": "Bearer " + apiKey, // Use the secret from closure }, CacheSettings: &http.CacheSettings{ Store: true, MaxAge: durationpb.New(60 * time.Second), }, } // Send the request and wait for the response resp, err := sendRequester.SendRequest(req).Await() if err != nil { return nil, fmt.Errorf("failed to send POST request: %w", err) } logger.Info("HTTP Response", "statusCode", resp.StatusCode, "body", string(resp.Body)) return &PostResponse{StatusCode: resp.StatusCode}, nil } } ``` #### 3. Update your handler to fetch the secret Modify your `onCronTrigger` handler to fetch the secret and use the wrapper function: ```go func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() // 1. Fetch the secret first secretReq := &pb.SecretRequest{ Id: "API_KEY", // The secret name from secrets.yaml } secret, err := runtime.GetSecret(secretReq).Await() if err != nil { logger.Error("Failed to get API key", "error", err) return nil, fmt.Errorf("failed to get API key: %w", err) } apiKey := secret.Value // 2. Use http.SendRequest with the closure that captures the API key client := &http.Client{} postPromise := http.SendRequest(config, runtime, client, withAPIKey(apiKey), // Pass the wrapper function cre.ConsensusAggregationFromTags[*PostResponse](), ) _, err = postPromise.Await() if err != nil { logger.Error("POST promise failed", "error", err) return nil, err } logger.Info("Successfully sent authenticated data to webhook.") return &MyResult{}, nil } ``` ## 2. The `cre.RunInNodeMode` pattern (alternative) For more complex scenarios or when you prefer working directly with the lower-level API, you can use `cre.RunInNodeMode`. This pattern gives you direct access to `nodeRuntime` within the callback function. 
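Before adding secrets, here is a minimal sketch of the pattern without authentication, reusing the `Config`, `MyData`, and `PostResponse` types defined earlier. Each node runs the inner function independently against its `nodeRuntime`, and the results are merged by the consensus descriptor:

```go
func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()

	// The function passed to cre.RunInNodeMode receives a NodeRuntime,
	// which each node uses to execute the HTTP call independently.
	postData := func(config *Config, nodeRuntime cre.NodeRuntime) (*PostResponse, error) {
		body, err := json.Marshal(MyData{Message: "Hello there!", Value: 77})
		if err != nil {
			return nil, fmt.Errorf("failed to marshal data: %w", err)
		}

		client := &http.Client{}
		req := &http.Request{
			Url:     config.WebhookUrl,
			Method:  "POST",
			Body:    body,
			Headers: map[string]string{"Content-Type": "application/json"},
			CacheSettings: &http.CacheSettings{
				Store:  true,
				MaxAge: durationpb.New(60 * time.Second),
			},
		}

		// In node mode, you call the client directly with the NodeRuntime
		resp, err := client.SendRequest(nodeRuntime, req).Await()
		if err != nil {
			return nil, fmt.Errorf("failed to send POST request: %w", err)
		}
		return &PostResponse{StatusCode: resp.StatusCode}, nil
	}

	// Node results are aggregated back into a single consensus value
	postPromise := cre.RunInNodeMode(config, runtime, postData,
		cre.ConsensusAggregationFromTags[*PostResponse](),
	)
	if _, err := postPromise.Await(); err != nil {
		logger.Error("POST promise failed", "error", err)
		return nil, err
	}

	logger.Info("Successfully sent data to webhook.")
	return &MyResult{}, nil
}
```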
### Example with secrets Here's how to make a POST request with an API key from secrets: ```go func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() // Define a function that has access to runtime (and thus secrets) postDataWithAuth := func(config *Config, nodeRuntime cre.NodeRuntime) (*PostResponse, error) { // 1. Get the API key from secrets secretReq := &pb.SecretRequest{ Id: "API_KEY", // The secret name from your secrets.yaml } secret, err := runtime.GetSecret(secretReq).Await() if err != nil { return nil, fmt.Errorf("failed to get API key: %w", err) } // Use the secret value apiKey := secret.Value // 2. Prepare the data to be sent dataToSend := MyData{ Message: "Hello there!", Value: 77, } // 3. Serialize the data to JSON body, err := json.Marshal(dataToSend) if err != nil { return nil, fmt.Errorf("failed to marshal data: %w", err) } // 4. Create HTTP client and construct request with API key client := &http.Client{} req := &http.Request{ Url: config.WebhookUrl, Method: "POST", Body: body, Headers: map[string]string{ "Content-Type": "application/json", "Authorization": "Bearer " + apiKey, // Use the secret }, CacheSettings: &http.CacheSettings{ Store: true, MaxAge: durationpb.New(60 * time.Second), }, } // 5. Send the request resp, err := client.SendRequest(nodeRuntime, req).Await() if err != nil { return nil, fmt.Errorf("failed to send POST request: %w", err) } logger.Info("HTTP Response", "statusCode", resp.StatusCode, "body", string(resp.Body)) return &PostResponse{StatusCode: resp.StatusCode}, nil } // Execute the function with consensus postPromise := cre.RunInNodeMode(config, runtime, postDataWithAuth, cre.ConsensusAggregationFromTags[*PostResponse](), ) _, err := postPromise.Await() if err != nil { logger.Error("POST promise failed", "error", err) return nil, err } logger.Info("Successfully sent data to webhook.") return &MyResult{}, nil } ``` ## Learn more - **[HTTP Client SDK Reference](/cre/reference/sdk/http-client)** — Complete API reference with all request options - **[Secrets](/cre/guides/workflow/secrets)** — Learn how to securely use API keys and sensitive data - **[GET Requests](/cre/guides/workflow/using-http-client/get-request)** — Learn how to fetch data from APIs --- # Submitting Reports via HTTP Source: https://docs.chain.link/cre/guides/workflow/using-http-client/submitting-reports-http-go Last Updated: 2025-11-04 This guide shows how to send a cryptographically signed report (generated by your workflow) to an external HTTP API. You'll learn how to write a transformation function that formats the report for your specific API's requirements. 
**What you'll learn:**

- How to use `SendReport` to submit reports via HTTP
- How to write transformation functions for different API formats
- Best practices for report submission and deduplication

## Prerequisites

- Familiarity with [making POST requests](/cre/guides/workflow/using-http-client/post-request)
- Understanding of [generating reports](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values)
- A generated report from `runtime.GenerateReport()`

## Quick start: Minimal example

Here's the simplest possible example of submitting a report via HTTP (it assumes the report was generated earlier in your workflow):

```go
func formatReportSimple(r *sdk.ReportResponse) (*http.Request, error) {
	return &http.Request{
		Url:    "https://api.example.com/reports",
		Method: "POST",
		Body:   r.RawReport, // Send the raw report bytes
		Headers: map[string]string{
			"Content-Type": "application/octet-stream",
		},
		CacheSettings: &http.CacheSettings{
			Store:  true,
			MaxAge: durationpb.New(60 * time.Second),
		},
	}, nil
}

func submitReport(config *Config, logger *slog.Logger, sendRequester *http.SendRequester, report *cre.Report) (*SubmitResponse, error) {
	resp, err := sendRequester.SendReport(*report, formatReportSimple).Await()
	if err != nil {
		return nil, fmt.Errorf("failed to send report: %w", err)
	}
	logger.Info("Report submitted", "statusCode", resp.StatusCode)
	return &SubmitResponse{Success: true}, nil
}
```

**What's happening here:**

1. `formatReportSimple` transforms the report into an HTTP request that your API understands
2. `sendRequester.SendReport()` calls your transformation function and sends the request
3. The SDK handles consensus and returns the result

The rest of this guide explains how this works and shows different formatting patterns for various API requirements.

## How it works

### The report structure

When you call `runtime.GenerateReport()`, the SDK creates a `sdk.ReportResponse` containing:

```go
type ReportResponse struct { // defined in package sdk
	RawReport     []byte                 // Your ABI-encoded data + metadata
	ReportContext []byte                 // Workflow execution context
	Sigs          []*AttributedSignature // Cryptographic signatures from DON nodes
	ConfigDigest  []byte                 // DON configuration identifier
	SeqNr         uint64                 // Sequence number
}
```

This structure contains everything your API might need:

- **`RawReport`**: The actual report data (always required)
- **`Sigs`**: Cryptographic signatures from DON nodes (for verification)
- **`ReportContext`**: Metadata about the workflow execution
- **`SeqNr`**: Sequence number

### The transformation function

Your transformation function tells the SDK how to format the report for your API:

```go
func(reportResponse *sdk.ReportResponse) (*http.Request, error)
```

**The SDK calls this function internally:**

1. You pass your transformation function to `SendReport`
2. The SDK calls it with the generated `sdk.ReportResponse`
3. Your function returns an `http.Request` formatted for your API
4. The SDK sends the request and handles consensus

**Why is this needed?** Different APIs expect different formats:

- Some want raw binary data
- Some want JSON with base64-encoded fields
- Some want signatures in headers, others in the body

The transformation function gives you complete control over the format.

## Formatting patterns

Here are common patterns for formatting reports. Choose the one that matches your API's requirements.
### Choosing the right pattern | Pattern | When to use | | ------------------------------------------------------------------------------------------------------- | --------------------------------------------------------- | | [**Pattern 1: Report in body**](#pattern-1-report-in-body-simplest) | Your API accepts raw binary data and handles decoding | | [**Pattern 2: Report + signatures in body**](#pattern-2-report-signatures-in-body) | Your API needs everything concatenated in one binary blob | | [**Pattern 3: Report in body, signatures in headers**](#pattern-3-report-in-body-signatures-in-headers) | Your API needs signatures separated for easier parsing | | [**Pattern 4: JSON-formatted report**](#pattern-4-json-formatted-report) | Your API only accepts JSON payloads | ### Pattern 1: Report in body (simplest) Use this when your API accepts raw binary data: ```go import "google.golang.org/protobuf/types/known/durationpb" import "time" func formatReportSimple(r *sdk.ReportResponse) (*http.Request, error) { return &http.Request{ Url: "https://api.example.com/reports", Method: "POST", Body: r.RawReport, // Just send the report Headers: map[string]string{ "Content-Type": "application/octet-stream", }, CacheSettings: &http.CacheSettings{ Store: true, // Enable caching MaxAge: durationpb.New(60 * time.Second), // Accept cached responses up to 60 seconds old }, }, nil } ``` ### Pattern 2: Report + signatures in body Use this when your API needs everything concatenated in one payload: ```go func formatReportWithSignatures(r *sdk.ReportResponse) (*http.Request, error) { var body []byte // Append the raw report body = append(body, r.RawReport...) // Append the context body = append(body, r.ReportContext...) // Append all signatures for _, sig := range r.Sigs { body = append(body, sig.Signature...) 
} return &http.Request{ Url: "https://api.example.com/reports", Method: "POST", Body: body, Headers: map[string]string{ "Content-Type": "application/octet-stream", }, CacheSettings: &http.CacheSettings{ Store: true, // Enable caching MaxAge: durationpb.New(60 * time.Second), // Accept cached responses up to 60 seconds old }, }, nil } ``` ### Pattern 3: Report in body, signatures in headers Use this when your API needs signatures separated for easier parsing: ```go import "encoding/base64" func formatReportWithHeaderSigs(r *sdk.ReportResponse) (*http.Request, error) { headers := make(map[string]string) headers["Content-Type"] = "application/octet-stream" // Add signatures to headers for i, sig := range r.Sigs { sigKey := fmt.Sprintf("X-Signature-%d", i) signerKey := fmt.Sprintf("X-Signer-ID-%d", i) headers[sigKey] = base64.StdEncoding.EncodeToString(sig.Signature) headers[signerKey] = fmt.Sprintf("%d", sig.SignerId) } return &http.Request{ Url: "https://api.example.com/reports", Method: "POST", Body: r.RawReport, Headers: headers, CacheSettings: &http.CacheSettings{ Store: true, MaxAge: durationpb.New(60 * time.Second), }, }, nil } ``` ### Pattern 4: JSON-formatted report Use this when your API only accepts JSON payloads: ```go import "encoding/json" type ReportPayload struct { Report string `json:"report"` ReportContext string `json:"context"` Signatures []string `json:"signatures"` ConfigDigest string `json:"configDigest"` SequenceNumber uint64 `json:"seqNr"` } func formatReportAsJSON(r *sdk.ReportResponse) (*http.Request, error) { // Extract signatures sigs := make([]string, len(r.Sigs)) for i, sig := range r.Sigs { sigs[i] = base64.StdEncoding.EncodeToString(sig.Signature) } // Create JSON payload payload := ReportPayload{ Report: base64.StdEncoding.EncodeToString(r.RawReport), ReportContext: base64.StdEncoding.EncodeToString(r.ReportContext), Signatures: sigs, ConfigDigest: base64.StdEncoding.EncodeToString(r.ConfigDigest), SequenceNumber: r.SeqNr, } body, err := json.Marshal(payload) if err != nil { return nil, fmt.Errorf("failed to marshal report: %w", err) } return &http.Request{ Url: "https://api.example.com/reports", Method: "POST", Body: body, Headers: map[string]string{ "Content-Type": "application/json", }, CacheSettings: &http.CacheSettings{ Store: true, MaxAge: durationpb.New(60 * time.Second), }, }, nil } ``` ### Understanding `CacheSettings` for reports You'll notice that all the patterns above include `CacheSettings`. This is critical for report submissions, just like it is for [POST requests](/cre/guides/workflow/using-http-client/post-request). For a complete explanation of how `CacheSettings` works in general, see [Understanding `CacheSettings` behavior](/cre/reference/sdk/http-client#understanding-cachesettings-behavior) in the HTTP Client reference. **Why use `CacheSettings`?** When a workflow executes, **all nodes in the DON** attempt to send the report to your API. Without caching, your API would receive multiple identical submissions (one from each node). `CacheSettings` prevents this by having the first node cache the response, which other nodes can reuse. **Why are cache hits limited for reports?** Unlike regular POST requests where caching can be very effective, **reports have a more limited cache effectiveness** due to signature variance: 1. Each DON node generates its own **unique cryptographic signature** for the report 2. These signatures are part of the `sdk.ReportResponse` structure 3. 
When nodes construct the HTTP request body (whether concatenating signatures or including them in headers), the signatures differ.

**In practice:** Even though cache hits are limited, you should still include `CacheSettings` to prevent worst-case scenarios where all nodes hit your API simultaneously.

**The real solution: API-side deduplication**

Because caching alone cannot prevent all duplicate submissions, your receiving API **must implement its own deduplication logic**:

- Use the **hash of the report** (`keccak256(RawReport)`) as the unique identifier
- Store this hash when processing a report
- Reject any subsequent submissions with the same hash

This approach is reliable because the `RawReport` is identical across all nodes—only the signatures vary.

## Using `SendReport` (recommended approach)

Use the high-level `http.SendRequest` pattern with `sendRequester.SendReport()`:

```go
func submitReportViaHTTP(config *Config, logger *slog.Logger, sendRequester *http.SendRequester, report *cre.Report) (*SubmitResponse, error) {
	resp, err := sendRequester.SendReport(*report, formatReportSimple).Await()
	if err != nil {
		return nil, fmt.Errorf("failed to send report: %w", err)
	}

	if resp.StatusCode != 200 {
		return nil, fmt.Errorf("API returned error: status=%d", resp.StatusCode)
	}

	logger.Info("Report submitted successfully", "statusCode", resp.StatusCode)
	return &SubmitResponse{Success: true}, nil
}

// In your trigger callback
func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()
	client := &http.Client{}

	// Assume 'report' was generated earlier in your workflow;
	// pass it to the submission function via a closure.
	submitPromise := http.SendRequest(config, runtime, client,
		func(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*SubmitResponse, error) {
			return submitReportViaHTTP(config, logger, sendRequester, report)
		},
		cre.ConsensusIdenticalAggregation[*SubmitResponse](),
	)

	if _, err := submitPromise.Await(); err != nil {
		return nil, err
	}

	logger.Info("Report submitted successfully")
	return &MyResult{}, nil
}
```

## Complete working example

This example shows a workflow that:

1. Generates a report from a single value
2. Submits it to an HTTP API
3.
Uses the simple "report in body" format ```go //go:build wasip1 package main import ( "fmt" "log/slog" "math/big" "time" "github.com/ethereum/go-ethereum/accounts/abi" "github.com/smartcontractkit/chainlink-protos/cre/go/sdk" "github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/cre/wasm" "google.golang.org/protobuf/types/known/durationpb" ) type Config struct { ApiUrl string `json:"apiUrl"` Schedule string `json:"schedule"` } type SubmitResponse struct { Success bool } type MyResult struct{} func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { return cre.Workflow[*Config]{ cre.Handler(cron.Trigger(&cron.Config{Schedule: config.Schedule}), onCronTrigger), }, nil } // Transformation function: defines how the API expects the report func formatReportForMyAPI(r *sdk.ReportResponse) (*http.Request, error) { return &http.Request{ Url: "https://webhook.site/your-unique-id", // Replace with your API Method: "POST", Body: r.RawReport, Headers: map[string]string{ "Content-Type": "application/octet-stream", "X-Report-SeqNr": fmt.Sprintf("%d", r.SeqNr), }, CacheSettings: &http.CacheSettings{ Store: true, // Prevent duplicate submissions MaxAge: durationpb.New(60 * time.Second), // Accept cached responses up to 60 seconds old }, }, nil } // Function that submits the report via HTTP func submitReportViaHTTP(config *Config, logger *slog.Logger, sendRequester *http.SendRequester, report *cre.Report) (*SubmitResponse, error) { logger.Info("Submitting report to API", "url", config.ApiUrl) resp, err := sendRequester.SendReport(*report, formatReportForMyAPI).Await() if err != nil { return nil, fmt.Errorf("failed to send report: %w", err) } logger.Info("Report submitted", "statusCode", resp.StatusCode, "bodyLength", len(resp.Body), ) if resp.StatusCode != 200 { return nil, fmt.Errorf("API error: status=%d, body=%s", resp.StatusCode, string(resp.Body)) } return &SubmitResponse{Success: true}, nil } func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() // Step 1: Generate a report (example: a single uint256 value) myValue := big.NewInt(123456789) logger.Info("Generating report", "value", myValue.String()) // ABI-encode the value uint256Type, err := abi.NewType("uint256", "", nil) if err != nil { return nil, fmt.Errorf("failed to create type: %w", err) } args := abi.Arguments{{Type: uint256Type}} encodedValue, err := args.Pack(myValue) if err != nil { return nil, fmt.Errorf("failed to encode value: %w", err) } // Generate the report reportPromise := runtime.GenerateReport(&cre.ReportRequest{ EncodedPayload: encodedValue, EncoderName: "evm", SigningAlgo: "ecdsa", HashingAlgo: "keccak256", }) report, err := reportPromise.Await() if err != nil { return nil, fmt.Errorf("failed to generate report: %w", err) } logger.Info("Report generated successfully") // Step 2: Submit the report via HTTP client := &http.Client{} submitPromise := http.SendRequest(config, runtime, client, func(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*SubmitResponse, error) { return submitReportViaHTTP(config, logger, sendRequester, report) }, cre.ConsensusIdenticalAggregation[*SubmitResponse](), ) submitResult, err := submitPromise.Await() if err != nil { logger.Error("Failed to submit report", "error", err) 
		return nil, err
	}

	logger.Info("Workflow completed successfully", "submitted", submitResult.Success)
	return &MyResult{}, nil
}

func main() {
	wasm.NewRunner(cre.ParseJSON[Config]).Run(InitWorkflow)
}
```

### Configuration file (`config.json`)

```json
{
  "apiUrl": "https://webhook.site/your-unique-id",
  "schedule": "0 * * * *"
}
```

### Testing with webhook.site

1. Go to [webhook.site](https://webhook.site/) and get a unique URL
2. Update `config.json` with your webhook URL
3. Run the simulation:

   ```bash
   cre workflow simulate my-workflow --target staging-settings
   ```

4. Check webhook.site to see the report data received

## Advanced: Low-level pattern

For complex scenarios where you need more control, use `client.SendReport()` with `cre.RunInNodeMode`:

```go
func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) {
	logger := runtime.Logger()

	// Assume 'report' was generated earlier
	submitPromise := cre.RunInNodeMode(config, runtime,
		func(config *Config, nodeRuntime cre.NodeRuntime) (*SubmitResponse, error) {
			client := &http.Client{}
			resp, err := client.SendReport(nodeRuntime, *report, formatReportSimple).Await()
			if err != nil {
				return nil, fmt.Errorf("failed to send report: %w", err)
			}
			if resp.StatusCode != 200 {
				return nil, fmt.Errorf("API error: %d", resp.StatusCode)
			}
			return &SubmitResponse{Success: true}, nil
		},
		cre.ConsensusIdenticalAggregation[*SubmitResponse](),
	)

	if _, err := submitPromise.Await(); err != nil {
		return nil, err
	}

	logger.Info("Report submitted via the low-level pattern")
	return &MyResult{}, nil
}
```

## Best practices

1. **Always use `CacheSettings`**: Include caching in every transformation function to prevent worst-case duplicate submission scenarios
2. **Implement API-side deduplication**: Your receiving API must implement deduplication using the **hash of the report** (`keccak256(RawReport)`) to detect and reject duplicate submissions (see the sketch after this list)
3. **Verify signatures before processing**: Your API must verify the cryptographic signatures against DON public keys before trusting report data
4. **Match your API's format exactly**: Study your API's documentation to understand the expected format (binary, JSON, headers, etc.)
5. **Handle errors gracefully**: Check HTTP status codes and provide meaningful error messages
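To make the deduplication recommendation concrete, here is a minimal server-side sketch. It assumes the "report in body" format from Pattern 1, an in-memory store (a real API would persist hashes in a database), and go-ethereum's `crypto.Keccak256` helper:

```go
package main

import (
	"encoding/hex"
	"io"
	"net/http"
	"sync"

	"github.com/ethereum/go-ethereum/crypto"
)

// seen tracks report hashes that have already been processed.
var (
	mu   sync.Mutex
	seen = map[string]bool{}
)

func handleReport(w http.ResponseWriter, r *http.Request) {
	rawReport, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}

	// keccak256(RawReport) is identical across all DON nodes,
	// so it works as a deduplication key even though signatures vary.
	id := hex.EncodeToString(crypto.Keccak256(rawReport))

	mu.Lock()
	duplicate := seen[id]
	seen[id] = true
	mu.Unlock()

	if duplicate {
		w.WriteHeader(http.StatusOK) // Acknowledge, but skip reprocessing
		return
	}

	// ... verify signatures against DON public keys, then process the report ...
	w.WriteHeader(http.StatusAccepted)
}
```

Because `RawReport` is byte-identical across nodes, every node's submission maps to the same key: the first request is processed and the rest are acknowledged without reprocessing.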
## Troubleshooting

**"failed to send report" error**

- Verify your API URL is correct and accessible
- Check that your transformation function returns a valid `http.Request`
- Ensure your API can handle binary data if you're sending raw bytes

**API returns 400/422 errors**

- Your report format likely doesn't match what your API expects
- Check if your API expects base64 encoding, JSON wrapping, or specific headers

## Learn more

- **[HTTP Client SDK Reference](/cre/reference/sdk/http-client)** — Complete API reference including `SendReport` and `sdk.ReportResponse`
- **[POST Requests](/cre/guides/workflow/using-http-client/post-request)** — Learn about HTTP request patterns and caching
- **[Generating Reports: Single Values](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-single-values)** — How to create reports from single values
- **[Generating Reports: Structs](/cre/guides/workflow/using-evm-client/onchain-write/generating-reports-structs)** — How to create reports from struct data
- **[Submitting Reports Onchain](/cre/guides/workflow/using-evm-client/onchain-write/submitting-reports-onchain)** — Alternative: Submit reports to smart contracts instead of HTTP

---

# Cron Trigger

Source: https://docs.chain.link/cre/guides/workflow/using-triggers/cron-trigger-go
Last Updated: 2025-11-04

The Cron trigger fires based on a time-based schedule, defined by a standard cron expression.

**Use case examples:**

- Periodically fetching data from an API.
- Regularly checking an onchain state.
- Regularly writing data to an onchain contract.

## Configuration and handler

You create a Cron trigger by calling the [`cron.Trigger`](/cre/reference/sdk/triggers#crontrigger) function and registering it with a handler inside your `InitWorkflow` function.

When you configure a Cron trigger, you must provide a `Schedule` using a standard cron expression. The expression can contain 5 or 6 fields; in the 6-field form, the extra field comes first and represents seconds. For help understanding or creating cron expressions, see [crontab.guru](https://crontab.guru/) (note: this tool supports 5-field expressions; add a seconds field at the beginning for 6-field expressions).

**Examples:**

- Every 30 seconds (6 fields): `*/30 * * * * *`
- Every minute, at second 0 (6 fields): `0 * * * * *`
- Every hour, at the top of the hour (6 fields): `0 0 * * * *`
- Every 5 minutes from 08:00 to 08:59, Monday to Friday (5 fields): `*/5 8 * * 1-5`

### Timezone support

By default, cron expressions use UTC. To specify a different timezone, prefix your cron expression with `TZ=<timezone>`, where `<timezone>` is an IANA timezone identifier (e.g., `America/New_York`, `Europe/London`, `Asia/Tokyo`).

**Examples with timezones:**

- Daily at midnight in New York: `TZ=America/New_York 0 0 * * *`
- Every Sunday at 8 PM in Tokyo: `TZ=Asia/Tokyo 0 20 * * 0`
- Every weekday at 9 AM in London: `TZ=Europe/London 0 9 * * 1-5`

The timezone-aware scheduler automatically handles daylight saving time transitions, ensuring your workflows run at the correct local time throughout the year.
```go import ( "log/slog" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" ) // Callback function that runs when the cron trigger fires func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() scheduledTime := trigger.ScheduledExecutionTime.AsTime() logger.Info("Cron trigger fired", "scheduledTime", scheduledTime) // Your logic here... return &MyResult{}, nil } func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { // Create the trigger - fires every 30 seconds in UTC cronTrigger := cron.Trigger(&cron.Config{Schedule: "*/30 * * * * *"}) // Or use a timezone-aware schedule - fires daily at 9 AM Eastern Time // cronTrigger := cron.Trigger(&cron.Config{Schedule: "TZ=America/New_York 0 9 * * *"}) // Register a handler with the trigger and a callback function return cre.Workflow[*Config]{ cre.Handler(cronTrigger, onCronTrigger), }, nil } ``` ## Callback and payload When a Cron trigger fires, it passes a [`*cron.Payload`](/cre/reference/sdk/triggers/cron-trigger) object to your callback function. This payload contains the scheduled execution time. For the full type definition and all available fields, see the [Cron Trigger SDK Reference](/cre/reference/sdk/triggers/cron-trigger). The `trigger` parameter is always included in the callback function signature. You can access the scheduled execution time using the `AsTime()` method: ```go func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*MyResult, error) { logger := runtime.Logger() scheduledTime := trigger.ScheduledExecutionTime.AsTime() logger.Info("Cron trigger fired", "scheduledTime", scheduledTime) // Your logic here... return &MyResult{}, nil } ``` ## Testing cron triggers in simulation To test your cron trigger during development, you can use the workflow simulator. The simulator executes cron triggers immediately when selected, allowing you to test your logic without waiting for the scheduled time. For detailed instructions on simulating cron triggers, including interactive and non-interactive modes, see the [Cron Trigger section in the Simulating Workflows guide](/cre/guides/operations/simulating-workflows#cron-trigger). --- # EVM Log Trigger Source: https://docs.chain.link/cre/guides/workflow/using-triggers/evm-log-trigger-go Last Updated: 2025-11-04 The EVM Log trigger fires when a specific log (event) is emitted by a smart contract on an EVM-compatible blockchain. This capability allows you to build powerful, event-driven workflows that react to onchain activity. This guide explains the two key parts of working with log triggers: - **[How to configure your workflow to listen for specific events](#configuring-your-trigger)** - **[How to decode the event data your workflow receives](#decoding-the-event-payload)** ## Configuring your trigger There are two methods for configuring a log trigger. Choose the one that best fits your use case: - **[Method 1: Using Binding Helpers](#method-1-using-binding-helpers-recommended):** Use this approach if you are listening for a **single event from a single contract**. This is the recommended and simplest approach for most scenarios. - **[Method 2: Manual Configuration](#method-2-manual-configuration-advanced):** Use this approach when you need to listen for **multiple events** at once, or for the **same event from multiple different contracts**. 
This guide covers both methods in detail below.

### Method 1: Using binding helpers (recommended)

For the most common use case—listening to a single event from a single contract—the recommended approach is to use the helper functions generated by `cre generate-bindings evm`.

The generator creates a `LogTrigger<EventName>Log` method for each event in your contract's ABI. This method is simple, readable, and type-safe. The following example assumes you have generated bindings for a contract that emits a standard ERC20 `Transfer(address,address,uint256)` event.

```go
import (
	"fmt"
	"log/slog"

	"github.com/ethereum/go-ethereum/common"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm/bindings"
	"github.com/smartcontractkit/cre-sdk-go/cre"

	"your-module-name/contracts/evm/src/generated/my_token" // Replace with your module name from go.mod
)

// When using binding helpers, your handler receives a *bindings.DecodedLog with the decoded event data
func onEvmTrigger(config *Config, runtime cre.Runtime, payload *bindings.DecodedLog[my_token.TransferDecoded]) (*MyResult, error) {
	logger := runtime.Logger()

	// Access the decoded event fields directly from payload.Data
	from := payload.Data.From
	to := payload.Data.To
	value := payload.Data.Value

	logger.Info("Transfer detected",
		"from", from.Hex(),
		"to", to.Hex(),
		"value", value.String(),
		"blockNumber", payload.Log.BlockNumber.String(),
	)

	return &MyResult{}, nil
}

func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) {
	// Create an EVM client for the chain you want to monitor
	chainSelector, err := evm.ChainSelectorFromName(config.ChainName)
	if err != nil {
		return nil, fmt.Errorf("failed to get chain selector: %w", err)
	}
	evmClient := &evm.Client{
		ChainSelector: chainSelector,
	}

	// Initialize your contract binding (note: returns 2 values)
	contractAddress := common.HexToAddress(config.TokenAddress)
	myTokenContract, err := my_token.NewMyToken(evmClient, contractAddress, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to create contract binding: %w", err)
	}

	// Use the generated helper to create the trigger for the Transfer event
	logTrigger, err := myTokenContract.LogTriggerTransferLog(
		chainSelector, // The chain to monitor (as uint64)
		evm.ConfidenceLevel_CONFIDENCE_LEVEL_FINALIZED, // The confidence level
		[]my_token.Transfer{},                          // Empty slice = listen to all Transfer events
	)
	if err != nil {
		return nil, fmt.Errorf("failed to create log trigger: %w", err)
	}

	// Register the handler that will be called when the event is detected
	return cre.Workflow[*Config]{
		cre.Handler(logTrigger, onEvmTrigger),
	}, nil
}
```

#### Understanding the `DecodedLog` payload

The `*bindings.DecodedLog[T]` payload has two main parts:

- **`payload.Data`**: The decoded event struct with your event fields (e.g., `From`, `To`, `Value`)
- **`payload.Log`**: The raw log metadata (block number, transaction hash, log index, etc.)
**Example: Accessing decoded event data**

```go
func onEvmTrigger(config *Config, runtime cre.Runtime, payload *bindings.DecodedLog[my_token.TransferDecoded]) (*MyResult, error) {
	logger := runtime.Logger()

	// Access decoded event fields from payload.Data
	from := payload.Data.From   // address (common.Address)
	to := payload.Data.To       // address (common.Address)
	value := payload.Data.Value // uint256 (*big.Int)

	// Access raw log metadata from payload.Log
	blockNumber := payload.Log.BlockNumber // *pb.BigInt
	txHash := payload.Log.TxHash           // []byte (32-byte transaction hash)
	logIndex := payload.Log.Index          // uint32

	logger.Info("Transfer detected",
		"from", from.Hex(),
		"to", to.Hex(),
		"value", value.String(),
		"blockNumber", blockNumber.String(),
		"txHash", common.BytesToHash(txHash).Hex(),
		"logIndex", logIndex,
	)

	return &MyResult{}, nil
}
```

#### Filtering by indexed parameters with the helper

The third parameter of the `LogTrigger<EventName>Log` function (`filters`) allows you to create filters based on the values of `indexed` event parameters.

- **`indexed` Parameters Only:** You can only filter on parameters marked as `indexed` in the Solidity event definition. Filtering on non-indexed parameters must be done inside your workflow after the event is decoded.
- **Filter Logic:** The helper supports both **AND** and **OR** conditions:
  - It creates an **AND** condition between *different* indexed parameters (e.g., `From` AND `To`).
  - It creates an **OR** condition for *multiple values* provided for the *same* indexed parameter.

**Example 1: "AND" Filtering (`from` Alice "AND" `to` Bob)**

To trigger only on transfers from a specific sender to a specific receiver, provide both values in the same struct literal.

```go
// This trigger will only fire for transfers FROM Alice AND TO Bob.
logTrigger, err := myTokenContract.LogTriggerTransferLog(
	config.ChainSelector,
	evm.ConfidenceLevel_CONFIDENCE_LEVEL_FINALIZED,
	[]my_token.Transfer{
		{From: common.HexToAddress("0xAlice"), To: common.HexToAddress("0xBob")},
	},
)
```

**Example 2: "OR" Filtering (`from` Alice "OR" `from` Charlie)**

To trigger on a transfer from one of several possible senders, provide multiple struct literals, each with a value for the same field.

```go
// This trigger will fire for any transfer sent by Alice OR Charlie.
logTrigger, err := myTokenContract.LogTriggerTransferLog(
	config.ChainSelector,
	evm.ConfidenceLevel_CONFIDENCE_LEVEL_FINALIZED,
	[]my_token.Transfer{
		{From: common.HexToAddress("0xAlice")},
		{From: common.HexToAddress("0xCharlie")},
	},
)
```

**Example 3: "AND/OR" Filtering**

You can combine **AND** and **OR** conditions for even more precise filtering. The following example triggers if a `Transfer` is (`from` Alice **AND** `to` Bob) **OR** (`from` Charlie **AND** `to` David).

```go
logTrigger, err := myTokenContract.LogTriggerTransferLog(
	config.ChainSelector,
	evm.ConfidenceLevel_CONFIDENCE_LEVEL_FINALIZED,
	[]my_token.Transfer{
		{From: common.HexToAddress("0xAlice"), To: common.HexToAddress("0xBob")},
		{From: common.HexToAddress("0xCharlie"), To: common.HexToAddress("0xDavid")},
	},
)
```

For more complex, grouped **AND/OR** conditions (e.g., (`from` Alice **AND** `to` Bob) **OR** (`from` Charlie **AND** `to` David)), you must use the manual configuration method below.

#### Confidence Level

The second parameter of the `LogTrigger<EventName>Log` function specifies the block confirmation level to wait for before triggering.
For more details on the available levels, see the [`evm.FilterLogTriggerRequest`](/cre/reference/sdk/triggers/evm-log-trigger-go#evmfilterlogtriggerrequest) reference.

### Method 2: Manual configuration (advanced)

For more complex scenarios, you can manually construct the `evm.FilterLogTriggerRequest`. This method is necessary when you need to listen to **multiple events** or the **same event from multiple contracts** with a single trigger.

| Feature | Using Binding Helpers | Manual Configuration |
| :--- | :--- | :--- |
| **Primary use case** | Filtering on a **single event type** from a **single contract** | Filtering across **multiple event types** or the **same event from multiple contracts** |
| **Simplicity & readability** | ✅ High | ❌ Low |
| **Type safety** | ✅ High | ❌ Low (manual hash management) |

The main advantage of manual configuration is its flexible filtering, achieved by directly manipulating the fields of the `evm.FilterLogTriggerRequest` struct. Here is a simplified view of its structure:

```go
logTriggerCfg := &evm.FilterLogTriggerRequest{
	Addresses: [][]byte{ ... },
	Topics:    []*evm.TopicValues{ ... },
}
```

To manually configure a trigger, you specify the contract addresses and the event topics to filter by. The `Topics` array is where you define both the **event type** you want to listen for, and any conditions on its `indexed` parameters.

#### Understanding topic filtering

An EVM log filter uses these fields to create precise rules:

- **The `Addresses` List:** The trigger will fire if the event is emitted from **any** contract in this list (**OR** logic).
- **The `Topics` Array:** An event must match the conditions for **all** defined topic slots (**AND** logic between topics). Within a single topic, you can provide a list of values, and it will match if the event's topic is **any** of those values (**OR** logic within a topic).

This **AND**/**OR** logic is what enables advanced patterns.

The first and most important topic, `Topics[0]`, is used to filter by **event type**. Its value should be the Keccak-256 hash of the event's signature. For example, the signature for a standard ERC20 transfer is `"Transfer(address,address,uint256)"`. You provide the hash of this string as `Topics[0]` to filter for only `Transfer` events. The subsequent topics, `Topics[1]`, `Topics[2]`, and `Topics[3]`, are then used to filter on that event's `indexed` parameters.

#### Filtering by indexed parameters

**Example 1: "AND" filtering**

To trigger only on a `Transfer` from Alice **AND** to Bob, you must manually set `Topics[1]` (for the first indexed parameter, `from`) and `Topics[2]` (for the second, `to`).

```go
logTriggerCfg := &evm.FilterLogTriggerRequest{
	Addresses: [][]byte{common.HexToAddress("0xYourTokenContract").Bytes()},
	Topics: []*evm.TopicValues{
		{Values: [][]byte{myTokenContract.Codec.TransferLogHash()}}, // Topics[0]: Event signature must be Transfer
		{Values: [][]byte{common.HexToAddress("0xAlice").Bytes()}},  // Topics[1]: `from` must be Alice
		{Values: [][]byte{common.HexToAddress("0xBob").Bytes()}},    // Topics[2]: `to` must be Bob
	},
}
```

**Example 2: "OR" filtering**

You can use the **OR** logic within a topic or across the `Addresses` list to monitor a broader set of onchain activities.
**Example 2.A: Listening to multiple event types**

To trigger on either a `Transfer` **OR** an `Approval` from a single contract, you provide multiple values for `Topics[0]`:

```go
logTriggerCfg := &evm.FilterLogTriggerRequest{
	Addresses: [][]byte{common.HexToAddress("0xYourContractAddress").Bytes()},
	Topics: []*evm.TopicValues{
		{
			// Topic 0: The event signature
			Values: [][]byte{
				myTokenContract.Codec.TransferLogHash(), // either a Transfer...
				myTokenContract.Codec.ApprovalLogHash(), // ...OR an Approval.
			},
		},
	},
}
```

**Example 2.B: Listening to the same event from multiple contracts**

To trigger on a `Transfer` event from `TokenA` OR `TokenB` OR `TokenC`, you provide multiple addresses in the `Addresses` list:

```go
logTriggerCfg := &evm.FilterLogTriggerRequest{
	// Listen for events from ANY of these addresses
	Addresses: [][]byte{
		common.HexToAddress("0xTokenContract_A").Bytes(),
		common.HexToAddress("0xTokenContract_B").Bytes(),
		common.HexToAddress("0xTokenContract_C").Bytes(),
	},
	Topics: []*evm.TopicValues{
		{
			// Topic for the Transfer event signature
			Values: [][]byte{myTokenContract.Codec.TransferLogHash()},
		},
	},
}
```

**Example 3: "AND/OR" filtering**

You can combine all of these techniques to create highly specific filters. For example, to trigger a workflow if a `Transfer` event is emitted by:

- either `TokenA` **OR** `TokenB`,
- **AND** that transfer is TO your Vault,
- **AND** it is FROM either `Alice` **OR** `Charlie`:

```go
logTriggerCfg := &evm.FilterLogTriggerRequest{
	// OR condition on addresses
	Addresses: [][]byte{
		common.HexToAddress("0xTokenContract_A").Bytes(),
		common.HexToAddress("0xTokenContract_B").Bytes(),
	},
	Topics: []*evm.TopicValues{
		// AND condition for Topic 0 (must be a Transfer)
		{Values: [][]byte{myTokenContract.Codec.TransferLogHash()}},
		// AND condition for Topic 1 (`from` must be...)
		{Values: [][]byte{
			common.HexToAddress("0xAlice").Bytes(),   // ...Alice OR
			common.HexToAddress("0xCharlie").Bytes(), // ...Charlie
		}},
		// AND condition for Topic 2 (`to` must be your vault)
		{Values: [][]byte{common.HexToAddress("0xYourVaultAddress").Bytes()}},
	},
}
```

#### Confidence Level

You can set the block confirmation level by adding the `Confidence` field to the `evm.FilterLogTriggerRequest` struct. See the [reference](/cre/reference/sdk/triggers/evm-log-trigger-go#evmfilterlogtriggerrequest) for more details on the available levels.

## Decoding the event payload

What your handler receives depends on how you configured your trigger:

- **If you used [binding helpers](#method-1-using-binding-helpers-recommended):** Your handler automatically receives `*bindings.DecodedLog[<EventName>Decoded]` with the event data already decoded. **You don't need to do anything else** - just access `payload.Data.<FieldName>`.
- **If you used [manual configuration](#method-2-manual-configuration-advanced):** Your handler receives a raw `*evm.Log` struct that you must decode yourself. The sections below show you how.

For the full type definition of `*evm.Log` and all available fields, see the [EVM Log Trigger SDK Reference](/cre/reference/sdk/triggers/evm-log-trigger-go#evmlog).

### Method 1: Using the binding codec (for manual configuration)

If you used manual configuration but have bindings for the contract that emitted the event, you should use the generated `Codec` to decode the `*evm.Log` payload your handler receives. The Codec provides a safe, simple, and type-safe way to get your data.
```go import ( "your-module-name/contracts/evm/src/generated/my_token" "github.com/ethereum/go-ethereum/common" "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm" "github.com/smartcontractkit/cre-sdk-go/cre" ) // The `onEvmTrigger` handler receives the raw event log. func onEvmTrigger(config *Config, runtime cre.Runtime, log *evm.Log) (*MyResult, error) { logger := runtime.Logger() // Assume `myTokenContract` is an initialized instance of your contract binding. // Check which event was received by comparing the first topic to known event topics. eventSignature := common.BytesToHash(log.Topics[0]) switch eventSignature { case common.BytesToHash(myTokenContract.Codec.TransferLogHash()): // It's a Transfer! Use the safe, generated decoder. transferEvent, err := myTokenContract.Codec.DecodeTransfer(log) if err != nil { /* handle error */ } logger.Info("Transfer detected", "from", transferEvent.From, "to", transferEvent.To) case common.BytesToHash(myTokenContract.Codec.ApprovalLogHash()): // It's an Approval! Use the safe, generated decoder. approvalEvent, err := myTokenContract.Codec.DecodeApproval(log) if err != nil { /* handle error */ } logger.Info("Approval detected", "owner", approvalEvent.Owner, "spender", approvalEvent.Spender) } return &MyResult{}, nil } ``` ### Method 2: Manual decoding (for manual configuration without bindings) If you used manual configuration and are interacting with a third-party contract for which you do not have bindings, you must manually parse the raw byte arrays in `log.Topics` (for `indexed` fields) and `log.Data` (for non-indexed fields). The following example shows how to manually parse a standard ERC20 `Transfer(address indexed from, address indexed to, uint256 value)` event. ```go import "github.com/ethereum/go-ethereum/accounts/abi" import "github.com/ethereum/go-ethereum/common" import "math/big" import "fmt" type MyResult struct{} func onEvmTrigger(config *Config, runtime cre.Runtime, log *evm.Log) (*MyResult, error) { logger := runtime.Logger() // Manually parse the indexed topics. `Topics[0]` is the event signature. // An indexed `address` is a 32-byte value; we slice the last 20 bytes to get the actual address. fromAddress := common.BytesToAddress(log.Topics[1][12:]) toAddress := common.BytesToAddress(log.Topics[2][12:]) // Manually parse the non-indexed data using the go-ethereum ABI package. var value *big.Int uint256Type, _ := abi.NewType("uint256", "", nil) decodedData, err := abi.Arguments{{Type: uint256Type}}.Unpack(log.Data) if err != nil { return nil, fmt.Errorf("failed to unpack log data: %w", err) } value = decodedData[0].(*big.Int) logger.Info( "Manual transfer decode successful", "from", fromAddress.Hex(), "to", toAddress.Hex(), "value", value.String(), ) // ... Your logic here ... return &MyResult{}, nil } ``` ## Testing log triggers in simulation To test your EVM log trigger during development, you can use the workflow simulator with a transaction hash and event index. The simulator fetches the log from your configured RPC and passes it to your callback function. For detailed instructions on simulating EVM log triggers, including interactive and non-interactive modes, see the [EVM Log Trigger section in the Simulating Workflows guide](/cre/guides/operations/simulating-workflows#evm-log-trigger). --- # HTTP Trigger Source: https://docs.chain.link/cre/guides/workflow/using-triggers/http-trigger-go Last Updated: 2025-11-04 The HTTP trigger fires when an external system makes an HTTP request to the trigger endpoint. 
**Use case examples:** - Integrating with existing web services or webhooks. - Allowing an external system to initiate a workflow on demand. - Creating a user-facing endpoint to run a specific piece of logic. ## Configuration and handler You create an HTTP trigger by calling the [`http.Trigger`](/cre/reference/sdk/triggers#httptrigger) function. Its configuration ([`http.Config`](/cre/reference/sdk/triggers#httpconfig)) requires a set of authorized public keys to validate incoming request signatures. ```go import ( "encoding/json" "fmt" "log/slog" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http" ) // Callback function that runs when an HTTP request is received func onHttpTrigger(config *Config, runtime cre.Runtime, payload *http.Payload) (*MyResult, error) { logger := runtime.Logger() // Unmarshal the JSON input from bytes var requestData map[string]interface{} if err := json.Unmarshal(payload.Input, &requestData); err != nil { return nil, fmt.Errorf("failed to unmarshal input: %w", err) } logger.Info("HTTP trigger received", "data", requestData) // Your logic here... return &MyResult{Message: "Request processed"}, nil } func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { authorizedKeys := []*http.AuthorizedKey{ { Type: http.KeyType_KEY_TYPE_ECDSA_EVM, PublicKey: config.HTTPPublicKey, }, } httpTrigger := http.Trigger(&http.Config{ AuthorizedKeys: authorizedKeys, }) return cre.Workflow[*Config]{ cre.Handler(httpTrigger, onHttpTrigger), }, nil } ``` **About authorized keys:** - **`PublicKey`**: An EVM address (e.g., `"0xb08E004bd2b5aFf1F5F950d141f449B1c05800eb"`) that is authorized to trigger the workflow - **`Type`**: Must be `http.KeyType_KEY_TYPE_ECDSA_EVM` (currently the only supported authentication method) - **Multiple keys**: You can include multiple authorized addresses in the slice When an HTTP request is made to your workflow's endpoint, CRE verifies that the request was signed by a private key corresponding to one of the authorized addresses. ## Callback and payload The HTTP trigger passes a [`*http.Payload`](/cre/reference/sdk/triggers/http-trigger#httppayload) to your callback. This object contains the request body (`Input`) and the signing key (`Key`) from the incoming HTTP request. For the full type definition and all available fields, see the [HTTP Trigger SDK Reference](/cre/reference/sdk/triggers/http-trigger). ```go func onHttpTrigger(config *Config, runtime cre.Runtime, payload *http.Payload) (*MyResult, error) { logger := runtime.Logger() // The payload.Input is []byte containing JSON data. // Unmarshal it into a map or a custom struct. var requestData map[string]interface{} if err := json.Unmarshal(payload.Input, &requestData); err != nil { return nil, fmt.Errorf("failed to unmarshal input: %w", err) } logger.Info("Received HTTP request", "data", requestData) // Your logic here... // The value returned from your callback will be sent back as the HTTP response. return &MyResult{Message: "Request processed"}, nil } ``` ## Testing HTTP triggers in simulation To test your HTTP trigger during development, you can use the workflow simulator with a JSON payload. The simulator allows you to provide test data either directly as a JSON string or from a file. 
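For example, a minimal test payload for the handler above could be a small JSON document (the field names here are arbitrary, since the handler unmarshals the body into a generic map):

```json
{
  "message": "hello from the simulator",
  "value": 42
}
```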
For detailed instructions on simulating HTTP triggers, including interactive and non-interactive modes, see the [HTTP Trigger section in the Simulating Workflows guide](/cre/guides/operations/simulating-workflows#http-trigger).

---

# Project Setup Commands

Source: https://docs.chain.link/cre/reference/cli/project-setup-go
Last Updated: 2025-11-04

The project setup commands include `cre init` to initialize new CRE projects or add workflows, and `cre generate-bindings` to generate contract bindings for type-safe contract interactions.

## `cre init`

Initializes a new CRE project or adds a workflow to an existing project. The behavior depends on your current directory:

- **In a directory without a project**: Creates a new project with the first workflow
- **In an existing project directory**: Adds a new workflow to the existing project

**Usage:**

```bash
cre init [flags]
```

**Flags:**

| Flag | Description |
| --------------------- | ---------------------------------- |
| `-p, --project-name` | Name for the new project |
| `-t, --template-id` | ID of the workflow template to use |
| `-w, --workflow-name` | Name for the new workflow |

**Interactive mode (recommended):**

Running `cre init` without flags starts an interactive setup that guides you through the process:

1. **Project name** (only if creating a new project)
2. **Language** (Go or TypeScript)
3. **Workflow template** (example templates for the chosen language)
4. **Workflow name**

**Example:**

```bash
# Interactive setup
cre init
```

**Non-interactive mode:**

```bash
# Create a new project with initial workflow
cre init \
  --project-name my-cre-project \
  --workflow-name my-workflow \
  --template-id 1
```

For a detailed walkthrough, see [Part 1 of the Getting Started guide](/cre/getting-started/part-1-project-setup).

## `cre generate-bindings`

Generates Go bindings from contract ABI files. This enables type-safe contract interactions in your Go workflows.

**Usage:**

```bash
cre generate-bindings <chain-family> [flags]
```

**Arguments:**

- `<chain-family>` — (Required) Chain family. Currently supports: `evm`

**Flags:**

| Flag | Description |
| -------------------- | ---------------------------------------------------------------------------------- |
| `-a, --abi` | Path to ABI directory (default: `"contracts/{chain-family}/src/abi/"`) |
| `-l, --language` | Target language (default: `"go"`) |
| `-k, --pkg` | Base package name; each contract gets its own subdirectory (default: `"bindings"`) |
| `-p, --project-root` | Path to project root directory (default: current directory) |

**How it works:**

- Supports the EVM chain family and the Go language
- Each contract gets its own package subdirectory to avoid naming conflicts
- For example, `IERC20.abi` generates bindings in the `generated/ierc20/` package
- Generated bindings are placed in `contracts/{chain-family}/src/generated/`
- For each contract, two files are generated:
  - `<contract_name>.go` — Main binding for contract interaction
  - `<contract_name>_mock.go` — Mock implementation for testing workflows without deployed contracts

**Examples:**

- Generate bindings for all ABI files in the default directory

  ```bash
  cre generate-bindings evm
  ```

- Generate bindings with a custom ABI directory

  ```bash
  cre generate-bindings evm --abi ./custom-abis
  ```

- Generate bindings with a custom package name

  ```bash
  cre generate-bindings evm --pkg mycontracts
  ```

For a detailed guide, see [Generating Contract Bindings](/cre/guides/workflow/using-evm-client/generating-bindings).

## Project initialization workflow

The typical project setup flow for Go workflows:

1.
**`cre init`** — Create a new project or add a workflow (interactive or with flags) 2. **Add contract ABIs** — Place ABI files in `contracts/evm/src/abi/` 3. **`cre generate-bindings evm`** — Generate Go bindings from ABIs 4. **Sync dependencies** — Run `go mod tidy` to update your `go.mod` 5. **Start development** — Write your workflow code ## Learn more - [Part 1: Project Setup](/cre/getting-started/part-1-project-setup) — Step-by-step tutorial for initializing projects - [Generating Contract Bindings](/cre/guides/workflow/using-evm-client/generating-bindings) — Detailed guide for Go bindings - [Project Configuration](/cre/reference/project-configuration) — Understanding `project.yaml` and `workflow.yaml` --- # Forwarder Addresses Source: https://docs.chain.link/cre/reference/general/forwarder-addresses [COMING SOON] When you [write data to a smart contract](/cre/guides/workflow/using-evm-client/onchain-write), your workflow does not call the contract directly. Instead, it submits a report to a `KeystoneForwarder` contract, which then validates the report and calls the `onReport()` function on your [consumer contract](/cre/guides/workflow/using-evm-client/onchain-write/building-consumer-contracts). Below is a list of the deployed `KeystoneForwarder` contract addresses and their corresponding Chain Selectors for various testnets. Use the address matching the network where your consumer contract is deployed. | Network | Chain Selector | Forwarder Address | | :------ | :------------- | :---------------- | | XXX | `XXX` | `XXX` | --- # General Reference Source: https://docs.chain.link/cre/reference/general [COMING SOON] This section provides general reference material for CRE. --- # CRE Limits Source: https://docs.chain.link/cre/reference/general/limits [COMING SOON] This page outlines the key limits in CRE that may impact your usage. Each table explains the limit, its default value, how you can configure it (if applicable), and what it means in practice. --- # Project Configuration Source: https://docs.chain.link/cre/reference/project-configuration-go Last Updated: 2025-11-04 This page explains how to manage configuration within Chainlink Runtime Environment (CRE) projects. It covers the standard project structure, the roles and usage of the configuration files (`project.yaml` and `workflow.yaml`), and the concept of **targets** for handling environment-specific settings. You will understand: - The recommended directory structure for CRE projects and the significance of key files. - How to define global settings in `project.yaml` and override them with workflow-specific settings in `workflow.yaml`. - The purpose of targets and how they enable seamless switching between different operational environments. - The process by which the CRE CLI resolves and merges target configurations. ## 1. Glossary | Term | Definition | | -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Project** | A folder that groups one or more workflows plus shared files such as `project.yaml`, `.env`, and `.gitignore`. | | **Workflow** | A sub-folder that contains everything needed to run *one* workflow (code, build artifacts, and the optional `workflow.yaml`). | | **Config** | Any YAML file that lets you adjust how a workflow behaves (timeouts, trigger frequency, etc.). | | **Settings** | Values that describe the runtime environment. 
Project settings live in `project.yaml`; workflow settings live in `workflow.yaml`. If both define the same key, the workflow value wins. |
| **Target** | A named set of settings (e.g. `staging-settings`, `production-settings`). Switching the target switches networks, RPC URLs, contract addresses, and other environment-specific values in one step. |
| `secrets.yaml` | A project-level file that defines logical names for secrets and maps them to environment variables. |
| **ABI** | Application Binary Interface - a JSON file that describes a smart contract's functions, events, and data structures. |
| **Bindings** | Type-safe Go packages automatically generated from contract ABIs, enabling easy interaction with smart contracts from workflows. |

## 2. Project structure

A typical CRE Go project is organized as follows:

```text
myProject/
├── go.mod                  # Go module definition
├── go.sum                  # Go dependency checksums
├── .env                    # Secret values (never commit to a Version Control System like Git)
├── .gitignore
├── project.yaml            # Global configuration
├── secrets.yaml            # Secret name declarations
├── contracts/              # Contract-related files
│   └── evm/
│       └── src/
│           ├── abi/        # Contract ABI files (.abi JSON format)
│           └── generated/  # Auto-generated Go bindings
├── workflow1/
│   ├── workflow.yaml       # Workflow-specific configuration (optional)
│   ├── main.go             # Your workflow code
│   └── …
├── workflow2/
│   ├── workflow.yaml       # Workflow-specific configuration (optional)
│   ├── main.go             # Your workflow code
│   └── …
└── …
```

- `go.mod` / `go.sum`: **Go module definition** and dependency management for the entire project.
- `project.yaml`: **Global settings**, shared by every workflow in the project.
- `workflow.yaml`: **Local settings** for a single workflow. Add this file only when the workflow needs overrides.
- `secrets.yaml`: **Secret declarations**, a manifest of logical secret names used across the project.
- `contracts/evm/src/abi/`: **Contract ABIs**, where you place `.abi` JSON files for contract binding generation.
- `contracts/evm/src/generated/`: **Generated Go bindings**, automatically created by the CRE CLI from your ABI files using `cre generate-bindings`.

## 3. Configuration files

### 3.1. Global configuration (`project.yaml`)

`project.yaml` holds everything that rarely changes across workflows, such as RPC endpoints.

```yaml
# project.yaml
staging-settings:
  rpcs:
    - chain-name: ethereum-testnet-sepolia
      url: https://ethereum-sepolia-rpc.publicnode.com

# You can define other targets for future use
production-settings:
  account:
    workflow-owner-address: "0x..." # Optional: For multi-sig wallets
  rpcs:
    - chain-name: ethereum-testnet-sepolia
      url: https://ethereum-sepolia-rpc.publicnode.com
    - chain-name: ethereum-mainnet
      url: https://mainnet.infura.io/v3/<YOUR_API_KEY>
```

#### Configuration fields

| Field | Required | When to use | Description |
| :--- | :--- | :--- | :--- |
| `account.workflow-owner-address` | No | Multi-sig only | Multi-sig wallet address that owns the workflow. Required when using the `--unsigned` flag to generate unsigned transactions. For standard (non-multi-sig) deployments, omit this field—the CLI uses the address from `CRE_ETH_PRIVATE_KEY`. See [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets). |
 See [Using Multi-sig Wallets](/cre/guides/operations/using-multisig-wallets). |
| `rpcs` | Yes | Always (if using EVM) | Array of RPC endpoints for chains your workflow interacts with. Each entry requires `chain-name` (e.g., `ethereum-mainnet`) and `url` (the RPC endpoint). **Required for both simulation and deployed workflows** that use EVM capabilities. The CLI pre-populates a public Sepolia RPC URL by default. |

### 3.2. Workflow configuration (`workflow.yaml`)

`workflow.yaml` captures details **unique to one workflow instance**, like its name.

```yaml
# workflow.yaml
staging-settings:
  user-workflow:
    workflow-name: "my-por-workflow-staging"
  workflow-artifacts:
    workflow-path: "." # Points to the workflow directory
    config-path: "./config.staging.json"
    secrets-path: "" # Empty if not using secrets

production-settings:
  user-workflow:
    workflow-owner-address: "" # Optional: For multi-sig wallets
    workflow-name: "my-por-workflow-production"
  workflow-artifacts:
    workflow-path: "." # Points to the workflow directory
    config-path: "./config.production.json"
    secrets-path: "" # e.g. "../secrets.yaml" if using secrets
```

#### Configuration fields

| Field | Required | Description |
| ------------- | -------- | ----------- |
| `user-workflow.workflow-name` | Yes | The name of your workflow. This name is used to identify the workflow when deploying, activating, or managing it via CLI commands. Best practice: include an environment suffix (e.g., `-staging`, `-production`). |
| `user-workflow.workflow-owner-address` | No | Multi-sig wallet address (if applicable). Overrides the value from `project.yaml` for this specific workflow. See the `project.yaml` configuration table above for details. |
| `workflow-artifacts.workflow-path` | Yes | Path to your workflow entry point. For Go: `"."` (current directory). For TypeScript: `"./main.ts"` (or your entry file name). |
| `workflow-artifacts.config-path` | Yes | Path to your workflow configuration JSON file (e.g., `"./config.staging.json"` or `"./config.production.json"`). |
| `workflow-artifacts.secrets-path` | No | Path to your secrets YAML file (e.g., `"../secrets.yaml"`). Use `""` (empty string) if not using secrets. See [Secrets Management](/cre/guides/workflow/secrets) for details. |

### 3.3. Secrets configuration (`secrets.yaml`)

`secrets.yaml` is an optional project-level file that maps logical secret names to environment variables. This allows you to decouple the secret names used in your code from the actual environment variable names.

```yaml
# secrets.yaml (at project root)
secretsNames:
  DATA_SOURCE_API_KEY:
    - DATA_SOURCE_API_KEY_ENV
```

To use secrets in your workflow, reference this file in your `workflow.yaml`:

```yaml
workflow-artifacts:
  secrets-path: "../secrets.yaml"
```

For simulation, secret values are loaded from your `.env` file or environment variables. For deployed workflows, secrets are managed through the Vault DON using `cre secrets` commands.

For complete details on using secrets in your workflows, see [Secrets Management](/cre/guides/workflow/secrets).

## 4. Targets

A target is a top-level key inside both `project.yaml` and `workflow.yaml`. It bundles all settings for a single environment or variant. The CLI selects the active target using the `--target` flag. Example:

```sh
# Simulate using the 'staging-settings' target
cre workflow simulate my-workflow --target staging-settings
```

Alternatively, you can set the `CRE_TARGET` environment variable to specify the target. The CLI picks the active target in this order:

1. The `--target` flag
2. The `CRE_TARGET` environment variable

### 4.1. Defining Multiple Targets

You can store many targets in one file. This is useful for managing different environments (like simulation vs. production) or for testing variations of a workflow.

```yaml
# In project.yaml

# Target for simulation and testing
staging-settings:
  rpcs:
    - chain-name: ethereum-testnet-sepolia
      url: https://ethereum-sepolia-rpc.publicnode.com

# Target for production deployment
production-settings:
  account:
    workflow-owner-address: "0x123..." # Optional: For multi-sig wallets
  rpcs:
    - chain-name: ethereum-mainnet
      url: https://mainnet.infura.io/v3/
```

### 4.2. 
How Target Resolution Works When you run a CLI command with a target, e.g., `--target staging-settings`: 1. Load the `staging-settings` target from `project.yaml`. 2. Load the same target from `workflow.yaml` (if the file exists and contains a matching target definition). 3. Merge the two objects. - Keys present in both files → value from `workflow.yaml` wins. - Keys present in only one file → that value is used. 4. Validate the final object before execution. --- # SDK Reference: Consensus & Aggregation Source: https://docs.chain.link/cre/reference/sdk/consensus-go Last Updated: 2025-11-04 Aggregation is the process of taking many results from individual nodes and reducing them to a single, reliable value. This aggregated value is what the DON reaches consensus on. When you run code on individual nodes using [`cre.RunInNodeMode`](/cre/reference/sdk/core/#creruninnodemode), you must provide an aggregation strategy to tell the DON how to produce this single, trustworthy outcome. This is achieved using a `ConsensusAggregation`. ## `ConsensusAggregation[T]` This is a generic interface passed as the final argument to `cre.RunInNodeMode`. It defines the aggregation strategy and an optional default value to be used if the node-level execution fails. There are two primary ways to specify an aggregation method: 1. [**Using Built-in Functions**](/cre/reference/sdk/consensus-go/#1-built-in-aggregation-functions): For simple types, you can use functions like [`ConsensusMedianAggregation`](/cre/reference/sdk/consensus/#consensusmedianaggregationt). 2. [**Using Struct Tags**](/cre/reference/sdk/consensus-go/#2-aggregation-via-struct-tags): For complex types (structs), you can use [`ConsensusAggregationFromTags`](/cre/reference/sdk/consensus/#aggregation-via-struct-tags). ## 1. Built-in aggregation functions These functions are used for simple, single-value aggregations. ### `ConsensusMedianAggregation[T]` Computes the median of numeric results from all nodes. **Supported Types (`T`):** Any standard Go numeric type (`int`, `int32`, `float64`, etc.), `*big.Int`, `decimal.Decimal`, and `time.Time`. **Usage:** ```go pricePromise := cre.RunInNodeMode(config, runtime, fetchPrice, cre.ConsensusMedianAggregation[float64](), ) ``` ### `ConsensusIdenticalAggregation[T]` Ensures that a sufficient majority of nodes (a Byzantine Quorum) return the exact same value. **Supported Types (`T`):** Any primitive Go type (`string`, `bool`, standard numeric types), `*big.Int`, `decimal.Decimal`, or structs composed entirely of these types. **Usage:** ```go hashPromise := cre.RunInNodeMode(config, runtime, fetchBlockHash, cre.ConsensusIdenticalAggregation[string](), ) ``` ### `ConsensusCommonPrefixAggregation[T]` Computes the longest common prefix from a slice of values from all nodes. This is useful for finding the longest shared sequence at the beginning of a list. **Supported Types (`T`):** Any slice or array of a type supported by `ConsensusIdenticalAggregation` (e.g., `[]string`, `[]int`). **Usage:** ```go // Assume fetchBlockHeaders returns a slice of block hashes for a chain fork aggregation, err := cre.ConsensusCommonPrefixAggregation[string]()() if err != nil { return "", err } blockHeadersPromise := cre.RunInNodeMode(config, runtime, fetchBlockHeaders, aggregation) ``` ### `ConsensusCommonSuffixAggregation[T]` Computes the longest common suffix from a slice of values from all nodes. This is useful for finding the longest shared sequence at the end of a list. 
**Supported Types (`T`):** Any slice or array of a type supported by `ConsensusIdenticalAggregation` (e.g., `[]string`, `[]int`). **Usage:** ```go // Assume fetchRecentTransactions returns a slice of recent transaction IDs aggregation, err := cre.ConsensusCommonSuffixAggregation[string]()() if err != nil { return "", err } recentTxsPromise := cre.RunInNodeMode(config, runtime, fetchRecentTransactions, aggregation) ``` ## 2. Aggregation via struct tags For structs, the recommended approach is to use `ConsensusAggregationFromTags`. This function inspects the `consensus_aggregation` struct tags on your return type to determine how to aggregate each field. **Usage:** ```go type ReserveInfo struct { LastUpdated time.Time `json:"lastUpdated" consensus_aggregation:"median"` TotalReserve decimal.Decimal `json:"totalReserve" consensus_aggregation:"median"` } func onTrigger(config *Config, runtime cre.Runtime, ...) (string, error) { // ... reserveInfoPromise := cre.RunInNodeMode(config, runtime, fetchReserveData, // The SDK inspects the ReserveInfo struct to determine the aggregation strategy. cre.ConsensusAggregationFromTags[*ReserveInfo](), ) // ... } ``` ### Supported struct tags You can apply the `consensus_aggregation` tag to the fields of your struct. | Tag Value | Description | Compatible Field Types | | --------------- | -------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | | `median` | Computes the median of the field's value across all nodes. | Numeric types (`int`, `*big.Int`, etc.) | | `identical` | Ensures the field's value is identical across all nodes. | Primitives (`string`, `bool`), structs | | `nested` | Instructs the aggregator to recursively inspect the fields of a nested struct for more `consensus_aggregation` tags. | Structs | | `ignore` | This field will be ignored during consensus and its value will be indeterminate. | Any | | `common_prefix` | Finds the longest common prefix for a slice of values from all nodes. | Slices (`[]string`, `[]int`, etc.) | | `common_suffix` | Finds the longest common suffix for a slice of values from all nodes. | Slices | --- # SDK Reference: Core Source: https://docs.chain.link/cre/reference/sdk/core-go Last Updated: 2025-11-04 This page provides a reference for the core data structures and functions of the CRE Go SDK. These are the fundamental building blocks that every workflow uses, regardless of trigger types or capabilities. ## Key concepts and components ### `cre.Handler` The `cre.Handler` function is the cornerstone of every workflow. It registers a handler that links a specific trigger to a callback function containing your workflow logic. It is typically called within your [`InitWorkflow`](#initworkflow) function. **Usage:** ```go workflow := cre.Workflow[*Config]{ cre.Handler( // 1. A configured trigger, e.g., cron.Trigger(...). // This determines WHEN the workflow runs. triggerInstance, // 2. The callback function to execute when the trigger fires. // This is WHERE your workflow logic lives. myCallbackFunction, ), } ``` - **The Trigger**: An instance of a trigger capability (e.g., `cron.Trigger(...)`). This defines the event that will start your workflow. See the [Triggers reference](/cre/reference/sdk/triggers) for details. - **The Callback**: The function to be executed when the trigger fires. The signature of your callback function must match the output type of the trigger you are using. 
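
For example, a cron-wired callback receives a `*cron.Payload`, while an HTTP-wired callback receives an `*http.Payload`. Here is a minimal sketch of two handlers inside `InitWorkflow` (the `Config` struct is assumed, and the empty `http.Config` is for illustration only—`AuthorizedKeys` is required for deployed workflows, as described in the HTTP Trigger reference):

```go
import (
	"github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http"
	"github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron"
	"github.com/smartcontractkit/cre-sdk-go/cre"
)

// Each callback's payload parameter matches the output type of its trigger.
func onCron(config *Config, runtime cre.Runtime, payload *cron.Payload) (string, error) {
	return "cron fired", nil
}

func onHTTP(config *Config, runtime cre.Runtime, payload *http.Payload) (string, error) {
	return "http fired", nil
}

// Inside InitWorkflow: one handler per trigger-and-callback pair.
workflow := cre.Workflow[*Config]{
	cre.Handler(cron.Trigger(&cron.Config{Schedule: "0 */10 * * * *"}), onCron),
	cre.Handler(http.Trigger(&http.Config{}), onHTTP), // sketch only; set AuthorizedKeys before deploying
}
```
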

### `Runtime` and `NodeRuntime`

These interfaces provide access to capabilities and manage the execution context of your workflow. The key difference is who is responsible for creating a single, trusted result from the work of many nodes.

- **`cre.Runtime` ("Easy Mode")**: Passed to your main trigger callback, this represents the **DON's (Decentralized Oracle Network) execution context**. It is used for operations that are already guaranteed to be Byzantine Fault Tolerant (BFT). When you use the `Runtime`, you ask the network to execute something, and CRE handles the underlying complexity to ensure you get back one final, secure, and trustworthy result. A common use case is writing a transaction to a blockchain with the EVM client.

- **`cre.NodeRuntime` ("Manual Mode")**: Represents an **individual node's execution context**. This is used when a BFT guarantee cannot be provided automatically (e.g., calling a third-party API). You tell each node to perform a task on its own, and each node returns its own individual answer. You are then responsible for telling the SDK how to combine them into a single, trusted result by providing a consensus and aggregation algorithm. It is used exclusively inside a [`RunInNodeMode`](#creruninnodemode) block and is provided by that function—you do not get this type directly in your handler's callback.

To learn more about how to aggregate results from `NodeRuntime`, see the [Consensus & Aggregation](/cre/reference/sdk/consensus) reference.

### `Promise`

A placeholder for a result that has not yet been computed. Asynchronous operations in the SDK, such as calls made with the [EVM Client](/cre/reference/sdk/evm-client) or [HTTP Client](/cre/reference/sdk/http-client), immediately return a `Promise`.

**Methods:**

- **`.Await()`**: Pauses the execution of a callback and waits for the underlying asynchronous operation to complete. It returns the final result and an error.
- **`cre.Then(...)`**: A function that allows chaining promises. It takes a promise and a function to execute when that promise resolves. The function returns a result or error directly.
- **`cre.ThenPromise(...)`**: Similar to `cre.Then`, but for functions that return another promise. This is useful when the next step in the chain is also asynchronous.

**`Await()` Example:**

```go
// myClient.SendRequest returns a Promise
promise := myClient.SendRequest(runtime, req)

// Await blocks until the result is available
result, err := promise.Await()
if err != nil {
	return nil, fmt.Errorf("failed to send request: %w", err)
}

// Use the result
logger := runtime.Logger()
logger.Info("Request sent successfully", "response", result)
```

**`cre.Then()` Example:**

`cre.Then` can help avoid nested `.Await()` calls. 
The following two code blocks are equivalent: ```go // With nested .Await() firstPromise := client.Step1(runtime, input) firstResult, err := firstPromise.Await() if err != nil { /* handle error */ } secondPromise := client.Step2(runtime, firstResult) secondResult, err := secondPromise.Await() if err != nil { /* handle error */ } ``` ```go // With cre.Then() firstPromise := client.Step1(runtime, input) secondPromise := cre.Then(firstPromise, func(firstResult Step1OutputType) (Step2OutputType, error) { return client.Step2(runtime, firstResult).Await() }) secondResult, err := secondPromise.Await() if err != nil { /* handle error */ } ``` **`cre.ThenPromise()` Example:** When your chaining function itself returns a promise, use `ThenPromise` to avoid unnecessary `.Await()` calls: ```go // When the chaining function returns a Promise firstPromise := client.Step1(runtime, input) secondPromise := cre.ThenPromise(firstPromise, func(firstResult Step1OutputType) cre.Promise[Step2OutputType] { return client.Step2(runtime, firstResult) // Returns a Promise directly }) secondResult, err := secondPromise.Await() if err != nil { /* handle error */ } ``` ## Workflow entry points Your workflow code requires two specific functions to serve as entry points for compilation and execution. ### `main()` This is the entry point of your workflow binary. You must define this function to create a WASM runner and start your workflow. **Required Signature:** ```go import "github.com/smartcontractkit/cre-sdk-go/cre/wasm" func main() { // The runner parses your workflow's static configuration and // is responsible for executing your workflow logic. wasm.NewRunner(cre.ParseJSON[*Config]).Run(InitWorkflow) } ``` ### `InitWorkflow` This is the second required entry point. The CRE runner calls this function to initialize your workflow and register all its handlers. **Required Signature:** ```go import ( "log/slog" "github.com/smartcontractkit/cre-sdk-go/cre" ) func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) ``` **Parameters:** - `config`: A pointer to your workflow's static configuration struct. - `logger`: A `slog.Logger` instance for structured logging. - `secretsProvider`: An interface for requesting secrets. **Returns:** - A `cre.Workflow` slice containing all the handlers for your workflow. - An `error` if initialization fails. **Example:** ```go import ( "log/slog" "github.com/smartcontractkit/cre-sdk-go/cre" "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" ) // onCronTrigger is the callback function executed by the handler. func onCronTrigger(config *Config, runtime cre.Runtime, payload *cron.Payload) (string, error) { // ... workflow logic ... return "complete", nil } func InitWorkflow(config *Config, logger *slog.Logger, secretsProvider cre.SecretsProvider) (cre.Workflow[*Config], error) { // ... workflow := cre.Workflow[*Config]{ cre.Handler( cron.Trigger(&cron.Config{Schedule: "0 */10 * * * *"}), onCronTrigger, ), } return workflow, nil } ``` ## `cre.RunInNodeMode` As explained in the [`Runtime` and `NodeRuntime`](#runtime-and-noderuntime) section, this helper function is the bridge between the DON-level execution context (`Runtime`) and the individual node-level context (`NodeRuntime`). It allows you to execute code on individual nodes and then aggregate their results back into a single, trusted outcome. 
**Signature:** ```go func RunInNodeMode[C, T any]( config C, runtime Runtime, fn func(config C, nodeRuntime NodeRuntime) (T, error), ca ConsensusAggregation[T], ) Promise[T] ``` **Example:** This example uses `RunInNodeMode` to fetch data from an API on each node, and then uses the DON-level `Runtime` to write the aggregated result onchain. ```go import "github.com/smartcontractkit/cre-sdk-go/cre" func onTrigger(config *Config, runtime cre.Runtime, ...) (string, error) { // 1. Run code on individual nodes using `RunInNodeMode` // The `fn` passed to it receives a `NodeRuntime` pricePromise := cre.RunInNodeMode(config, runtime, func(config *Config, nodeRuntime cre.NodeRuntime) (int, error) { // Use nodeRuntime to call a capability like the HTTP Client return fetchOffchainPrice(nodeRuntime) }, // The results from all nodes are aggregated with a consensus algorithm cre.ConsensusMedianAggregation[int](), ) price, err := pricePromise.Await() if err != nil { return "", err } // 2. Now, back in the DON context, use the top-level `runtime` // to perform an action that requires consensus, like an onchain write. txPromise := onchainContract.WriteReportUpdatePrice(runtime, price, nil) tx, err := txPromise.Await() //... } ``` --- # SDK Reference: EVM Client Source: https://docs.chain.link/cre/reference/sdk/evm-client-go Last Updated: 2025-11-04 This page provides a reference for the `evm.Client`, the low-level tool for all interactions with EVM-compatible blockchains. The client includes a comprehensive set of read, write, and utility methods for building chain-aware workflows. ## Client instantiation To use the client, you must instantiate it with the `ChainSelector` ID for the blockchain you intend to interact with. ```go import "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm" // Instantiate a client for Ethereum Sepolia sepoliaClient := &evm.Client{ ChainSelector: 16015286601757825753, // ethereum-testnet-sepolia } ``` For chains that have a constant defined in the SDK, you can also use the `ChainSelectorFromName` helper for improved readability. See the [Chain Selectors](#chain-selectors) section for a full list of available constants. ```go // Alternatively, use the ChainSelectorFromName helper for better readability sepoliaSelector, err := evm.ChainSelectorFromName("ethereum-testnet-sepolia") sepoliaClient := &evm.Client{ ChainSelector: sepoliaSelector, } ``` ## Read & query methods These methods are used to read data from the blockchain without creating a transaction. ### `CallContract` Executes a `view` or `pure` function on a smart contract. To use this function, you construct an `evm.CallContractRequest` object, which holds the details of your call. The function returns a promise that resolves to an `evm.CallContractReply` containing the data returned by the contract. **Signature:** ```go func (c *Client) CallContract(runtime cre.Runtime, input *CallContractRequest) cre.Promise[*CallContractReply] ``` #### `evm.CallContractRequest` This is the main input object for the `CallContract` function. It acts as a wrapper for the call message and an optional block number. | Field | Type | Description | | ------------- | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `Call` | `*evm.CallMsg` | Contains the actual details of the function call you want to make. 
| | `BlockNumber` | `*pb.BigInt` | Optional. The block number to query. Defaults to `latest`. Use `-2` for `latest` (the most recent block, which may be subject to re-orgs) or `-3` for `finalized` (a block that is considered immutable and safe from re-orgs). | #### `evm.CallMsg` This struct contains the core details of your onchain call. | Field | Type | Description | | ------ | -------- | ------------------------------------------------------------------------------------------------- | | `From` | `[]byte` | Optional. The 20-byte address of the sender. | | `To` | `[]byte` | The 20-byte address of the target contract. | | `Data` | `[]byte` | The ABI-encoded byte string for the function call, including the function selector and arguments. | #### `evm.CallContractReply` This is the object returned by the promise when the `CallContract` function successfully completes. | Field | Type | Description | | ------ | -------- | --------------------------------------------------- | | `Data` | `[]byte` | The ABI-encoded data returned by the contract call. | ### `BalanceAt` Retrieves the native token balance for a specific account. You provide the account address in an `evm.BalanceAtRequest`, and the function returns a promise that resolves to an `evm.BalanceAtReply` containing the balance. **Signature:** ```go func (c *Client) BalanceAt(runtime cre.Runtime, input *BalanceAtRequest) cre.Promise[*BalanceAtReply] ``` #### `evm.BalanceAtRequest` | Field | Type | Description | | ------------- | ------------ | ---------------------------------------------------------- | | `Account` | `[]byte` | The 20-byte address of the account to query. | | `BlockNumber` | `*pb.BigInt` | Optional. The block number to query. Defaults to `latest`. | #### `evm.BalanceAtReply` | Field | Type | Description | | --------- | ------------ | ---------------------------------- | | `Balance` | `*pb.BigInt` | The balance of the account in Wei. | ### `FilterLogs` Queries historical event logs that match a specific set of filter criteria defined in an `evm.FilterLogsRequest`. The function returns a promise that resolves to an `evm.FilterLogsReply` containing an array of matching logs. **Signature:** ```go func (c *Client) FilterLogs(runtime cre.Runtime, input *FilterLogsRequest) cre.Promise[*FilterLogsReply] ``` #### `evm.FilterLogsRequest` | Field | Type | Description | | ------------- | ------------------ | -------------------------------------------------------------------------------------------- | | `FilterQuery` | `*evm.FilterQuery` | A struct defining the filters for the log query, such as block range, addresses, and topics. | #### `evm.FilterLogsReply` | Field | Type | Description | | ------ | ------------ | ---------------------------------------------------- | | `Logs` | `[]*evm.Log` | An array of log objects that match the filter query. | ### `GetTransactionByHash` Retrieves a transaction by its hash. You provide the hash in an `evm.GetTransactionByHashRequest`, and the function returns a promise that resolves to an `evm.GetTransactionByHashReply` containing the transaction object. **Signature:** ```go func (c *Client) GetTransactionByHash(runtime cre.Runtime, input *GetTransactionByHashRequest) cre.Promise[*GetTransactionByHashReply] ``` #### `evm.GetTransactionByHashRequest` | Field | Type | Description | | ------ | -------- | ----------------------------------------------- | | `Hash` | `[]byte` | The 32-byte hash of the transaction to look up. 
| #### `evm.GetTransactionByHashReply` | Field | Type | Description | | ------------- | ------------------ | --------------------------------- | | `Transaction` | `*evm.Transaction` | The transaction object, if found. | ### `GetTransactionReceipt` Fetches the receipt for a transaction given its hash. You provide the hash in an `evm.GetTransactionReceiptRequest`, and the function returns a promise that resolves to an `evm.GetTransactionReceiptReply` containing the receipt. **Signature:** ```go func (c *Client) GetTransactionReceipt(runtime cre.Runtime, input *GetTransactionReceiptRequest) cre.Promise[*GetTransactionReceiptReply] ``` #### `evm.GetTransactionReceiptRequest` | Field | Type | Description | | ------ | -------- | ------------------------------------ | | `Hash` | `[]byte` | The 32-byte hash of the transaction. | #### `evm.GetTransactionReceiptReply` | Field | Type | Description | | --------- | -------------- | ----------------------------------------- | | `Receipt` | `*evm.Receipt` | The transaction receipt object, if found. | ### `HeaderByNumber` Retrieves a block header by its number. You provide the block number in an `evm.HeaderByNumberRequest`, and the function returns a promise that resolves to an `evm.HeaderByNumberReply` containing the header object. **Signature:** ```go func (c *Client) HeaderByNumber(runtime cre.Runtime, input *HeaderByNumberRequest) cre.Promise[*HeaderByNumberReply] ``` #### `evm.HeaderByNumberRequest` | Field | Type | Description | | ------------- | ------------ | -------------------------------------------------------------------------- | | `BlockNumber` | `*pb.BigInt` | The number of the block to retrieve. If `nil`, retrieves the latest block. | #### `evm.HeaderByNumberReply` | Field | Type | Description | | -------- | ------------- | ---------------------------------- | | `Header` | `*evm.Header` | The block header object, if found. | ### `EstimateGas` Estimates the gas required to execute a specific transaction. You provide the transaction details in an `evm.EstimateGasRequest`, and the function returns a promise that resolves to an `evm.EstimateGasReply` with the gas estimate. **Signature:** ```go func (c *Client) EstimateGas(runtime cre.Runtime, input *EstimateGasRequest) cre.Promise[*EstimateGasReply] ``` #### `evm.EstimateGasRequest` | Field | Type | Description | | ----- | -------------- | ------------------------------------------------------- | | `Msg` | `*evm.CallMsg` | The transaction message to simulate for gas estimation. | #### `evm.EstimateGasReply` | Field | Type | Description | | ----- | -------- | ----------------------------------------- | | `Gas` | `uint64` | The estimated amount of gas in gas units. | ## Write methods ### `WriteReport` Executes a state-changing transaction by submitting a report to a designated receiver contract. You provide the transaction details in an `evm.WriteCreReportRequest`, and the function returns a promise that resolves to an `evm.WriteReportReply` with the transaction status. **Signature:** ```go func (c *Client) WriteReport(runtime cre.Runtime, input *WriteCreReportRequest) cre.Promise[*WriteReportReply] ``` #### `evm.WriteCreReportRequest` | Field | Type | Description | | ----------- | ---------------- | ------------------------------------------------------ | | `Receiver` | `[]byte` | The 20-byte address of the receiver contract to call. | | `Report` | `*cre.Report` | The report data generated by the DON to be submitted. | | `GasConfig` | `*evm.GasConfig` | Optional. 
Gas limit configuration for the transaction. | #### `evm.WriteReportReply` | Field | Type | Description | | --------------------------------- | ---------------------------------- | ----------------------------------------------------------------------------------- | | `TxStatus` | `TxStatus` | The final status of the transaction: `SUCCESS`, `REVERTED`, or `FATAL`. | | `ReceiverContractExecutionStatus` | `*ReceiverContractExecutionStatus` | Optional. The status of the receiver contract's execution: `SUCCESS` or `REVERTED`. | | `TxHash` | `[]byte` | Optional. The 32-byte transaction hash of the onchain submission. | | `TransactionFee` | `*pb.BigInt` | Optional. The total fee paid for the transaction in Wei. | | `ErrorMessage` | `*string` | Optional. An error message if the transaction failed. | ## Chain Selectors A **chain selector** is a unique identifier for a blockchain network used throughout the CRE platform. The same chain can be referenced in three different ways depending on the context. All three formats are equivalent and refer to the same blockchain. ### Understanding the three formats | Format | Example | Used In | | ---------------------------- | ---------------------------- | -------------------------------------------------------------------------------------------------- | | **String Name** (kebab-case) | `"ethereum-testnet-sepolia"` | `project.yaml` configuration files, workflow `config.json` files, `ChainSelectorFromName()` helper | | **Go Constant** (PascalCase) | `evm.EthereumTestnetSepolia` | Directly in your workflow Go code when you need the numeric ID | | **Numeric ID** (uint64) | `16015286601757825753` | `evm.Client` instantiation in Go code | ### Complete chain selector reference This table shows all three equivalent formats for each supported chain: | Chain | String Name | Go Constant | Numeric ID | | ----------------------------- | ------------------------------------- | ----------------------------------- | -------------------- | | Avalanche Mainnet | avalanche-mainnet | evm.AvalancheMainnet | 6433500567565415381 | | Avalanche Fuji Testnet | avalanche-testnet-fuji | evm.AvalancheTestnetFuji | 14767482510784806043 | | BNB Smart Chain opBNB Mainnet | binance_smart_chain-mainnet-opbnb-1 | evm.BinanceSmartChainMainnetOpbnb1 | 465944652040885897 | | BNB Smart Chain opBNB Testnet | binance_smart_chain-testnet-opbnb-1 | evm.BinanceSmartChainTestnetOpbnb1 | 13274425992935471758 | | Ethereum Mainnet | ethereum-mainnet | evm.EthereumMainnet | 5009297550715157269 | | Ethereum Mainnet (Arbitrum) | ethereum-mainnet-arbitrum-1 | evm.EthereumMainnetArbitrum1 | 4949039107694359620 | | Ethereum Mainnet (Optimism) | ethereum-mainnet-optimism-1 | evm.EthereumMainnetOptimism1 | 3734403246176062136 | | Ethereum Sepolia Testnet | ethereum-testnet-sepolia | evm.EthereumTestnetSepolia | 16015286601757825753 | | Ethereum Sepolia (Arbitrum) | ethereum-testnet-sepolia-arbitrum-1 | evm.EthereumTestnetSepoliaArbitrum1 | 3478487238524512106 | | Ethereum Sepolia (Base) | ethereum-testnet-sepolia-base-1 | evm.EthereumTestnetSepoliaBase1 | 10344971235874465080 | | Ethereum Sepolia (Optimism) | ethereum-testnet-sepolia-optimism-1 | evm.EthereumTestnetSepoliaOptimism1 | 5224473277236331295 | | Polygon Mainnet | polygon-mainnet | evm.PolygonMainnet | 4051577828743386545 | | Polygon Amoy Testnet | polygon-testnet-amoy | evm.PolygonTestnetAmoy | 16281711391670634445 | ### Usage examples **In your `project.yaml` file (RPC configuration):** ```yaml local-simulation: rpcs: - chain-name: 
ethereum-testnet-sepolia # String name for RPC endpoint url: https://your-rpc-url.com ``` **In your workflow's `config.json` file (workflow-specific settings):** Your workflow configuration can include chain selector names as part of your custom config structure. For example: ```json { "schedule": "*/30 * * * * *", "evms": [ { "storageAddress": "0x1234...", "chainName": "ethereum-testnet-sepolia" // Use string name in your config } ] } ``` **In your workflow Go code (Option 1 - Using the constant):** ```go evmClient := &evm.Client{ ChainSelector: evm.EthereumTestnetSepolia, // Direct constant } ``` **In your workflow Go code (Option 2 - Using the helper function with config):** ```go // Read chain name from your config and convert to numeric selector sepoliaSelector, err := evm.ChainSelectorFromName(config.Evms[0].ChainName) if err != nil { return err } evmClient := &evm.Client{ ChainSelector: sepoliaSelector, } ``` ### `ChainSelectorFromName` A helper function to convert a string name to its numeric chain selector ID. **Signature:** ```go func ChainSelectorFromName(name string) (uint64, error) ``` **Parameters:** - `name`: The kebab-case string identifier for the chain (e.g., `"ethereum-testnet-sepolia"`) **Returns:** - The numeric chain selector ID - An error if the chain name is not recognized **Example:** ```go selector, err := evm.ChainSelectorFromName("ethereum-testnet-sepolia") if err != nil { return fmt.Errorf("invalid chain name: %w", err) } // selector now holds 16015286601757825753 ``` --- # SDK Reference: HTTP Client Source: https://docs.chain.link/cre/reference/sdk/http-client-go Last Updated: 2025-11-04 The HTTP Client lets you make requests to external APIs from your workflow. Each node in the DON executes the request independently, and the SDK uses consensus to provide a single, reliable result. **Common use cases:** - Fetching data from REST APIs ([GET requests](/cre/guides/workflow/using-http-client/get-request)) - Sending data to webhooks ([POST requests](/cre/guides/workflow/using-http-client/post-request)) - Submitting reports to offchain systems ([Report submission](/cre/guides/workflow/using-http-client/submitting-reports-http)) ## Quick reference | Method | Use When | Guide | | ------------------------------------------------------ | ----------------------------------------- | ----------------------------------------------------------------------------------------------------------------------- | | [`http.SendRequest`](#httpsendrequest) | Making HTTP calls (recommended) | [GET](/cre/guides/workflow/using-http-client/get-request) / [POST](/cre/guides/workflow/using-http-client/post-request) | | [`client.SendRequest`](#clientsendrequest) | Complex scenarios requiring fine control | [GET](/cre/guides/workflow/using-http-client/get-request#2-the-creruninnodemode-pattern-low-level) | | [`sendRequester.SendReport`](#sendrequestersendreport) | Submitting reports via HTTP (recommended) | [Report submission](/cre/guides/workflow/using-http-client/submitting-reports-http) | | [`client.SendReport`](#clientsendreport) | Complex report submission scenarios | [Report submission](/cre/guides/workflow/using-http-client/submitting-reports-http#advanced-low-level-pattern) | ## Core types ### `http.Request` Defines the parameters for an outgoing HTTP request. 
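
As a quick sketch (the endpoint URL is illustrative), a GET request with one header and a 30-second timeout can be constructed like this; each field is described in the table below:

```go
import (
	"time"

	"github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http"
	"google.golang.org/protobuf/types/known/durationpb"
)

// A GET request with a custom header and a 30-second timeout.
req := &http.Request{
	Url:     "https://api.example.com/v1/price", // illustrative endpoint
	Method:  "GET",
	Headers: map[string]string{"Accept": "application/json"},
	Timeout: durationpb.New(30 * time.Second),
}
```
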
| Field | Type | Description | | --------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------------------- | | `Url` | `string` | The URL of the API endpoint. | | `Method` | `string` | The HTTP method (e.g., `"GET"`, `"POST"`). | | `Headers` | `map[string]string` | Optional HTTP headers. | | `Body` | `[]byte` | Optional raw request body. | | `Timeout` | `*durationpb.Duration` | Optional request timeout duration. Set using `durationpb.New()`, e.g., `durationpb.New(30 * time.Second)` for 30 seconds. | | `CacheSettings` | `*CacheSettings` | Optional caching behavior for the request. | ### `http.Response` The result of the HTTP call from a single node. If the request fails, the error is returned by the `.Await()` call on the `Promise`. | Field | Type | Description | | ------------ | ------------------- | -------------------------- | | `StatusCode` | `uint32` | The HTTP status code. | | `Headers` | `map[string]string` | The HTTP response headers. | | `Body` | `[]byte` | The raw response body. | ### `http.CacheSettings` Defines caching behavior for the request. This is particularly useful for preventing duplicate execution of non-idempotent requests (POST, PUT, PATCH, DELETE). | Field | Type | Description | | -------- | ---------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `Store` | `bool` | If `true`, store the response in the cache for potential reuse by other nodes. | | `MaxAge` | `*durationpb.Duration` | Maximum age of a cached response that this workflow will accept. If zero or nil, the request will not attempt to read from cache (but may still store if `Store` is true). Max value is 10 minutes. | #### Understanding `CacheSettings` behavior When you make HTTP requests in CRE, **all nodes in the DON execute the request by default**. For read-only operations (like GET), this is fine—consensus ensures a reliable result. However, for **non-idempotent operations** (POST, PUT, PATCH, DELETE), multiple executions can cause problems: - Creating duplicate resources (e.g., multiple user accounts) - Triggering duplicate actions (e.g., sending multiple emails) - Unintended side effects (e.g., incrementing counters multiple times) **`CacheSettings` provides a solution** by enabling a shared cache across all nodes in the DON: 1. **Node 1** makes the HTTP request and stores the response in the shared cache 2. **Nodes 2, 3, etc.** check the cache first and reuse the cached response if it exists **Important considerations:** - **Best effort mechanism**: The caching works reliably in most scenarios, but is not guaranteed to prevent all duplicates. For example, gateway availability (network issues or deployments) can affect routing to different gateway instances. - **Request matching**: Caching only works when all nodes construct **identical requests** (same URL, headers, and body). Ensure your workflow generates deterministic request payloads. 
- **Understanding `MaxAge`**: This controls how stale your workflow will accept a cached response to be: - The cache system stores responses for up to 10 minutes by default (system-wide TTL) - `MaxAge` lets your workflow specify: "I'll only use cached data if it's fresher than X duration" - Set `MaxAge` using `durationpb.New()`, for example: `durationpb.New(60 * time.Second)` for 60 seconds - Setting `MaxAge` to `nil` or zero forces a fresh fetch every time (but still stores if `Store` is true) - For POST/PUT/PATCH/DELETE operations: Set this slightly longer than your workflow's expected execution time - For GET operations where you want to reuse data: Set this to your desired cache duration For practical examples, see the [POST request guide](/cre/guides/workflow/using-http-client/post-request#1-understanding-single-execution-with-cachesettings). ## Making HTTP requests ### `http.SendRequest` The recommended, high-level helper for making HTTP requests. Automatically handles the `cre.RunInNodeMode` pattern. **Signature:** ```go func SendRequest[C, T any]( config C, runtime cre.Runtime, client *Client, fn func(config C, logger *slog.Logger, sendRequester *SendRequester) (T, error), ca cre.ConsensusAggregation[T], ) cre.Promise[T] ``` **Parameters:** - `config`: Your workflow's configuration struct, passed to `fn` - `runtime`: The top-level `cre.Runtime` from your trigger callback - `client`: An initialized `*http.Client` - `fn`: Your request logic function that receives `config`, `logger`, and `sendRequester` - `ca`: The [consensus aggregation method](/cre/reference/sdk/consensus) **Example:** ```go func fetchData(config *Config, logger *slog.Logger, sendRequester *http.SendRequester) (*Data, error) { resp, err := sendRequester.SendRequest(&http.Request{ Url: config.ApiUrl, Method: "GET", }).Await() if err != nil { return nil, err } // Parse and return... } // In your trigger callback result, err := http.SendRequest(config, runtime, client, fetchData, cre.ConsensusAggregationFromTags[*Data](), ).Await() ``` **Guides:** - [GET Requests](/cre/guides/workflow/using-http-client/get-request#1-the-httpsendrequest-pattern-recommended) - [POST Requests](/cre/guides/workflow/using-http-client/post-request#1-the-httpsendrequest-pattern-recommended) ### `client.SendRequest` The lower-level method for complex scenarios. Must be manually wrapped in `cre.RunInNodeMode`. **Signature:** ```go func (c *Client) SendRequest(runtime cre.NodeRuntime, input *http.Request) cre.Promise[*http.Response] ``` **Parameters:** - `runtime`: A `cre.NodeRuntime` provided by `cre.RunInNodeMode` - `input`: An `*http.Request` struct **Returns:** - `cre.Promise[*http.Response]` **Guide:** - [Using `cre.RunInNodeMode`](/cre/guides/workflow/using-http-client/get-request#2-the-creruninnodemode-pattern-low-level) ## Submitting reports via HTTP These methods send cryptographically signed reports (generated via `runtime.GenerateReport()`) to an HTTP endpoint. For a comprehensive guide including transformation function patterns and best practices, see [Submitting Reports via HTTP](/cre/guides/workflow/using-http-client/submitting-reports-http). ### `sendRequester.SendReport` The recommended, high-level method for submitting reports via HTTP. 
**Signature:** ```go func (c *SendRequester) SendReport( report cre.Report, fn func(*sdk.ReportResponse) (*http.Request, error) ) cre.Promise[*http.Response] ``` **Parameters:** - `report`: The `cre.Report` object generated by `runtime.GenerateReport()` - `fn`: A transformation function that formats the report for your API. Receives `*sdk.ReportResponse` and returns `*http.Request`. **Returns:** - `cre.Promise[*http.Response]` **Example:** ```go func formatReport(r *sdk.ReportResponse) (*http.Request, error) { return &http.Request{ Url: "https://api.example.com/reports", Method: "POST", Body: r.RawReport, CacheSettings: &http.CacheSettings{ Store: true, MaxAge: durationpb.New(5 * time.Minute), }, }, nil } resp, err := sendRequester.SendReport(report, formatReport).Await() ``` **Guide:** - [Submitting Reports via HTTP](/cre/guides/workflow/using-http-client/submitting-reports-http) ### `client.SendReport` The lower-level version for complex scenarios. Must be manually wrapped in `cre.RunInNodeMode`. **Signature:** ```go func (c *Client) SendReport( runtime cre.NodeRuntime, report cre.Report, fn func(*sdk.ReportResponse) (*http.Request, error) ) cre.Promise[*http.Response] ``` **Parameters:** - `runtime`: A `cre.NodeRuntime` provided by `cre.RunInNodeMode` - `report`: The `cre.Report` object generated by `runtime.GenerateReport()` - `fn`: A transformation function that formats the report for your API **Returns:** - `cre.Promise[*http.Response]` **Guide:** - [Advanced: Low-level pattern](/cre/guides/workflow/using-http-client/submitting-reports-http#advanced-low-level-pattern) ## Report types These types are specific to report submission via HTTP. ### `sdk.ReportResponse` The internal structure of a report generated by `runtime.GenerateReport()`. This type is passed to your transformation function when using `SendReport`. It contains the encoded report data plus cryptographic signatures from the DON. | Field | Type | Description | | --------------- | ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- | | `RawReport` | `[]byte` | The complete report in binary format, including metadata and your ABI-encoded payload. This is the primary data you'll send to your API. | | `ReportContext` | `[]byte` | Additional context data about the report (workflow execution details). Some APIs may require this to be included alongside the report. | | `Sigs` | `[]*sdk.AttributedSignature` | Array of cryptographic signatures from DON nodes. Each signature proves that a node validated and agreed on the report content. | | `ConfigDigest` | `[]byte` | Configuration digest identifying the DON configuration that generated this report. | | `SeqNr` | `uint64` | Sequence number of this report within the workflow execution. | **Example - Accessing report fields:** ```go func myTransformFunction(r *sdk.ReportResponse) (*http.Request, error) { logger.Info("Report details", "seqNr", r.SeqNr, "numSignatures", len(r.Sigs), "reportSize", len(r.RawReport), ) // Most APIs want the raw report in the body return &http.Request{ Url: "https://api.example.com/reports", Method: "POST", Body: r.RawReport, }, nil } ``` ### `sdk.AttributedSignature` Represents a single signature from a DON node on a report. | Field | Type | Description | | ----------- | -------- | -------------------------------------------------------------- | | `Signature` | `[]byte` | The cryptographic signature bytes. 
| | `SignerId` | `uint32` | The unique identifier of the node that created this signature. | **Example - Accessing signatures:** ```go func formatReport(r *sdk.ReportResponse) (*http.Request, error) { // Access signatures for i, sig := range r.Sigs { logger.Info("Signature", "index", i, "signerId", sig.SignerId, "length", len(sig.Signature)) } // Format for your API... } ``` For complete examples of including signatures in different formats (body, headers, JSON), see the [Submitting Reports via HTTP guide](/cre/guides/workflow/using-http-client/submitting-reports-http#formatting-patterns). --- # SDK Reference Source: https://docs.chain.link/cre/reference/sdk/overview-go Last Updated: 2025-11-04 This section provides a detailed technical reference for the public interfaces of the CRE Go SDK. Use this reference for quick lookups of specific functions, types, and method signatures. ## How to read this section The SDK Reference is broken down into several pages, each corresponding to a core part of the SDK's functionality: - **[Core SDK](/cre/reference/sdk/core)**: Covers the fundamental building blocks of any workflow, including `cre.Handler`, `cre.Runtime`, and `cre.Promise`. - **[Triggers](/cre/reference/sdk/triggers)**: Details the configuration and payload structures for all available trigger types (`Cron`, `HTTP`, `EVM Log`). - **[EVM Client](/cre/reference/sdk/evm-client)**: Provides a reference for the `evm.Client`, the primary tool for all EVM interactions, including reads and writes. - **[HTTP Client](/cre/reference/sdk/http-client)**: Provides a reference for the `http.Client`, used for making offchain API requests from individual nodes. - **[Consensus & Aggregation](/cre/reference/sdk/consensus)**: Describes how to use aggregators like `ConsensusMedianAggregation` and `ConsensusAggregationFromTags` with `RunInNodeMode` to process and consolidate data from multiple nodes. ## Contract Bindings For interacting with smart contracts, use the [CRE CLI's binding generator](/cre/guides/workflow/using-evm-client/generating-bindings) to automatically create type-safe Go bindings from your contract ABIs. These generated bindings work seamlessly with the EVM Client to provide a simple and reliable developer experience. --- # SDK Reference: Cron Trigger Source: https://docs.chain.link/cre/reference/sdk/triggers/cron-trigger-go Last Updated: 2025-11-04 The Cron Trigger fires at a specified schedule using standard cron expressions. It is ideal for workflows that need to run at regular intervals. ### `cron.Trigger` Creates the cron trigger instance. ```go import "github.com/smartcontractkit/cre-sdk-go/capabilities/scheduler/cron" cronTrigger := cron.Trigger(&cron.Config{ Schedule: "0 */10 * * * *", // Every 10 minutes }) ``` ### `cron.Config` The configuration struct for the cron trigger. | Field | Type | Description | | ---------- | ------ | --------------------------------------------------------------------------------------------------------------------------------------------- | | `Schedule` | string | A standard cron expression with 5 or 6 fields, where the optional 6th field represents seconds. **Note:** The minimum interval is 30 seconds. | ### `cron.Payload` The payload passed to the callback function. | Field | Type | Description | | ------------------------ | ------------------------ | ----------------------------------------- | | `ScheduledExecutionTime` | `*timestamppb.Timestamp` | The time the execution was scheduled for. 
| ### Callback Function Your callback function for cron triggers must conform to this signature: ```go func onCronTrigger(config *Config, runtime cre.Runtime, trigger *cron.Payload) (*YourReturnType, error) ``` **Parameters:** - `config`: Your workflow's static configuration struct. - `runtime`: The runtime object used to invoke capabilities. - `trigger`: The cron payload containing the scheduled execution time. --- # SDK Reference: EVM Log Trigger Source: https://docs.chain.link/cre/reference/sdk/triggers/evm-log-trigger-go Last Updated: 2025-11-04 The EVM Log Trigger fires when a specific log (event) is emitted by an onchain smart contract. ### `evm.LogTrigger` Creates the EVM log trigger instance. ```go import ( "github.com/ethereum/go-ethereum/common" "github.com/smartcontractkit/cre-sdk-go/capabilities/blockchain/evm" ) // The first argument is the Chain Selector for the network to monitor. // 16015286601757825753 is the selector for Sepolia testnet. logTrigger := evm.LogTrigger(16015286601757825753, &evm.FilterLogTriggerRequest{ Addresses: [][]byte{ common.HexToAddress("0x123...abc").Bytes(), }, // This example filters for a Transfer event signature Topics: []*evm.TopicValues{ { Values: [][]byte{ // Keccak256 hash of "Transfer(address,address,uint256)" common.HexToHash("0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef").Bytes(), }, }, }, Confidence: evm.ConfidenceLevel_CONFIDENCE_LEVEL_FINALIZED, }) ``` ### `evm.FilterLogTriggerRequest` The configuration struct for the EVM log trigger. | Field | Type | Description | | ------------ | -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `Addresses` | `[][]byte` | A list of contract addresses to monitor, as 20-byte arrays. | | `Topics` | `[]*evm.TopicValues` | A fixed 4-element array to filter event topics. The first element contains event signatures, and the next three elements contain indexed argument values. An empty array element acts as a wildcard. | | `Confidence` | `ConfidenceLevel` | The block confirmation level to monitor. Can be:
  • **`evm.ConfidenceLevel_CONFIDENCE_LEVEL_SAFE` (default):** A block that is considered unlikely to be reorged but is not yet irreversible.
  • **`evm.ConfidenceLevel_CONFIDENCE_LEVEL_LATEST`:** The most recent block. This is the fastest but least secure, as the block could be orphaned. Best for non-critical, time-sensitive actions.
  • **`evm.ConfidenceLevel_CONFIDENCE_LEVEL_FINALIZED`:** A block that is considered irreversible. This is the safest option, as the event is guaranteed to be on the canonical chain, but it requires waiting longer for finality.
| ### `evm.Log` The generic payload passed to the callback function for any EVM log. **Fields:** | Field | Type | Description | | ------------- | ------------ | ----------------------------------------------- | | `Address` | `[]byte` | Address of the contract that emitted the log. | | `Topics` | `[][]byte` | Indexed log fields (including event signature). | | `Data` | `[]byte` | ABI-encoded non-indexed log data. | | `TxHash` | `[]byte` | Hash of the transaction. | | `BlockHash` | `[]byte` | Hash of the block. | | `BlockNumber` | `*pb.BigInt` | The block number containing the log. | | `TxIndex` | `uint32` | Index of the transaction within the block. | | `Index` | `uint32` | Index of the log within the block. | | `EventSig` | `[]byte` | Keccak256 hash of the event signature. | | `Removed` | `bool` | True if the log was removed during a reorg. | ### Callback Function Your callback function for EVM log triggers must conform to this signature: ```go func onEvmTrigger(config *Config, runtime cre.Runtime, log *evm.Log) (*YourReturnType, error) ``` **Parameters:** - `config`: Your workflow's static configuration struct. - `runtime`: The runtime object. - `log`: The generic `evm.Log` payload. You will need to manually decode the `Topics` and `Data` fields based on the event ABI. --- # SDK Reference: HTTP Trigger Source: https://docs.chain.link/cre/reference/sdk/triggers/http-trigger-go Last Updated: 2025-11-04 The HTTP Trigger fires when an HTTP request is made to the workflow's designated endpoint. This allows you to start workflows from external systems. ### `http.Trigger` Creates the HTTP trigger instance. ```go import "github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http" httpTrigger := http.Trigger(&http.Config{ AuthorizedKeys: []*http.AuthorizedKey{ { Type: http.KeyType_KEY_TYPE_ECDSA_EVM, PublicKey: "0x...", }, }, }) ``` ### `http.Config` The configuration struct for the HTTP trigger. | Field | Type | Description | | ---------------- | ------------------ | ---------------------------------------------------------------------------------------------------------- | | `AuthorizedKeys` | `[]*AuthorizedKey` | **Required for deployed workflows.** A slice of EVM addresses authorized to trigger the workflow via HTTP. | ### `http.AuthorizedKey` Defines an EVM address authorized to trigger the workflow. | Field | Type | Description | | ----------- | --------- | -------------------------------------------------------------------------------------------------------------------- | | `Type` | `KeyType` | The type of the key. Must be `http.KeyType_KEY_TYPE_ECDSA_EVM` (currently the only supported authentication method). | | `PublicKey` | `string` | An EVM address (e.g., `"0xb08E004bd2b5aFf1F5F950d141f449B1c05800eb"`) authorized to trigger this workflow. | ### `http.Payload` The payload passed to the callback function. | Field | Type | Description | | ------- | ---------------- | ------------------------------------------------------------------------------ | | `Input` | `[]byte` | The JSON input from the HTTP request body as raw bytes. | | `Key` | `*AuthorizedKey` | The EVM address that signed the request (matches one of the `AuthorizedKeys`). 
|

### Callback Function

Your callback function for HTTP triggers must conform to this signature:

```go
import "github.com/smartcontractkit/cre-sdk-go/capabilities/networking/http"

func onHttpTrigger(config *Config, runtime cre.Runtime, payload *http.Payload) (*YourReturnType, error)
```

**Parameters:**

- `config`: Your workflow's static configuration struct.
- `runtime`: The runtime object used to invoke capabilities.
- `payload`: The HTTP payload containing the request input and signing key.

---

# SDK Reference: Triggers

Source: https://docs.chain.link/cre/reference/sdk/triggers/overview-go
Last Updated: 2025-11-04

This section provides a reference for the built-in trigger capabilities of the CRE Go SDK. Each trigger type has its own configuration, payload structure, and required callback signature.

- **[Cron Trigger](/cre/reference/sdk/triggers/cron-trigger)**: Fires at a specified schedule using standard cron expressions.
- **[HTTP Trigger](/cre/reference/sdk/triggers/http-trigger)**: Fires when an HTTP request is made to the workflow's designated endpoint.
- **[EVM Log Trigger](/cre/reference/sdk/triggers/evm-log-trigger)**: Fires when a specific log (event) is emitted by an onchain smart contract.

---

# Running a Demo Workflow

Source: https://docs.chain.link/cre/templates/running-demo-workflow-go
Last Updated: 2025-11-04

This guide walks you through the core developer loop of CRE: initializing a project from a template and running it locally using the [simulator](/cre/guides/operations/simulating-workflows). By the end, you will have run the Custom Data Feed demo workflow and tested its two distinct behaviors: a **proactive** path where it fetches data from an API to write a result onchain, and a **reactive** path where it listens for onchain events to trigger new actions.

## What you'll do

- **Initialize a project**: Use the `cre init` command to scaffold a complete project from the Custom Data Feed template.
- **Configure your private key**: Add your funded Sepolia private key to the `.env` file.
- **Generate bindings**: Use `cre generate-bindings` to create type-safe Go interfaces for the demo's smart contracts.
- **Sync dependencies**: Use `go mod tidy` to download and sync your workflow's Go dependencies.
- **Run the simulation**: Use `cre workflow simulate` to execute the end-to-end workflow and observe its output.

## 1. Prerequisites

Before you begin, ensure you have the necessary tools installed:

- **CRE CLI**: You must have the CRE CLI installed. See [Install the CLI](/cre/getting-started/cli-installation) for instructions.
- **CRE account & authentication**: You must have a CRE account and be logged in with the CLI. Run `cre whoami` in your terminal to verify you're logged in, or run `cre login` to authenticate. See [Creating Your Account](/cre/account/creating-account) and [Logging in with the CLI](/cre/account/cli-login) for instructions.
- **Go**: You must have Go version 1.24.5 or higher installed. Check your version with `go version`. See [Install Go](https://go.dev/doc/install) for instructions.
- **Sepolia Testnet Account**: You need a private key for an account funded with Sepolia ETH. This is required because the demo workflow performs a write transaction. Go to [faucets.chain.link](https://faucets.chain.link) to get some Sepolia ETH.

## 2. Initialize the demo project

The `cre init` command scaffolds a new project from a template. The CLI will prompt you for configuration details during initialization.

1. **In your terminal, navigate to where you want your project created.**
2. **Run the init command:**

   ```bash
   cre init
   ```

3. **Provide the following details when prompted:**

   - **Project name**: `demo` (this becomes your project directory name)
   - **Language**: Select `Golang` and press Enter.
   - **Pick a workflow template**: Select `Custom data feed: Updating on-chain data periodically using offchain API data`.
   - **Sepolia RPC URL**: Press Enter to use the default public RPC (`https://ethereum-sepolia-rpc.publicnode.com`), or provide your own Sepolia RPC URL.
   - **Workflow name**: `custom-data-feed`

**Result:** The CLI creates a new `demo` directory with all the necessary files and folders, including:

- A `custom-data-feed` subdirectory containing your workflow code
- A `contracts/evm/src/` directory with contract ABIs and a `keystone/` subdirectory
- A pre-configured `project.yaml` with your RPC URL already set

## 3. Configure your private key

The demo workflow needs your funded Sepolia private key to sign and broadcast transactions. **Open the `.env` file** in your project root (`demo`) and replace the placeholder private key with your funded Sepolia private key.

## 4. Generate contract bindings

To interact with the demo's smart contracts, you need to generate type-safe Go bindings from their ABIs.

1. **Navigate into your project directory:**

   ```bash
   cd demo
   ```

2. **Run the `generate-bindings` command:**

   ```bash
   cre generate-bindings evm
   ```

   This command finds the contract ABIs in the `contracts/evm/src/abi/` directory and generates Go packages in `contracts/evm/src/generated/`.

## 5. Sync your dependencies

Your workflow code imports SDK packages and the newly generated contract bindings. Run `go mod tidy` to download and sync all the required dependencies for the entire project:

```bash
go mod tidy
```

## 6. Run the simulation

Now you are ready to compile and run the workflow. The workflow code (`workflow.go` and `main.go`) is designed to demonstrate two distinct, powerful workflow patterns. We will run each one separately.

- **Path A: End-to-end Custom Data Feed**: Triggered by a cron schedule, this path performs the full offchain-to-onchain data feed check and writes the result to the blockchain.
- **Path B: Reactive event handling**: Triggered by an onchain event log, this path demonstrates how to use data from one event to react and query another contract.

1. **Ensure you are in the project root directory (`demo`).**

2. **Run the `simulate` command:**

   ```bash
   cre workflow simulate custom-data-feed --broadcast --target staging-settings
   ```

   You will first see a `Workflow compiled` message, followed by the trigger selection menu.

***

### Path A: The end-to-end Custom Data Feed workflow

This path executes the core functionality of the demo: fetching offchain reserve data and writing the result onchain. It is initiated by the cron trigger.

#### **Running with the cron trigger**

1. At the prompt, select the `cron-trigger` by pressing `1` and then `Enter`.

   ```
   Workflow compiled
   🚀 Workflow simulation ready. Please select a trigger:
   1. cron-trigger@1.0.0 Trigger
   2. evm:ChainSelector:16015286601757825753@1.0.0 LogTrigger
   Enter your choice (1-2): 1
   ```

2. The simulator will execute the full end-to-end workflow. The final logs will show the transaction hash from the successful onchain write.

   ```bash
   Workflow compiled
   🚀 Workflow simulation ready. Please select a trigger:
   1. cron-trigger@1.0.0 Trigger
   2. evm:ChainSelector:16015286601757825753@1.0.0 LogTrigger
   Enter your choice (1-2): 1
   2025-10-31T17:13:52Z [SIMULATION] Simulator Initialized
   2025-10-31T17:13:52Z [SIMULATION] Running trigger trigger=cron-trigger@1.0.0
   2025-10-31T17:13:52Z [USER LOG] msg="fetching por" url=https://api.real-time-reserves.verinumus.io/v1/chainlink/proof-of-reserves/TrueUSD evms="[{TokenAddress:0x4700A50d858Cb281847ca4Ee0938F80DEfB3F1dd ReserveManagerAddress:0x51933aD3A79c770cb6800585325649494120401a BalanceReaderAddress:0x4b0739c94C1389B55481cb7506c62430cA7211Cf MessageEmitterAddress:0x1d598672486ecB50685Da5497390571Ac4E93FDc ChainName:ethereum-testnet-sepolia GasLimit:1000000}]"
   2025-10-31T17:13:53Z [USER LOG] msg=ReserveInfo reserveInfo="&{LastUpdated:2025-10-31 22:13:36.163 +0000 UTC TotalReserve:494515082.75}"
   2025-10-31T17:13:53Z [USER LOG] msg=TotalSupply totalSupply=1000000000000000000000000
   2025-10-31T17:13:53Z [USER LOG] msg=TotalReserveScaled totalReserveScaled=494515082750000000000000000
   2025-10-31T17:13:53Z [USER LOG] msg="Getting native balances" address=0x4b0739c94C1389B55481cb7506c62430cA7211Cf tokenAddress=0x4700A50d858Cb281847ca4Ee0938F80DEfB3F1dd
   2025-10-31T17:13:53Z [USER LOG] msg="Native token balance" token=0x4700A50d858Cb281847ca4Ee0938F80DEfB3F1dd balance=0
   2025-10-31T17:13:53Z [USER LOG] msg="Updating reserves" totalSupply=1000000000000000000000000 totalReserveScaled=494515082750000000000000000
   2025-10-31T17:13:53Z [USER LOG] msg="Writing report" totalSupply=1000000000000000000000000 totalReserveScaled=494515082750000000000000000
   2025-10-31T17:14:00Z [USER LOG] msg="Write report succeeded" response="tx_status:TX_STATUS_SUCCESS receiver_contract_execution_status:RECEIVER_CONTRACT_EXECUTION_STATUS_SUCCESS tx_hash:\"E7\\xe6\\x8d5h\\x87b\\xfeZ\\x9b\\x81\\x86W\\xbcߩxaѬ\\xe3\\xb9\\xf0\\x0ewL\\xb1\\x1e\\x82g\\xa0\" transaction_fee:{abs_val:\"\\x11M)[@\" sign:1}"
   2025-10-31T17:14:00Z [USER LOG] msg="Write report transaction succeeded at" txHash=0x4537e68d35688762fe5a9b818657bcdfa97861d1ace3b9f00e774cb11e8267a0
   Workflow Simulation Result: "494515082.75"
   2025-10-31T17:14:00Z [SIMULATION] Execution finished signal received
   2025-10-31T17:14:00Z [SIMULATION] Skipping WorkflowEngineV2
   ```

#### **Verifying the result onchain**

1. **Check the transaction**: Copy the `txHash` from the logs in your terminal and paste it into the search bar on Sepolia Etherscan. You will see the full details of the transaction your workflow submitted.

2. **Check the contract state**: Although your transaction was sent to the Forwarder contract, it still updated the underlying ReserveManager contract's state. You can verify this change directly on Etherscan in two ways:

   **Option A: Read the contract state**

   - Navigate to the ReserveManager contract address used in the demo: `0x073671aE6EAa2468c203fDE3a79dEe0836adF032`.
   - Go to the `Read Contract` tab.
   - Check the values for `lastTotalMinted` and `lastTotalReserve`. They should now reflect the data your workflow just wrote to the chain.

   **Option B: Check the transaction events**

   - Go to your transaction on Etherscan (using the `txHash` from your logs).
   - Click on the `Logs` tab.
   - You'll see the events emitted during the transaction, including the event from the ReserveManager contract confirming the data update.

This completes the end-to-end loop: triggering a workflow, fetching data, and verifiably writing the result to a public blockchain.

***

### Path B: The reactive event handler

This path demonstrates a more advanced, reactive pattern.
It uses an onchain event (a log) as a trigger, inspects the data within that event, and uses that data to make an onchain read call. This path does not write any data onchain.

#### **Running with the log trigger**

1. From the trigger menu, select the EVM Log Trigger by pressing `2` and then `Enter`.

2. When prompted, provide the following details for a real transaction on the Sepolia testnet that emitted a `MessageEmitted` event:

   - **Transaction hash**: `0x420721d7d00130a03c5b525b2dbfd42550906ddb3075e8377f9bb5d1a5992f8e`
   - **Event index**: `0`

3. The simulator will find the onchain event and use its payload to run the workflow. The final log will show the message it retrieved from the `MessageEmitter` contract.

   ```bash
   Workflow compiled
   🚀 Workflow simulation ready. Please select a trigger:
   1. cron-trigger@1.0.0 Trigger
   2. evm:ChainSelector:16015286601757825753@1.0.0 LogTrigger
   Enter your choice (1-2): 2
   🔗 EVM Trigger Configuration:
   Please provide the transaction hash and event index for the EVM log event.
   Enter transaction hash (0x...): 0x420721d7d00130a03c5b525b2dbfd42550906ddb3075e8377f9bb5d1a5992f8e
   Enter event index (0-based): 0
   Fetching transaction receipt for transaction 0x420721d7d00130a03c5b525b2dbfd42550906ddb3075e8377f9bb5d1a5992f8e...
   Found log event at index 0: contract=0x1d598672486ecB50685Da5497390571Ac4E93FDc, topics=3
   Created EVM trigger log for transaction 0x420721d7d00130a03c5b525b2dbfd42550906ddb3075e8377f9bb5d1a5992f8e, event 0
   2025-10-31T17:49:53Z [SIMULATION] Simulator Initialized
   2025-10-31T17:49:53Z [SIMULATION] Running trigger trigger=evm:ChainSelector:16015286601757825753@1.0.0
   2025-10-31T17:49:53Z [USER LOG] msg="Message retrieved from the event log" message="this is a test message"
   2025-10-31T17:49:53Z [USER LOG] msg="Message retrieved from the contract" message="this is a test message"
   Workflow Simulation Result: "this is a test message"
   2025-10-31T17:49:53Z [SIMULATION] Execution finished signal received
   2025-10-31T17:49:53Z [SIMULATION] Skipping WorkflowEngineV2
   ```

#### **How it works: An event-driven pattern**

What you just witnessed is a powerful event-driven capability. The workflow didn't just react to an event; it used the information *inside* that event to drive its next action. Here's how it works:

1. **You provide the event's "coordinates"**: By giving the simulator a transaction hash and a log index, you point it to a specific `MessageEmitted` event on the blockchain.
2. **The workflow receives the decoded event data**: The simulator passes a decoded log payload into the `onLogTrigger` callback function in `workflow.go`. The Go SDK automatically decodes the event log into a strongly typed structure, making it easy to access event fields.
3. **The workflow extracts the emitter's address**: The code for `onLogTrigger` accesses `payload.Data.Emitter` to get the address of the entity that emitted the message. This is a decoded field from the event, not a raw topic.
4. **The workflow makes a new onchain call**: This is the key step. The workflow takes the `emitter` address it just extracted from the decoded event and uses it as an argument to call the `GetLastMessage` function on the `MessageEmitter` contract. It is effectively asking, "What was the last message from the specific emitter involved in the event that triggered me?"
5. **The workflow logs the result**: Finally, it logs the message content it received from its `GetLastMessage` call and finishes.
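Condensed into code, the five steps above map onto a callback shape like the following. This is an illustrative sketch, not the demo's exact code: the binding payload type (`message_emitter.MessageEmitted`), the `messageEmitterContract` client, and the `GetLastMessage` call shape are assumptions based on the description above; the real types live in `contracts/evm/src/generated/`.

```go
// Illustrative sketch of the reactive handler (Path B). Binding type and
// method names are assumptions mirroring the description above.
func onLogTrigger(config *Config, runtime cre.Runtime, payload *message_emitter.MessageEmitted) (string, error) {
	// Steps 1-2: the SDK delivers the log already decoded into a strongly
	// typed struct, so the emitter address is a plain field, not a raw topic.
	emitter := payload.Data.Emitter

	// Steps 3-4: use the decoded address as input to a fresh onchain read.
	// Like every capability call, the read returns a Promise that .Await()
	// resolves once the DON reaches consensus on the result.
	// (messageEmitterContract is a generated binding client; construction omitted.)
	message, err := messageEmitterContract.GetLastMessage(runtime, emitter).Await()
	if err != nil {
		return "", err
	}

	// Step 5: return the consensus-verified message; the simulator prints it
	// as the Workflow Simulation Result.
	return message, nil
}
```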
This pattern showcases how you can build sophisticated, interconnected, institutional-grade smart contracts that react to onchain activity in real time.

## 7. Exploring the code

The workflow code (`workflow.go`) is a great example of how a single workflow can contain multiple, independent handlers to perform different tasks. The `main.go` file serves as the entry point that initializes the workflow runner. Here is a high-level tour of the code to show how the two paths you just tested are implemented.

- **`InitWorkflow`**: This is the entry point. It initializes the cron trigger and returns a workflow with two handlers: one for the cron trigger and one for the EVM log trigger. The EVM log trigger is configured to listen for `MessageEmitted` events from the MessageEmitter contract.
- **`onPORCronTrigger`**: The entry point for **Path A**. It's a lightweight callback that immediately delegates to the shared `doPOR` function, demonstrating how you can reuse core logic.
- **`onLogTrigger`**: The self-contained entry point for **Path B**. It contains its own logic to handle the reactive pattern: it's triggered by an event, extracts data from that event (the emitter address and message), and uses that data to make a new onchain query. It does **not** call `doPOR`.
- **`doPOR`**: The engine for **Path A**. It contains the core business logic for the Custom Data Feed workflow, orchestrating the sequence of helper functions to fetch API data, read contract state, and finally write the result back onchain.
- **`fetchPOR`, `getTotalSupply`, `fetchNativeTokenBalance`, `updateReserves`, and `prepareMessageEmitter`**: The helper functions that execute the specific steps for the Custom Data Feed and reactive event handling workflows. They contain the calls to the SDK clients (`http.Client`, `evm.Client`) and the generated contract bindings.

## 8. Key Go SDK features in this demo

This demo showcases several important patterns and features of the Go SDK:

- **Generated Contract Bindings**: The workflow uses type-safe Go bindings generated from contract ABIs, providing compile-time safety for contract interactions.
- **Multiple Trigger Handlers**: A single workflow can register multiple handlers for different trigger types using `cre.Handler()`.
- **Consensus Aggregation from Tags**: The HTTP capability uses `cre.ConsensusAggregationFromTags[*ReserveInfo]()` to automatically aggregate offchain data based on struct field tags.
- **Promise-Based Async Operations**: All SDK operations return promises that are awaited with `.Await()`, enabling both sequential and parallel execution patterns (see the sketch after this list).
- **Strongly-Typed Event Decoding**: EVM log triggers automatically decode event data into Go structs, eliminating manual topic parsing.
- **Two-Step Write Pattern**: The workflow uses the contract bindings' `WriteReportFrom*` methods to generate a signed report and submit it onchain in one call.
- **Chain Selector Management**: The SDK provides `evm.ChainSelectorFromName()` to convert human-readable chain names to chain selectors.
- **Finalized Block Numbers**: Contract reads use `rpc.FinalizedBlockNumber` to ensure they read from finalized blockchain state.
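To make the promise pattern concrete, here is a minimal sketch of how a callback such as `doPOR` can pipeline two of the helper calls listed above. The helper signatures and return types shown are assumptions for illustration; see `workflow.go` for the demo's real implementations.

```go
// Minimal sketch (inside a callback such as doPOR): each helper kicks off a
// capability call and returns a Promise immediately, so both operations run
// in parallel before either result is awaited.
porPromise := fetchPOR(runtime, config)          // offchain HTTP fetch starts
supplyPromise := getTotalSupply(runtime, config) // onchain read starts concurrently

reserveInfo, err := porPromise.Await() // blocks until consensus on the API result
if err != nil {
	return nil, err
}
totalSupply, err := supplyPromise.Await() // blocks until consensus on the chain read
if err != nil {
	return nil, err
}
// Both consensus-verified values are now available to the business logic,
// e.g., to compare reserves against supply before writing a report onchain.
```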
## Next steps

You have successfully run the Custom Data Feed demo workflow. To understand how the different pieces of the workflow code work, explore these detailed guides:

- **How are the cron and EVM log events handled?** Learn how to use different event sources to start your workflow in the **[Using Triggers](/cre/guides/workflow/using-triggers)** guides.
- **How does it fetch API data?** The demo uses the `http.Client` to fetch offchain reserve data. Learn more in the **[API Interactions](/cre/guides/workflow/using-http-client)** guide.
- **How does it read from and write to the blockchain?** It uses the `evm.Client` for all onchain interactions. See the **[EVM Chain Interactions](/cre/guides/workflow/using-evm-client)** guides for details.
- **How does it use contract bindings?** The demo uses pre-built bindings to interact with the contracts safely. Learn how to create your own in the **[Generating Bindings](/cre/guides/workflow/using-evm-client/generating-bindings)** guide.