When you first start with function.do, you experience an immediate "aha!" moment. The ability to encapsulate business logic into atomic, reusable functions and call them as simple API endpoints is a powerful paradigm shift. Building a simple chain is intuitive: Function A calls Function B, which calls Function C. It's clean, testable, and incredibly efficient.
But real-world applications are rarely that simple. What happens when a function in the middle of your chain fails? How do you manage and pass information—or "state"—across a dozen different function calls in a long-running process?
As you move from simple chains to complex, mission-critical services, you need patterns to ensure your workflows are both resilient and intelligent. This is where the true power of function.do's composable architecture shines. Let's explore the advanced patterns for robust error handling and effective state management in your composed workflows.
In a composed workflow, a single point of failure can bring the entire process to a halt. Graceful error handling isn't just about catching exceptions; it's about defining what the business process should do when things go wrong.
Imagine a standard e-commerce order processing workflow built from atomic functions, including chargeCreditCard and createShipment.
If chargeCreditCard fails, you can't just let the process crash. You need to stop, potentially notify the user, and ensure you don't proceed to createShipment. Even worse, what if chargeCreditCard succeeds but createShipment fails? You now have the customer's money but no way to ship their product. You need to undo the charge.
The most fundamental pattern is to create an "orchestrator" function. This function's sole job is to manage the execution flow and handle errors between other functions.
By wrapping calls to other functions in a try/catch block, the orchestrator can react to failures and execute alternative logic.
import { Agent, property } from '@do-sdk/core';
// Assume 'creditCardProcessor' and 'shippingService' are other available agents
import { creditCardProcessor } from './charge-card';
import { shippingService } from './create-shipment';
// The Orchestrator Agent
export class OrderWorkflow extends Agent {
  @property()
  async processOrder(orderId: string, customerId: string, amount: number) {
    let paymentSuccessful = false;
    let paymentId: string | null = null;
    // Step 1: Attempt to charge the credit card
    try {
      const paymentResult = await creditCardProcessor.chargeCard(customerId, amount);
      paymentSuccessful = true;
      paymentId = paymentResult.id;
    } catch (error) {
      console.error(`Payment failed for order ${orderId}:`, error);
      // Optional: call another function to notify the user
      // await notifications.sendPaymentFailedEmail(customerId, orderId);
      throw new Error('Could not process payment.');
    }
    // Step 2: Attempt to create the shipment ONLY if payment was successful
    if (paymentSuccessful) {
      try {
        const shipmentResult = await shippingService.createShipment(customerId, orderId);
        return { status: 'SUCCESS', shipmentId: shipmentResult.id };
      } catch (error) {
        console.error(`Shipment creation failed for order ${orderId}:`, error);
        // This is where it gets interesting... we need to undo the charge.
        // We'll address this in the next pattern.
        throw new Error('Payment was processed, but shipment failed.');
      }
    }
  }
}
The scenario above—where a later step fails after an earlier one succeeded—is a classic distributed systems problem. The solution is the Saga pattern.
A Saga is a sequence of local transactions. Each transaction updates the system and triggers the next. If a transaction fails, the Saga executes a series of compensating transactions to undo the impact of the preceding successful transactions.
function.do's atomic nature makes this pattern incredibly clean to implement. For every action (chargeCard), you simply create a corresponding compensating action (refundCharge).
Let's improve our orchestrator to handle the shipment failure by refunding the charge.
// ... imports from previous example
export class ResilientOrderWorkflow extends Agent {
  @property()
  async processOrder(orderId: string, customerId: string, amount: number) {
    // Step 1: Charge the card
    let paymentResult;
    try {
      paymentResult = await creditCardProcessor.chargeCard(customerId, amount);
    } catch (error) {
      console.error(`Payment failed for order ${orderId}:`, error);
      throw new Error('Could not process payment.');
    }
    // Step 2: Create the shipment
    try {
      const shipmentResult = await shippingService.createShipment(customerId, orderId);
      return { 
        status: 'COMPLETE', 
        paymentId: paymentResult.id,
        shipmentId: shipmentResult.id
      };
    } catch (shipmentError) {
      console.error(`Shipment failed for ${orderId}, initiating refund.`, shipmentError);
      
      // COMPENSATING TRANSACTION: Undo the successful charge
      try {
        await creditCardProcessor.refundCharge(paymentResult.id);
      } catch (refundError) {
        console.error(`CRITICAL: Shipment failed AND refund failed for payment ${paymentResult.id}!`, refundError);
        // At this point, you'd trigger a manual review process
        // await humanIntervention.flagOrderForReview(orderId, 'Refund failed');
      }
      throw new Error('Order failed and payment has been refunded.');
    }
  }
}
With this pattern, your business logic is resilient. Each function remains simple and single-purpose (chargeCard, refundCharge), while the orchestrator handles the complex, stateful logic of the business process.
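The compensation logic can also be factored out of the orchestrator entirely. As a framework-agnostic sketch (the SagaStep type and runSaga helper below are illustrative, not part of the function.do SDK), a generic saga runner pairs each action with its compensating action and unwinds completed steps in reverse order when a step fails:

```typescript
// Illustrative saga runner: each step pairs an action with the
// compensating action that undoes it.
interface SagaStep<C> {
  name: string;
  action: (ctx: C) => Promise<C>;
  compensate: (ctx: C) => Promise<void>;
}

async function runSaga<C>(steps: SagaStep<C>[], initial: C): Promise<C> {
  const completed: SagaStep<C>[] = [];
  let ctx = initial;
  for (const step of steps) {
    try {
      ctx = await step.action(ctx);
      completed.push(step);
    } catch (error) {
      // Unwind in reverse order: the most recent success is undone first.
      // (A production version would also guard each compensation and
      // escalate failures, as in the CRITICAL case above.)
      for (let i = completed.length - 1; i >= 0; i--) {
        await completed[i].compensate(ctx);
      }
      throw new Error(`Saga failed at step "${step.name}"`);
    }
  }
  return ctx;
}
```

An orchestrator then becomes a list of action/compensate pairs, with chargeCard paired against refundCharge, and the unwinding logic is written once instead of per workflow.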
Atomic functions on function.do are inherently stateless, which is a key feature for ensuring they are reusable and predictable. But workflows are inherently stateful. The output of one step is the input for the next.
Consider a user onboarding workflow: create the user, start a trial subscription, then send a welcome email.
How does sendWelcomeEmail get the userId and subscriptionId created in previous, separate function calls?
While you can pass individual variables from one call to the next, this quickly becomes cumbersome as complexity grows. A much cleaner pattern is to use a State Context Object.
The orchestrator function creates and maintains a single context object. It passes this entire object to each function in the chain. Each function reads the data it needs from the context, performs its logic, and returns the updated context.
This creates a clear, predictable data flow and a complete audit trail of the workflow's execution.
import { Agent, property } from '@do-sdk/core';
// Assume other agents exist: 'users', 'subscriptions', 'mailer'
// Define the shape of our state object
interface OnboardingContext {
  inputDetails: { name: string; email: string; };
  userId?: string;
  subscriptionId?: string;
  emailSent: boolean;
  error?: string;
}
// Each atomic function is now designed to accept and return the context
// Example: The user creation agent
export class UserAgent extends Agent {
  @property()
  async createUser(context: OnboardingContext): Promise<OnboardingContext> {
    const { inputDetails } = context;
    // ...logic to create user in database...
    const newUserId = 'user-abc-123';
    return { ...context, userId: newUserId };
  }
}
// The Orchestrator using the context object
export class OnboardingWorkflow extends Agent {
  @property()
  async run(userDetails: { name: string; email: string; }): Promise<OnboardingContext> {
    
    // 1. Initialize the state context
    let context: OnboardingContext = {
      inputDetails: userDetails,
      emailSent: false,
    };
    try {
      // 2. Pass context through each step of the workflow
      context = await users.createUser(context);
      context = await subscriptions.createTrial(context);
      context = await mailer.sendWelcomeEmail(context);
      return context;
    } catch (error) {
      console.error("Onboarding workflow failed", error);
      // The context contains the state at the point of failure
      const message = error instanceof Error ? error.message : String(error);
      return { ...context, error: message };
    }
  }
}
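The chained context = await ... calls generalize into a tiny pipeline helper. This is a framework-agnostic sketch; the Step type and runPipeline function are illustrative stand-ins for a chain of agent calls:

```typescript
// Illustrative pipeline runner: each step receives the context produced
// by the previous step and returns an updated copy.
type Step<C> = (ctx: C) => Promise<C>;

async function runPipeline<C>(steps: Step<C>[], initial: C): Promise<C> {
  let ctx = initial;
  for (const step of steps) {
    ctx = await step(ctx);
  }
  return ctx;
}
```

The onboarding steps then become entries in a single array, which also makes it easy to insert, reorder, or log steps without touching the orchestrator's control flow.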
The State Context Object pattern gives you a single, predictable data flow, uniform function signatures (every step accepts and returns the same context type), and a complete record of the workflow's state at any point, including the moment of failure.
The promise of function.do isn't just about writing simple serverless functions; it's about composing them into powerful, resilient, and intelligent business services. By moving beyond simple function chaining and embracing advanced patterns, you can tackle real-world complexity with confidence.
By leveraging these patterns, you transform your collection of atomic functions into a sophisticated Business Logic API, ready to power the core of your applications.
Ready to build your first resilient workflow? Start building on function.do today!