Introduction

Welcome back, fellow developer! In our previous project, we built a modern full-stack web application, laying the groundwork for how frontend and backend services interact on Void Cloud. Now, we’re going to dive into one of the most exciting and in-demand areas of modern development: Artificial Intelligence (AI).

This chapter focuses on building a scalable, AI-powered API using Void Cloud. Imagine an API that can summarize articles, translate text, or even generate creative content—all powered by advanced AI models. We’ll learn how to integrate an AI service into a Void Cloud function, ensuring it’s both secure and capable of handling high traffic with Void Cloud’s inherent scalability. This project is crucial because it demonstrates how to leverage serverless functions for computationally intensive tasks like AI inference, without worrying about infrastructure.

By the end of this chapter, you’ll have a fully deployed AI API, understand the architectural patterns for integrating AI, and be confident in building and scaling such services on Void Cloud. You’ll also gain practical experience in managing secrets for third-party services and observing your API’s behavior in production.

Prerequisites: Before we start, please ensure you’ve completed the previous chapters, especially those covering:

  • Void Cloud CLI installation and basic usage.
  • Deploying serverless functions.
  • Managing environment variables and secrets on Void Cloud.
  • Basic understanding of Node.js and TypeScript.

Ready to bring some intelligence to the cloud? Let’s begin!

Core Concepts: Architecting an AI-Powered API on Void Cloud

Building an AI-powered API isn’t just about calling an AI model; it’s about designing a robust, scalable, and secure system. Void Cloud provides an excellent foundation for this. Let’s explore the core concepts that underpin our project.

The AI API Architecture Flow

When a user interacts with our AI API, several steps occur behind the scenes. Understanding this flow is key to designing and debugging our service.

  1. Client Request: A user (or another application) sends an HTTP request (e.g., POST) to our API endpoint.
  2. Void Edge Network: The request first hits Void Cloud’s global edge network. This network efficiently routes the request to the nearest available Void Function instance.
  3. Void Function Execution: A Void Function (our serverless API handler written in TypeScript) is invoked. If it’s the first request in a while, there might be a “cold start” as the function environment initializes, but Void Cloud actively works to minimize this.
  4. Secure AI Service Integration: Inside our Void Function, we use an API key (securely stored as a Void Cloud secret) to authenticate with an external AI service (e.g., a hypothetical Void AI Service or a third-party LLM provider like OpenAI).
  5. AI Inference: The Void Function sends the user’s prompt (e.g., text to summarize) to the AI service. The AI service processes it and returns the generated output.
  6. Response to Client: Our Void Function receives the AI’s response, processes it if necessary, and then sends it back to the original client.

Here’s a simplified diagram of this flow:

    flowchart TD
        A[Client Application] --> B(Void Cloud Edge Network)
        B --> C[Void Function: AI API Handler]
        C -->|Secure API Call| D[External AI Service]
        D -->|AI Generated Output| C
        C --> B
        B --> A

Figure 16.1: High-level architecture of an AI-powered API on Void Cloud.

The Serverless Advantage for AI Workloads

Why is Void Cloud’s serverless model particularly well-suited for AI APIs?

  • Automatic Scalability: AI inference can be spiky. One moment, you have no requests; the next, you have thousands. Void Cloud automatically scales your functions up and down based on demand, provisioning more instances when needed and scaling to zero when idle. This means you only pay for the compute time your function actually uses.
  • Reduced Operational Overhead: You don’t manage servers, operating systems, or runtime environments. Void Cloud handles all the underlying infrastructure, allowing you to focus purely on your code and the AI integration.
  • Cost Efficiency: With pay-per-execution billing, you avoid the cost of idle servers, which is common in traditional deployments. This is especially beneficial for services that might have unpredictable usage patterns.
  • Global Distribution: Void Cloud’s edge network helps reduce latency by executing functions closer to your users, improving the responsiveness of your AI API.

Integrating AI Services Securely

When interacting with external AI services, you’ll almost always need an API key for authentication and billing. Exposing these keys directly in your code or committing them to your repository is a major security risk. Void Cloud provides a robust solution: Secrets Management.

  • Environment Variables: Void Cloud allows you to define environment variables for your functions. These can be plain text or, for sensitive information, secrets.
  • Secrets: Secrets are encrypted values managed by Void Cloud. They are injected into your function’s runtime environment as environment variables but are never exposed in logs, build processes, or configuration files. This is the only safe way to handle API keys, database credentials, and other sensitive data.

We’ll use Void Cloud secrets to store our hypothetical VOID_AI_API_KEY securely.

Choosing Your AI Model and Service

For this project, we’ll assume we’re interacting with a generic “Void AI Service” that offers text generation capabilities. In a real-world scenario, you might choose from:

  • Large Language Models (LLMs): Like OpenAI’s GPT series, Google’s Gemini, or open-source alternatives hosted on platforms like Hugging Face.
  • Specialized AI Services: For tasks like image recognition, sentiment analysis, or speech-to-text.
  • Void Cloud’s Own AI Offerings (Hypothetical): Many cloud providers offer integrated AI services. We’ll simulate this with a simple placeholder.

The principles of integration remain largely the same: make an HTTP request (or use an SDK) to the AI service, pass your input, and process its output.
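To make that concrete, here is a small TypeScript sketch of the request-building step. The field names (model, max_tokens, temperature) follow the OpenAI-style convention this chapter assumes for its hypothetical Void AI Service; they are not a documented contract, so adapt them to your actual provider.

```typescript
// Hypothetical shape of a text-generation request payload; field names are
// assumptions modeled on common LLM APIs, not an official Void AI schema.
interface AiRequestOptions {
  model: string;
  prompt: string;
  max_tokens: number;
  temperature: number;
}

// Build the fetch options for a generic AI text-generation call. The same
// pattern works whether the endpoint is OpenAI, a self-hosted model, or a
// cloud provider's AI service: JSON payload in, Bearer token for auth.
function buildAiRequest(
  apiKey: string,
  prompt: string,
  maxTokens = 100,
): { method: string; headers: Record<string, string>; body: string } {
  const payload: AiRequestOptions = {
    model: 'void-llm-v5', // hypothetical model name used throughout this chapter
    prompt,
    max_tokens: maxTokens,
    temperature: 0.7,
  };
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`, // standard Bearer-token scheme
    },
    body: JSON.stringify(payload),
  };
}
```

Separating "build the request" from "send the request" like this also makes the logic easy to unit-test without any network access.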

Ready to get our hands dirty? Let’s start coding!

Step-by-Step Implementation: Building Our AI API

We’ll build a simple API that takes a text prompt and returns a generated response from a hypothetical AI service.

1. Setting Up Your Project

First, let’s create a new directory for our project and initialize it with Void Cloud and Node.js.

  1. Create Project Directory:

    mkdir void-ai-api
    cd void-ai-api
    
  2. Initialize Node.js Project:

    npm init -y
    

    This creates a package.json file with default values.
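Optionally, you can add a couple of npm scripts to the generated package.json now, so you can build and type-check without remembering the raw commands. The script names here are just conventions, nothing Void Cloud requires:

```json
{
  "scripts": {
    "build": "tsc",
    "typecheck": "tsc --noEmit"
  }
}
```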

  3. Install Dependencies: We’ll need node-fetch to make HTTP requests to our AI service, plus TypeScript and type definitions as development dependencies. Note that node-fetch v3 is ESM-only, so we pin v2, which works with the CommonJS output we’ll configure below. (On Node.js 18+, you could also skip node-fetch entirely and use the built-in global fetch.)

    npm install node-fetch@2
    npm install -D typescript @types/node @types/node-fetch
    
    • node-fetch@2: A lightweight module that brings the fetch API to Node.js; the v2 line supports CommonJS.
    • typescript: The TypeScript compiler (a dev dependency, since it’s only needed at build time).
    • @types/node and @types/node-fetch: Type definitions for Node.js and node-fetch, essential for TypeScript.
  4. Initialize TypeScript:

    npx tsc --init
    

    This creates a tsconfig.json file. Let’s adjust a few settings in tsconfig.json to better suit our serverless function:

    // tsconfig.json
    {
      "compilerOptions": {
        "target": "es2020",             /* Specify ECMAScript target version */
        "module": "commonjs",           /* Specify module code generation */
        "outDir": "./dist",             /* Redirect output structure to the directory */
        "rootDir": "./src",             /* Specify the root directory of source files */
        "strict": true,                 /* Enable all strict type-checking options */
        "esModuleInterop": true,        /* Enables emit interoperability between CommonJS and ES Modules */
        "skipLibCheck": true,           /* Skip type checking all .d.ts files */
        "forceConsistentCasingInFileNames": true /* Ensure that casing is correct in imports */
      },
      "include": ["src/**/*.ts"],       /* Include all .ts files in the src directory */
      "exclude": ["node_modules"]       /* Exclude node_modules */
    }
    
    • target: Sets the JavaScript version for compilation. es2020 is a good modern target.
    • module: Specifies the module system. commonjs is standard for Node.js.
    • outDir: Where compiled JavaScript files will go.
    • rootDir: Where our source TypeScript files are located.
    • strict: Enables a suite of strict type-checking options, highly recommended for robust code.
    • esModuleInterop: Important for interoperability between CommonJS and ES Modules, especially with libraries like node-fetch.
  5. Create Source Directory:

    mkdir src
    mkdir src/api
    

    We’ll put our API function inside src/api.

  6. Initialize Void Cloud Project:

    void init
    

    Follow the prompts. Choose “Serverless Function” as the project type. This will create a void.json file.

    Let’s modify our void.json to explicitly define our API endpoint. Open void.json and add the routes section. (The // comments in the listing below are explanatory only; strict JSON does not allow comments, so leave them out of your actual file.)

    // void.json (Void Cloud Configuration, assumed version 2.2.0 as of 2026-03-14)
    {
      "name": "void-ai-api",
      "version": "2.2.0",
      "build": {
        "command": "npx tsc",
        "outputDirectory": "dist"
      },
      "functions": {
        "api-handler": {
          "runtime": "nodejs20.x", // Assuming Node.js 20.x is the latest stable LTS for Void Cloud in 2026
          "handler": "dist/api/generate.handler",
          "memory": 512, // Allocate 512MB memory for AI tasks
          "timeout": 30 // Allow up to 30 seconds for AI inference
        }
      },
      "routes": [
        {
          "path": "/api/generate",
          "function": "api-handler",
          "methods": ["POST"]
        }
      ]
    }
    
    • name: Your project’s name.
    • version: The Void Cloud configuration version (hypothetically 2.2.0).
    • build: Tells Void Cloud how to build your project. npx tsc compiles our TypeScript, and outputDirectory specifies where the compiled JavaScript lands.
    • functions: Defines our serverless functions.
      • api-handler: The logical name for our function.
      • runtime: The Node.js runtime environment. We’ll use nodejs20.x, assuming Node.js 20 LTS is the current stable for serverless platforms in 2026.
      • handler: The entry point for our function, dist/api/generate.handler means the handler export from dist/api/generate.js.
      • memory: Allocated memory for the function. AI tasks can be memory-intensive, so 512MB is a reasonable starting point.
      • timeout: Maximum execution time in seconds. AI inference can take a few seconds, so 30s is safer than the default.
    • routes: Maps incoming HTTP requests to our functions. Here, POST /api/generate will trigger our api-handler function.
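A hand-edited config like this is easy to get subtly wrong, and a malformed void.json is a common deploy failure. As a pre-deploy sanity check, here is a small TypeScript sketch; the VoidConfig shape simply mirrors the hypothetical schema above and is not an official Void Cloud type. It catches the two most common mistakes: a route pointing at an undefined function, and a handler path outside the build output directory.

```typescript
// Mirrors the (hypothetical) void.json schema used in this chapter.
interface VoidConfig {
  build: { outputDirectory: string };
  functions: Record<string, { handler: string }>;
  routes: { path: string; function: string }[];
}

// Return a list of human-readable problems; an empty array means the
// cross-references in the config are consistent.
function validateVoidConfig(config: VoidConfig): string[] {
  const errors: string[] = [];
  // Every route must reference a function that is actually defined.
  for (const route of config.routes) {
    if (!(route.function in config.functions)) {
      errors.push(`Route ${route.path} references undefined function "${route.function}"`);
    }
  }
  // Every handler must live under the compiled output directory.
  for (const [name, fn] of Object.entries(config.functions)) {
    if (!fn.handler.startsWith(config.build.outputDirectory + '/')) {
      errors.push(`Function "${name}" handler "${fn.handler}" is outside "${config.build.outputDirectory}"`);
    }
  }
  return errors;
}
```

You could run this in a small pre-deploy script after JSON.parse-ing the file, or just keep it as a mental checklist.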

2. Designing the API Endpoint and AI Integration

Now, let’s write the TypeScript code for our api-handler function. This function will receive a request, call our hypothetical AI service, and return the AI’s response.

Create a new file src/api/generate.ts:

// src/api/generate.ts

import fetch from 'node-fetch'; // We'll use node-fetch for HTTP requests

// Define a simple interface for our expected request body
interface GenerateRequest {
  prompt: string;
  maxTokens?: number; // Optional, for controlling AI output length
}

// Define a simple interface for our expected AI service response
interface AiServiceResponse {
  id: string;
  generatedText: string;
  model: string;
  usage: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
}

// The main handler function for our Void Cloud Serverless Function
// Void Cloud functions typically receive a request object and return a response object.
export const handler = async (
  event: { body: string | null; headers: Record<string, string> }
): Promise<{ statusCode: number; headers: Record<string, string>; body: string }> => {
  // 1. Log the incoming request (useful for debugging)
  console.log('Received request for AI generation:', event.headers);

  // 2. Parse the request body
  let requestBody: GenerateRequest;
  try {
    if (!event.body) {
      throw new Error('Request body is missing.');
    }
    requestBody = JSON.parse(event.body);
  } catch (error) {
    console.error('Error parsing request body:', error);
    return {
      statusCode: 400,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message: 'Invalid JSON body or missing prompt.', error: (error as Error).message }),
    };
  }

  const { prompt, maxTokens = 100 } = requestBody;

  if (!prompt || typeof prompt !== 'string') {
    return {
      statusCode: 400,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message: 'Prompt is required and must be a string.' }),
    };
  }

  // 3. Retrieve the AI service API key securely from environment variables
  const voidAiApiKey = process.env.VOID_AI_API_KEY;

  if (!voidAiApiKey) {
    console.error('VOID_AI_API_KEY is not set. Cannot call AI service.');
    return {
      statusCode: 500,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message: 'Server configuration error: AI API key missing.' }),
    };
  }

  // 4. Call the hypothetical external AI service
  const aiServiceUrl = 'https://api.voidai.com/v1/generate'; // Hypothetical AI service URL

  try {
    console.log(`Calling AI service with prompt: "${prompt.substring(0, 50)}..."`);
    const aiResponse = await fetch(aiServiceUrl, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${voidAiApiKey}`, // Securely pass the API key
      },
      body: JSON.stringify({
        model: 'void-llm-v5', // Hypothetical latest AI model
        prompt: prompt,
        max_tokens: maxTokens,
        temperature: 0.7, // Creativity level
      }),
    });

    if (!aiResponse.ok) {
      // Read the error body as text: failed responses aren't guaranteed to be valid JSON.
      const errorText = await aiResponse.text();
      console.error('AI service error:', aiResponse.status, errorText);
      return {
        statusCode: aiResponse.status,
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message: 'Failed to get response from AI service.', details: errorText }),
      };
    }

    const aiData = (await aiResponse.json()) as AiServiceResponse;
    console.log('Successfully received AI response.');

    // 5. Return the AI's generated text as the API response
    return {
      statusCode: 200,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        generatedText: aiData.generatedText,
        modelUsed: aiData.model,
        usage: aiData.usage,
      }),
    };
  } catch (error) {
    console.error('Error during AI service call:', error);
    return {
      statusCode: 500,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message: 'Internal server error during AI processing.', error: (error as Error).message }),
    };
  }
};

Let’s break down this code:

  • import fetch from 'node-fetch';: We import the fetch function to make HTTP requests, similar to what you’d use in a browser.
  • Interfaces (GenerateRequest, AiServiceResponse): These help us define the expected structure of our API’s input and the hypothetical AI service’s output, making our TypeScript code type-safe and easier to read.
  • export const handler = async (event) => { ... };: This is the core of our serverless function. Void Cloud (like most serverless platforms) expects a handler function that takes an event object (representing the incoming HTTP request) and returns a Promise resolving to a response object.
    • The event object typically contains body (the request payload) and headers.
    • The returned object must have statusCode, headers, and body.
  • Request Body Parsing: We safely parse the event.body as JSON. If it’s missing or invalid, we return a 400 Bad Request error. We also validate that the prompt field is present and a string.
  • Secure API Key Retrieval: process.env.VOID_AI_API_KEY is how we access environment variables. Crucially, this key will be injected by Void Cloud as a secret at runtime, never visible in our source code. We check if it’s present and return a 500 Internal Server Error if not.
  • fetch to AI Service:
    • We define a hypothetical aiServiceUrl. In a real scenario, this would be the actual endpoint for your chosen AI provider.
    • We send a POST request with the prompt and other parameters (like model, max_tokens, temperature).
    • Authorization: Bearer ${voidAiApiKey}: This is how we securely pass our API key to the AI service. The Bearer token scheme is a common standard for API authentication.
    • Error Handling: We check aiResponse.ok to see if the AI service returned a successful status (2xx). If not, we log the error and return an appropriate status code to the client.
  • Response Handling: If the AI service responds successfully, we parse its JSON output and return the generatedText (and other relevant info) back to our API client.
  • try...catch Blocks: Essential for robust error handling, catching any network issues or unexpected problems during the AI service call.

3. Local Testing

Before deploying, let’s test our API locally using the Void Cloud CLI’s development server.

  1. Start the Local Development Server: Make sure you are in the void-ai-api project root.

    void dev
    

    The CLI will compile your TypeScript code, start a local server, and provide you with a URL, typically http://localhost:3000.

  2. Test with curl: Open a new terminal window and send a POST request to your local API.

    curl -X POST \
         -H "Content-Type: application/json" \
         -d '{"prompt": "Write a short, inspiring quote about the future of AI and humanity.", "maxTokens": 50}' \
         http://localhost:3000/api/generate
    

    What to expect:

    • Initially, you’ll likely see a 500 Internal Server Error because VOID_AI_API_KEY is not set in your local environment. This is expected and good, as it means our security check is working!
    • The local void dev server will print console logs from your function, showing the error.

    To simulate the AI service locally (optional but good practice): For proper local testing, you’d ideally mock the AI service or set a dummy VOID_AI_API_KEY that points to a mock server. For this tutorial, we’ll proceed assuming the AI service will work on deployment.

    Let’s temporarily bypass the AI key check, for local testing only, to see the rest of the flow. (If your shell supports it, a cleaner alternative is to export a dummy key before starting the dev server, e.g. VOID_AI_API_KEY=dummy void dev, and skip the code change entirely.)

    • In src/api/generate.ts, temporarily comment out the if (!voidAiApiKey) block:
      // if (!voidAiApiKey) {
      //   console.error('VOID_AI_API_KEY is not set. Cannot call AI service.');
      //   return {
      //     statusCode: 500,
      //     headers: { 'Content-Type': 'application/json' },
      //     body: JSON.stringify({ message: 'Server configuration error: AI API key missing.' }),
      //   };
      // }
      
    • Now, when you run curl, you’ll likely get an error from node-fetch trying to connect to https://api.voidai.com/v1/generate (which is a placeholder). This confirms your function is running and attempting to reach the AI service.
    • IMPORTANT: Remember to uncomment this block before deploying! Security is paramount.
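If you’d rather keep the key check in place and exercise the full request flow locally, a throwaway mock of the hypothetical AI service takes only a few lines. This sketch assumes you temporarily point aiServiceUrl at localhost while testing; the port, ids, and token counts are arbitrary, and the response simply mirrors the AiServiceResponse shape from generate.ts.

```typescript
// A disposable local stand-in for the hypothetical Void AI Service.
// Run it, point aiServiceUrl at http://localhost:4545/v1/generate, and set
// VOID_AI_API_KEY to any dummy value (the mock never checks it).
import http from 'node:http';

function startMockAiServer(port = 4545): http.Server {
  const server = http.createServer((req, res) => {
    let raw = '';
    req.on('data', (chunk) => (raw += chunk));
    req.on('end', () => {
      const { prompt = '' } = JSON.parse(raw || '{}');
      // Echo a canned answer in the AiServiceResponse shape from generate.ts.
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(
        JSON.stringify({
          id: 'mock-1',
          generatedText: `[mock] You said: ${prompt}`,
          model: 'void-llm-v5-mock',
          usage: { promptTokens: 1, completionTokens: 1, totalTokens: 2 },
        }),
      );
    });
  });
  server.listen(port);
  return server;
}
```

Remember to switch aiServiceUrl back to the real endpoint before deploying, just like the key check.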

4. Deploying to Void Cloud

Now, let’s get our AI API live on Void Cloud! This involves setting up our secret and then deploying.

  1. Set Your AI Service API Key as a Void Cloud Secret: You’ll need a real API key from your chosen AI service (e.g., OpenAI, Google Cloud AI, or a hypothetical Void AI Service key). For demonstration, let’s assume you have a key that looks like sk-voidai-your_secret_key_here.

    void secrets set VOID_AI_API_KEY "sk-voidai-your_secret_key_here"
    
    • Replace "sk-voidai-your_secret_key_here" with your actual AI service API key.
    • Void Cloud securely encrypts this value and makes it available as process.env.VOID_AI_API_KEY to your deployed functions. It will not be visible in your dashboard or logs.
  2. Deploy Your Project: Ensure you’ve uncommented the if (!voidAiApiKey) check in src/api/generate.ts!

    void deploy
    

    The Void Cloud CLI will:

    • Compile your TypeScript code (npx tsc).
    • Bundle your compiled JavaScript and node_modules.
    • Upload the bundle to Void Cloud.
    • Provision your api-handler function.
    • Map the /api/generate route to your function.
    • Provide you with a public URL for your deployed API (e.g., https://void-ai-api-yourusername.void.app).

5. Testing the Live Deployment

Once deployment is complete, you’ll receive a URL. Let’s test it!

  1. Get Your Deployment URL: The void deploy command will output something like Your project is deployed at: https://void-ai-api-yourusername.void.app. Copy this URL.

  2. Test with curl (using the live URL):

    curl -X POST \
         -H "Content-Type: application/json" \
         -d '{"prompt": "Explain the concept of serverless cold starts in one sentence."}' \
         YOUR_DEPLOYMENT_URL/api/generate
    

    Replace YOUR_DEPLOYMENT_URL with the actual URL provided by void deploy.

    Expected Output: You should receive a JSON response similar to this (the generatedText will vary based on the AI model):

    {
      "generatedText": "Serverless cold starts occur when a function is invoked after a period of inactivity, requiring the platform to initialize its execution environment before processing the request.",
      "modelUsed": "void-llm-v5",
      "usage": {
        "promptTokens": 14,
        "completionTokens": 30,
        "totalTokens": 44
      }
    }
    
    • If you get an error, check the void logs command (see troubleshooting below) and ensure your VOID_AI_API_KEY secret was set correctly.

Congratulations! You’ve just built and deployed a scalable, AI-powered API on Void Cloud.

Mini-Challenge: Extend Your AI API

Now that you have a working AI generation API, let’s add another feature.

Challenge: Implement a new API endpoint, /api/translate, that takes a text and targetLanguage and returns a translated version of the text.

Requirements:

  1. Create a new TypeScript file, e.g., src/api/translate.ts.
  2. Modify void.json to add a new route /api/translate that points to this new function.
  3. Inside the translate.ts function, reuse the pattern for calling the AI service, but adjust the prompt to instruct the AI to perform a translation.
    • Example prompt: Translate the following English text to French: "Hello, world!"
    • You can still use the same https://api.voidai.com/v1/generate endpoint, just change the prompt.
  4. Deploy your updated project.
  5. Test the new /api/translate endpoint using curl.

Hint:

  • Your translate.ts handler will look very similar to generate.ts.
  • Remember to add the new function and route definition to your void.json file. The functions block can have multiple entries, and the routes block can have multiple route definitions.
  • You might need to adjust the handler path for your new function. For example, dist/api/translate.handler.
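One way to sketch the prompt-construction piece of the challenge is a small helper like the one below. The exact wording is an assumption; instruction-following quality varies by model, so tune the phrasing for whichever AI service you actually use.

```typescript
// Wrap the user's text in a translation instruction so the generic
// text-generation endpoint can be reused for translation. The prompt
// template here is illustrative, not a documented best practice.
function buildTranslationPrompt(text: string, targetLanguage: string): string {
  return (
    `Translate the following text to ${targetLanguage}. ` +
    `Return only the translation, with no commentary:\n\n"${text}"`
  );
}
```

Your translate.ts handler would validate text and targetLanguage from the request body (mirroring the prompt check in generate.ts), then send buildTranslationPrompt(text, targetLanguage) as the prompt field.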

What to Observe/Learn:

  • How easy it is to extend your API with new serverless functions.
  • How Void Cloud manages multiple functions within a single project.
  • The flexibility of using a single generic AI model for multiple tasks by simply changing the prompt.

Common Pitfalls & Troubleshooting

Working with serverless functions and external APIs can sometimes lead to unexpected issues. Here are some common pitfalls and how to troubleshoot them:

  1. Missing or Incorrect VOID_AI_API_KEY Secret:

    • Symptom: Your deployed function returns a 500 Internal Server Error with a message like “Server configuration error: AI API key missing.”
    • Fix: Double-check that you ran void secrets set VOID_AI_API_KEY "YOUR_KEY" with the correct key. You can verify if the secret is set (but not its value) using void secrets list. If it’s incorrect, run void secrets set again. Remember secrets are tied to specific projects and environments.
  2. node-fetch or AI Service Connection Issues:

    • Symptom: Your function times out or returns a 500 Internal Server Error with a message like “Failed to get response from AI service.” or “Error during AI service call: TypeError: fetch failed”.
    • Fix:
      • Check AI Service Status: Is the external AI service (e.g., api.voidai.com) actually up and running? Check their status page if available.
      • Network Access: Ensure your Void Cloud function has outbound network access (which it typically does by default).
      • AI Key Validity: Is your AI key valid and has enough credits/quota with the AI provider?
      • Timeout: If the AI service is slow, your Void Cloud function might time out. Increase the timeout value in void.json for your function (e.g., from 30 to 60 seconds).
  3. Invalid void.json Configuration:

    • Symptom: void deploy fails with a configuration error, or your deployed function gives a 404 Not Found for the route.
    • Fix:
      • Syntax: Carefully check your void.json for any JSON syntax errors (missing commas, extra brackets).
      • handler Path: Ensure the handler path in void.json correctly points to your compiled JavaScript file and exported function (e.g., dist/api/generate.handler means dist/api/generate.js and exports.handler).
      • build Configuration: Verify build.command and build.outputDirectory are correct so Void Cloud can find your compiled code.
  4. Cold Starts Affecting Initial Response Time:

    • Symptom: The very first request to your API after a period of inactivity takes significantly longer (e.g., 2-5 seconds) than subsequent requests.
    • Explanation: This is a characteristic of serverless functions. Void Cloud needs to initialize a new execution environment.
    • Mitigation (Void Cloud specifics): Void Cloud employs various techniques to minimize cold starts (e.g., pre-warming instances, optimizing runtime startup). For critical production APIs, you might explore “provisioned concurrency” or “minimum instances” if Void Cloud offers such features (common in serverless platforms) to keep instances warm. For this project, observe it, but don’t worry too much about fixing it unless performance becomes a critical bottleneck.
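For pitfall 2 specifically, an application-level timeout shorter than the platform one lets you return a clear error to the client instead of a generic gateway timeout. Here is a minimal, runtime-agnostic sketch using Promise.race; withTimeout is my own helper, not a Void Cloud API, and for fetch specifically you could also pass an AbortController signal if your fetch implementation supports it.

```typescript
// Race a promise (e.g., the AI service fetch) against a timer. If the timer
// wins, reject with a descriptive error the handler can turn into a 504-style
// response; either way, the timer is cleaned up afterwards.
async function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`AI call timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```

In generate.ts you would wrap the call as withTimeout(fetch(aiServiceUrl, ...), 25000), keeping it comfortably under the 30-second function timeout configured in void.json.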

Using void logs: When troubleshooting, the void logs command is your best friend.

void logs <function-name> --follow

Replace <function-name> with the name of your function from void.json (e.g., api-handler). The --follow flag will stream logs in real-time, which is incredibly useful during debugging. Look for console.log messages and console.error outputs from your function.

Summary

Phew! You’ve successfully navigated the complexities of building and deploying a scalable, AI-powered API on Void Cloud.

Here are the key takeaways from this chapter:

  • AI API Architecture: You understand the end-to-end flow from client request through Void Cloud’s edge, serverless function execution, secure AI service integration, and back to the client.
  • Serverless for AI: Void Cloud’s serverless functions are ideal for AI workloads due to automatic scalability, cost efficiency, and reduced operational overhead.
  • Secure Secrets Management: You learned the critical importance of using Void Cloud secrets (like VOID_AI_API_KEY) to protect sensitive credentials, ensuring they are never exposed in your code or config files.
  • Void Cloud Configuration: You configured void.json to define your function’s runtime, memory, timeout, build process, and API routes.
  • Hands-on Implementation: You wrote a TypeScript serverless function that parses requests, makes authenticated calls to a hypothetical external AI service, and returns structured responses.
  • Local and Cloud Deployment: You practiced local development with void dev and deployed your API to the Void Cloud production environment using void deploy.
  • Troubleshooting: You gained insight into common issues like missing secrets, service connection problems, configuration errors, and understanding cold starts.

This project empowers you to integrate intelligent capabilities into your applications with confidence, leveraging the power and scalability of Void Cloud.

What’s Next?

In the upcoming chapters, we’ll continue to build on this foundation. We might explore:

  • Adding a database to persist AI-generated content or user prompts.
  • Implementing authentication and authorization for our API.
  • Advanced monitoring and observability techniques to gain deeper insights into our API’s performance and usage.
  • Exploring more complex AI patterns like real-time streaming or integrating multiple AI models.

Keep experimenting, keep learning, and keep building amazing things with Void Cloud!

