Maximal Instruction Prompting: a strategy for software development with LLMs

I've been exploring new ideas to see how we might use Large Language Models (LLMs) to improve software development. This article introduces a new concept, Maximal Instruction Prompting (MIP), as a strategy for software development with LLMs.

I've used this approach for several projects, and the concept continues to evolve with each new use. However, even in these early stages, I've seen great results.

This article describes the MIP strategy and the benefits it can offer. It provides more detailed explanations and ends with an example of an application built this way.

The MIP strategy

LLMs are incredible tools. Like all tools, though, getting good results requires us to use them in the right way. The idea behind Maximal Instruction Prompting (MIP) is to provide a more effective way to use LLMs for software development.

The MIP strategy comes from observing the problems that arise when using LLMs in software development. Most developers use LLMs to accelerate coding tasks, but we should be able to do far more.

Summary of MIP

MIP is an approach for capturing detailed natural language requirements to let an LLM build matching software source code.

These are the core ideas:

  • Describe all the requirements that our software must meet.
  • Capture inputs from all perspectives.
  • Describe all requirements comprehensively, and exactly once.
  • Use examples that clarify requirements.

None of these ideas are new, but we can combine them to great effect.

Advantages of MIP

For MIP to be valuable, it must offer significant practical advantages over any alternatives. As a couple of my more commercially minded friends might say, "Show us the money!"

Here are some advantages:

  • MIP describes the software's functionality rather than how we'd like to evolve it. This makes it easy for an LLM and a human design team to understand. Most software development methodologies focus on evolution and any resulting software rapidly becomes defined by the implementation rather than the intended design.
  • It focuses on clarity so an LLM can generate software that meets those requirements.
  • It makes it easy to try out new ideas and variations. Even small changes in requirements might lead to significant changes in implementation, but the MIP approach allows an LLM to make these sorts of changes quickly and cheaply.
  • An LLM can leverage a MIP description to help create tests and user documentation. [Aside: both of these are areas I will be looking into over the coming months]
  • Automated building of software makes it easy to change technologies and implementations. For example, we might prototype in one language and then create a production implementation in another!
  • We can build more robust systems because MIP makes building multiple versions of the same software inexpensive. For example, we could imagine building various versions of some library function to prevent downstream developers from inadvertently using unintended implementation details.

Limitations of MIP

MIP cannot solve all the problems with LLMs but does provide some mitigations:

  • LLMs intentionally create random and surprising outputs at times. This probabilistic approach is a strength in many instances but also causes LLMs to hallucinate. If a hallucination occurs in the wrong place, we might get faulty software. MIP makes it easy for us to build different software implementations should this happen, and discarding broken versions is inexpensive. So, while MIP doesn't eliminate the problem of LLMs hallucinating, it can reduce the impact.
  • Even when given clear and specific instructions, an LLM will often ignore some. We can mitigate this by walking through the explicit requirements to ask the LLM to demonstrate how it met each one. We must solve this with an iterative dialogue, but the LLM can rapidly correct its mistakes.
  • This approach does not eliminate the need for a dialogue with the LLM but advances the conversation quickly while maintaining a clear focus on an outcome.
  • Prompt complexity is subject to the same limits as dialogue-based approaches. LLMs have a limited context window, so this approach only works if we can keep our working data set inside it. The approach can still work for larger software designs, but we must break them into smaller components and build each one separately.

MIP in detail

Describe all the requirements

Like most tools, LLMs are inherently passive, doing nothing until we set them into action. When we wish to use them to help us build software, we use prompts to guide the LLM towards what we want to see.

For example, in "Can my tools build tools? Pre-rendering web pages with help from ChatGPT", I used an interactive approach to build new software far faster than I would have done previously. I've used this approach to develop the core of several other software components, including the syntax highlighting engine used in both that and this article.

While interactive engagement with LLMs is one of their greatest strengths, it has a dark side. LLMs excel at holding conversations, and we can be seduced into spending hours discussing minor details while ignoring substantive issues that might render those other conversations irrelevant (bikeshedding-as-a-service). While a human counterpart might attempt to pull the discussion back on track, an LLM will not.

A human might also ask questions during a conversation to better understand some task, but an LLM will only do this if asked. Ambiguity offers an LLM yet more scope to descend into hallucinatory rabbit holes as it tries to fill any void with creative, irrelevant, or plain wrong content.

"LLMs are useless", we might cry. Still, we know that once we have two or more people working on a software design, we must plan to collaborate. Our team must maintain a clear and shared view of what they will build. With an LLM, we must recognize that we always have at least two team members.

As with an all-human team, we can maintain alignment by crafting a long-form description of our goals. The same works with an LLM. We can present detailed written requirements to the LLM, giving it a clearer picture of what we want it to achieve.

Capture inputs from all perspectives

In conventional software development, we recognize the value of many different skills and perspectives. We require product definition and vision aligned with commercial goals. We need developers who are experts in all relevant technologies. We need to plan for testing. We may need experts in UX and UI, security, operational deployment, etc. Each role can provide more details about what we're looking to build.

With an LLM, we're looking to outsource the implementation of some or all of these elements, so we need to give our LLM a clear understanding of each one. Even if we want the LLM to focus solely on the coding aspect of software development, our LLM must understand these other motivations.

Consider a testability requirement, for instance. We might emphasize the need for our software to support 90%+ automated test coverage. This requirement could lead to an implementation that allows for easier mocking. We must explicitly state this to avoid our LLM taking a different approach. Similarly, providing examples of specific user journeys or behavioural requirements can guide the LLM towards unique and innovative solutions.
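To illustrate, here is a minimal sketch of how a testability requirement can push an implementation towards easier mocking. The names `PageFetcher` and `SitemapReader` are hypothetical, invented for this example rather than taken from any real project:

```typescript
// Hypothetical sketch: a requirement such as "support 90%+ automated test
// coverage" nudges the design towards injectable dependencies.
interface PageFetcher {
    fetch(url: string): Promise<string>;
}

class SitemapReader {
    // Taking the fetcher via the constructor lets tests substitute a stub.
    constructor(private fetcher: PageFetcher) {}

    async readUrls(sitemapUrl: string): Promise<string[]> {
        const xml = await this.fetcher.fetch(sitemapUrl);

        // Naive <loc> extraction; enough for the sketch.
        const urls: string[] = [];
        const re = /<loc>(.*?)<\/loc>/g;
        let match: RegExpExecArray | null;
        while ((match = re.exec(xml)) !== null) {
            urls.push(match[1]);
        }

        return urls;
    }
}

// A test needs no network access at all:
const stubFetcher: PageFetcher = {
    fetch: async () => '<urlset><url><loc>https://example.com/a</loc></url></urlset>',
};
```

With the stub in place, `new SitemapReader(stubFetcher).readUrls(...)` can be exercised entirely offline, which is exactly the property the testability requirement is asking for.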

It is worth noting that even though our LLM may do most of the coding, we still need considerable software engineering expertise. In addition to providing an expert review of an LLM's output, we must capture important technology choices and implementation strategies as requirements.

Describe all requirements comprehensively, and exactly once

A significant cause of confusion and error within human software development teams is a misalignment of expectations about how software should work. Often, this arises from requirements that overlap, are ambiguous, or, even worse, are contradictory.

If such things confuse humans, we can expect them to confuse LLMs too. Unlike humans, however, LLMs won't tell us they're confused. Instead, they'll do unexpected things. To address this, we can take inspiration from Barbara Minto's MECE (mutually exclusive, collectively exhaustive) principle.

Given a list of requirements, MECE aims to drive out ambiguity and contradiction:

  • Mutually exclusive: the list's members should exclude each other, i.e. be distinct from each other.
  • Collectively exhaustive: the list's members should exhaust the relevant field, i.e. contain everything that belongs on the list.

By carefully enumerating requirements, we give our LLM clarity and make it far easier for our human designers to understand the details of the software they're shaping.

Use examples that clarify requirements

A conventional approach in software product development is to provide "acceptance criteria". These are specific examples of how the completed software should behave. Such examples allow developers and testers to check their understanding of how the finished software should operate.

These examples do not replace the requirements but clarify them.

This approach also works for LLMs. By providing examples of use, an LLM can more readily verify that any software it produces meets the expectation of whoever provided the prompt.
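As a sketch of what this looks like in practice, an acceptance criterion such as "the URL https://example.com/blog/post should be saved as &lt;output&gt;/blog/post/index.html" maps directly to a check that either the LLM or a human can run. The `urlToFilePath` helper below is illustrative only:

```typescript
import path from 'path';

// Illustrative helper, not production code: map a page URL to an output file
// path, treating extension-less paths as directories containing index.html.
function urlToFilePath(url: string, outputDir: string): string {
    const pathname = new URL(url).pathname;
    const withIndex = path.extname(pathname)
        ? pathname
        : path.posix.join(pathname, 'index.html');

    return path.join(outputDir, withIndex);
}

// The acceptance example, expressed as a check:
if (urlToFilePath('https://example.com/blog/post', 'out') !==
        path.join('out', 'blog', 'post', 'index.html')) {
    throw new Error('acceptance example not met');
}
```

The requirement still carries the intent; the example simply pins down one concrete behaviour against which both parties can verify their understanding.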

A practical example

Let's look at an example.

In "Can my tools build tools?", I posed the problem of building a pre-rendering application. Let's expand that into a complete application called "siterender" and define a richer set of features than the original:

  • Fetches and parses sitemaps from URLs or local files.
  • Supports sitemaps and sitemap indexes (nested sitemaps).
  • Replaces URL prefixes based on specified rules.
  • Renders pages in parallel using Puppeteer.
  • Parallelizes rendering operations for maximum speed/throughput.
  • Saves rendered HTML content to a specified output directory.
  • Retry mechanism for rendering and browser launch/close operations.

We expand these requirements to cover all aspects of how we expect the application to behave. We then try to group them, provide an exhaustive list, and work to ensure they don't contradict each other.

The prompt

The MIP-style prompt is 77 lines, although some of those lines are long. I plan to improve on this plain-text format, but it works for now.

I divided the prompt into sections, each with a distinct set of requirements that describe our desired application behaviour. I iterated this prompt from a much earlier form and had ChatGPT 4o help reformat it and suggest improvements to remove ambiguity or overlaps.

I used earlier versions of the prompt to build fully working implementations, but this is the first version directly targeting TypeScript. Most earlier implementations used JavaScript, but I had ChatGPT build one using Python. The Python version was 20% slower, however!

The following is a product requirement.  Please generate code to match this requirement.

Problem statement:

I have a single page application website and want to pre-render all the pages so I can allow the website to be crawled by spiders that cannot render JavaScript.

Requirements:

1. Environment Setup:
  - The tool must be compatible with Node.js version 14.x or later and headless Google Chrome (current version).
  - The software should be written in TypeScript.
  - Use the latest available stable version of puppeteer for rendering.
  - Use the latest available stable version of yargs for command line options.
  - Use the latest available stable version of axios to handle the HTTP requests.
  - Use the latest available stable version of fast-xml-parser to handle the XML parsing.
  - Any package dependencies that you might need must be explicitly stated in these requirements.  If you need to use a dependency that is not listed here then ask for approval or use a different approach.
2. Tool Invocation:
  - The tool should be invoked as a script with Node.js using ES6 modules.
  - Ensure that the tool can be run from the command line with appropriate parameters.
  - The tool does not need a configuration file.
3. Sitemap Handling:
  - The tool will scan a sitemap.xml file that will provide a list of URLs to be pre-rendered.
  - The sitemap will be defined by either a file or a URL.  Either option can be provided but not both, and one option must be provided by the user.
  - If the sitemap is being defined by a file then the --sitemap-file parameter will define where to find the sitemap file.
  - If the sitemap is being defined by a URL then the --sitemap-url parameter will define the URL where to find the sitemap file.
  - If the response from the web server to fetching the sitemap via a URL is a redirect status code then log the new URL and load from that URL instead.
  - Include an option to replace the protocol and hostname in the URLs using the --replace-url parameter.  The --replace-url parameter should take the form "<new-url-prefix>=<old-url-prefix>".
  - Only one --replace-url parameter can be provided and it only supports a single old and new host pair.
  - The tool must check that the form of any --replace-url parameter correctly matches the form expected, noting that the hostname prefix may include a port number.
  - The logic should handle either a sitemap or a sitemap index.
  - When the tool encounters a sitemap index it should emit a console message: "Processing sitemap index".  It should then process each sitemap within the sitemap index.
  - For each sitemap found in a sitemap index, emit a console message: "Processing sitemap: <name>" where <name> is the name of the sitemap file, and then process the sitemap file.
4. File Management:
  - Rendered pages should be saved in a specified output directory.
  - The output directory is defined by the non-optional --output command line parameter.
  - For each page, the directory structure should match the URL path, but should not include the hostname or any port number.
  - If the page does not have an explicit ".html" or ".htm" name then assume it is a directory and create a file "index.html" as the file name within that output directory.  Example: URL https://davehudson.io/blog/post should be saved as <output-directory>/blog/post/index.html where <output-directory> is the path specified as the output directory.
  - If the output directory or any subdirectories do not exist then they should be created.  If creating the directories fails then emit a failure message to the console and exit with an error status.
  - Before attempting to invoke puppeteer for a given URL, any previous output file that matches what will be the new output file must be deleted.
  - The tool must not delete any file or directory that will not be written as an output file.
  - If any file or directory operations fail then log a message to the console and exit with an error status.
5. Rendering:
  - Once the directory structure is in place and all old index.html files have been deleted, render all the pages.
  - The tool should parallelize rendering operations.
  - The number of parallel tasks should default to the number of available CPU cores.
  - Include an optional parameter --parallel-renders to specify a different number of parallel tasks.
  - As soon as a render completes then the next available render should be started.
  - For performance reasons, only start puppeteer once.
  - The tool should handle network errors, timeouts, and rendering errors.
  - If an error occurs, even if it's handled, log a message indicating the URL that had the error along with a message indicating why the error occurred.
  - In the event of any errors, ensure that all puppeteer resources are released correctly.
6. Retry Mechanism:
  - Implement a retry mechanism for failed render attempts with a default of 3 retries.
  - The retry mechanism should be applied to invocations of puppeteer as well as for network failures.
  - Allow this to be configurable with the --max-retries parameter.
  - Use exponential backoff with jitter for retries, with the backoff time doubling after each attempt and a random jitter added.
  - Ensure that the backoff time does not exceed 8 seconds.
  - The tool should exit with an error status if any page fails to render after retries.
7. Successful completion:
  - If all pages render successfully, exit with a success status.
8. Logging:
  - Emit console messages for progress and errors.
  - The tool does not need to log to a file.
  - Example messages:
  - "Rendered: <url>", where <url> is the URL that has been rendered.
  - "Failed: <url>: <reason>", where <url> is the URL being processed, and <reason> is the failure reason.
  - There should be no other console messages.
9. Usage Information:
  - If invoked with invalid parameters, display correct usage information.
  - Include a -h parameter to display a help message with all valid parameters and their usage.
10. Code Quality:
  - Use JSDoc annotations for all functions.
  - Use 4 spaces for indentation in the source code.
  - Where any functions may throw errors, include try/catch blocks to handle those error conditions.
  - For code style, do not use an "else" if the previous block in an if statement ends with a "return" statement.
  - The code should be split into two files.  The first one, siterender.ts contains the code to start the application.  The second one, logic.ts should contain all of the logic of the application to make it easier to construct tests.
  - If the user specifies a command line option that is not known then print an error to the console indicating the faulty option, print the help, and exit with an error status.
The MIP-style prompt for siterender

The implementation

The output TypeScript source code appears at the end of this article. It consists of over 300 lines of code split across two files.

As noted in the section on limitations, ChatGPT didn't produce a working version of this software the first time but did so with one of the earlier JavaScript editions. To mitigate this, I asked how it implemented each feature in an interactive dialogue. Where necessary, it changed its implementation to conform with the requirements.

The total interactive time required to reach the published output was around 90 minutes, which included about 30 minutes of testing. I evolved this version of the prompt over a few weeks. While not part of the prompt provided here, a further interactive session lasting 3 hours resulted in a test program that provides > 90% test coverage for the logic.ts file, including tests for the error handling and retry logic. A future version of the prompt will be updated to include that testability requirement. During the 3 hours, the approach was to "pair program" with the LLM, letting it have the keyboard.

The general quality of the code is high. I'm not too fond of the use of a global variable for the XML parser in logic.ts, but we can refine this later. The implementation is robust and testable. If we disconnect the network, the retry logic keeps going and will recover if the network comes back up. Similarly, if we set the number of parallel rendering tasks to 8x the number of CPUs, we see a lot of timeouts and retries as Chrome can't keep up, but the application completes its task quickly.

Project site

I decided to make the siterender application a distinct project so you can follow and help shape its evolution.

The project site for siterender is here: siterender. The site has links to the source code and notes about how to contribute if you're interested.

Conclusion

I've introduced the concept of a Maximal Instruction Prompt (MIP) and explained how I've been using this approach to build software with ChatGPT.

Why not give this a try yourself? If you do, I'm curious to hear about your experiences!

The code

ChatGPT 4o generated the following source files as a result of the MIP-style prompt. While some interactive work was required, the only direct human intervention was a slight adjustment of the whitespace to improve readability.

#!/usr/bin/env node

import yargs from 'yargs';
import {hideBin} from 'yargs/helpers';
import {startRendering, Args} from './logic';
import os from 'os';

const argv = yargs(hideBin(process.argv))
    .usage('Usage: $0 --sitemap-file <file> OR --sitemap-url <url> --output <directory> [options]')
    .option('sitemap-file', {
        describe: 'Path to the sitemap file',
        type: 'string',
    })
    .option('sitemap-url', {
        describe: 'URL of the sitemap file',
        type: 'string',
    })
    .option('output', {
        describe: 'Output directory for rendered pages',
        type: 'string',
        demandOption: true,
    })
    .option('replace-url', {
        describe: 'Replace URL prefix in the form <new-url-prefix>=<old-url-prefix>',
        type: 'string',
    })
    .option('parallel-renders', {
        describe: 'Number of parallel renders',
        type: 'number',
        default: os.cpus().length,
    })
    .option('max-retries', {
        describe: 'Maximum number of retries for rendering',
        type: 'number',
        default: 3,
    })
    .conflicts('sitemap-file', 'sitemap-url')
    .check((argv) => {
        if (!argv['sitemap-file'] && !argv['sitemap-url']) {
            throw new Error('Either --sitemap-file or --sitemap-url must be provided');
        }
        if (argv['replace-url'] && !argv['replace-url'].includes('=')) {
            throw new Error('--replace-url must be in the form <new-url-prefix>=<old-url-prefix>');
        }
        return true;
    })
    .strict()
    .fail((msg, err, yargs) => {
        if (err) {
            console.error('Error:', err.message);
        } else {
            console.error('Error:', msg);
        }
        console.error(yargs.help());
        process.exit(1);
    })
    .help('h')
    .alias('h', 'help')
    .argv;

const args: Args = {
    sitemapFile: argv['sitemap-file'],
    sitemapUrl: argv['sitemap-url'],
    output: argv.output,
    replaceUrl: argv['replace-url'],
    parallelRenders: argv['parallel-renders'],
    maxRetries: argv['max-retries'],
};

startRendering(args)
    .then(() => process.exit(0))
    .catch((err) => {
        console.error(err);
        process.exit(1);
    });

siterender's siterender.ts source code generated by ChatGPT 4o
import puppeteer from 'puppeteer';
import axios from 'axios';
import fs from 'fs';
import path from 'path';
import {XMLParser} from 'fast-xml-parser';
import type {Browser} from 'puppeteer';

export interface Args {
    sitemapFile?: string;
    sitemapUrl?: string;
    output: string;
    replaceUrl?: string;
    parallelRenders: number;
    maxRetries: number;
}

interface UrlReplacement {
    newPrefix: string;
    oldPrefix: string;
}

const parser = new XMLParser();

/**
 * Start the rendering process based on the provided arguments.
 * @param argv - Command line arguments.
 */
export async function startRendering(argv: Args): Promise<void> {
    const urls = await getUrls(argv);
    const urlReplacement = argv.replaceUrl ? parseUrlReplacement(argv.replaceUrl) : null;
    const browser = await puppeteer.launch();

    try {
        await renderPages(browser, urls, argv.output, argv.parallelRenders, argv.maxRetries, urlReplacement);
    } finally {
        await browser.close();
    }
}

/**
 * Get URLs from the sitemap.
 * @param argv - Command line arguments.
 * @returns List of URLs to render.
 */
async function getUrls(argv: Args): Promise<string[]> {
    const sitemapContent = argv.sitemapFile
        ? await fs.promises.readFile(argv.sitemapFile, 'utf8')
        : await fetchSitemap(argv.sitemapUrl!);

    const sitemap = parser.parse(sitemapContent);

    if (sitemap.sitemapindex) {
        console.log('Processing sitemap index');
        const sitemaps = Array.isArray(sitemap.sitemapindex.sitemap)
            ? sitemap.sitemapindex.sitemap
            : [sitemap.sitemapindex.sitemap];
        let urls: string[] = [];
        for (const map of sitemaps) {
            console.log(`Processing sitemap: ${map.loc}`);
            const subSitemapContent = await fetchSitemap(map.loc);
            urls = urls.concat(parseSitemap(subSitemapContent));
        }

        return urls;
    }

    return parseSitemap(sitemapContent);
}

/**
 * Parse URLs from sitemap content.
 * @param content - XML content of the sitemap.
 * @returns List of URLs.
 */
export function parseSitemap(content: string): string[] {
    const sitemap = parser.parse(content);
    if (sitemap.urlset) {
        return Array.isArray(sitemap.urlset.url)
            ? sitemap.urlset.url.map((u: any) => u.loc)
            : [sitemap.urlset.url.loc];
    }

    return [];
}

/**
 * Fetch the sitemap content from a URL.
 * @param url - URL of the sitemap.
 * @returns Sitemap content.
 */
export async function fetchSitemap(url: string): Promise<string> {
    try {
        const response = await axios.get(url, {maxRedirects: 5});
        return response.data;
    } catch (error) {
        const errorMessage = `Failed to fetch sitemap from ${url}: ${error instanceof Error ? `Error: ${error.message}` : `Error: ${String(error)}`}`;
        console.error(errorMessage);
        throw new Error(errorMessage);
    }
}

/**
 * Parse the URL replacement parameter.
 * @param replaceUrl - URL replacement string.
 * @returns URL replacement object.
 */
function parseUrlReplacement(replaceUrl: string): UrlReplacement {
    const [newPrefix, oldPrefix] = replaceUrl.split('=');
    return {newPrefix, oldPrefix};
}

/**
 * Render the pages using Puppeteer.
 * @param browser - Puppeteer browser instance.
 * @param urls - List of URLs to render.
 * @param outputDir - Output directory for rendered pages.
 * @param parallelRenders - Number of parallel renders.
 * @param maxRetries - Maximum number of retries for rendering.
 * @param urlReplacement - URL replacement object.
 */
async function renderPages(
    browser: Browser,
    urls: string[],
    outputDir: string,
    parallelRenders: number,
    maxRetries: number,
    urlReplacement: UrlReplacement | null
): Promise<void> {
    const queue = [...urls];
    const tasks: Promise<void>[] = [];

    for (let i = 0; i < parallelRenders; i++) {
        tasks.push(processQueue(browser, queue, outputDir, maxRetries, urlReplacement));
    }

    await Promise.all(tasks);
}

/**
 * Process the queue of URLs to render.
 * @param browser - Puppeteer browser instance.
 * @param queue - Queue of URLs to render.
 * @param outputDir - Output directory for rendered pages.
 * @param maxRetries - Maximum number of retries for rendering.
 * @param urlReplacement - URL replacement object.
 */
async function processQueue(
    browser: Browser,
    queue: string[],
    outputDir: string,
    maxRetries: number,
    urlReplacement: UrlReplacement | null
): Promise<void> {
    while (queue.length > 0) {
        const url = queue.shift()!;
        const finalUrl = urlReplacement ? url.replace(urlReplacement.oldPrefix, urlReplacement.newPrefix) : url;

        await retryRenderPage(browser, finalUrl, outputDir, maxRetries);
    }
}

/**
 * Retry rendering a page with the specified number of retries.
 * @param browser - Puppeteer browser instance.
 * @param url - URL of the page to render.
 * @param outputDir - Output directory for rendered page.
 * @param maxRetries - Maximum number of retries for rendering.
 */
async function retryRenderPage(
    browser: Browser,
    url: string,
    outputDir: string,
    maxRetries: number
): Promise<void> {
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
        try {
            await renderPage(browser, url, outputDir);
            console.log(`Rendered: ${url}`);
            return;
        } catch (error) {
            if (attempt < maxRetries) {
                console.log(`Retrying (${attempt + 1}/${maxRetries}) for ${url}`);
                const backoff = Math.min(Math.pow(2, attempt) * 1000, 8000);
                const jitter = Math.random() * 1000;
                await new Promise((resolve) => setTimeout(resolve, backoff + jitter));
            } else {
                console.error(`Failed: ${url}: ${error instanceof Error ? error.message : String(error)}`);
                process.exit(1);
            }
        }
    }
}

/**
 * Render a single page using Puppeteer.
 * @param browser - Puppeteer browser instance.
 * @param url - URL of the page to render.
 * @param outputDir - Output directory for rendered page.
 */
export async function renderPage(browser: Browser, url: string, outputDir: string): Promise<void> {
    const page = await browser.newPage();
    try {
        const parsedUrl = new URL(url);
        const filePath = getFilePath(parsedUrl, outputDir);

        await ensureDirectoryExistence(filePath);
        await deletePreviousFile(filePath);

        await page.goto(url, {waitUntil: 'networkidle2'});

        const content = await page.content();

        await fs.promises.writeFile(filePath, content);
    } catch (error) {
        console.error(`Failed to render page ${url}: ${error instanceof Error ? error.message : String(error)}`);
        throw error;
    } finally {
        await page.close();
    }
}

/**
 * Get the file path for the rendered page.
 * @param parsedUrl - Parsed URL object.
 * @param outputDir - Output directory.
 * @returns File path for the rendered page.
 */
export function getFilePath(parsedUrl: URL, outputDir: string): string {
    let pathName = parsedUrl.pathname;
    if (pathName.endsWith('/')) {
        pathName = `${pathName}index.html`;
    } else if (!path.extname(pathName)) {
        pathName = `${pathName}/index.html`;
    }

    return path.join(outputDir, pathName);
}

/**
 * Ensure that the directory for the file path exists.
 * @param filePath - File path.
 */
export async function ensureDirectoryExistence(filePath: string): Promise<void> {
    const dir = path.dirname(filePath);
    if (!fs.existsSync(dir)) {
        try {
            await fs.promises.mkdir(dir, {recursive: true});
        } catch (error) {
            console.error(`Failed to create directory ${dir}: ${error instanceof Error ? error.message : String(error)}`);
            process.exit(1);
        }
    }
}

/**
 * Delete the previous file if it exists.
 * @param filePath - File path.
 */
export async function deletePreviousFile(filePath: string): Promise<void> {
    if (fs.existsSync(filePath)) {
        try {
            await fs.promises.unlink(filePath);
        } catch (error) {
            console.error(`Failed to delete file ${filePath}: ${error instanceof Error ? error.message : String(error)}`);
            process.exit(1);
        }
    }
}

siterender's logic.ts source code generated by ChatGPT 4o
