Optimise your LLM workflow with the Chief Wiggum Workflow.
The days of generous limits on LLM tools like Claude and GitHub Copilot are over. We now have strict limits that depend on your plan. For GitHub Copilot, it's "Premium Requests": you get 50 on the free plan and up to 1,500 on the higher tiers. It's very easy to burn through these in the first week if you're not careful, leaving yourself in a pickle for the rest of the month.
So, how can we get around this? The answer: The WIGGUM Workflow, or Weighted Incremental Grouping for Greater Usage Management.
The WIGGUM workflow is quite simple. You create a Markdown file and list all your tasks, bugs, and tech debt, grouped into phases. You then prompt your LLM to complete all tasks in a single phase: it walks the list, completes each task, and ticks off the finished items as it goes, giving you a primitive state machine.
Below is an example of a TASKS.md file.
# My Project Tasks (TASKS.md)
## Phase 1: Feature: User API
- [X] Implement the new GET endpoint for `/user`. The code should follow the project's style guide and include middleware for session validation. Refer to the `User` type for what to return.
- [X] Generate the types and frontend client code with `mise run gen`.
- [X] Connect the new User API `useQuery` hook (generated with Orval) to the existing UI in the `<Header />` React component.
## Phase 2: Bugs
- [ ] The `/user` API needs to return the avatar image URL. Add this and update the UI.
## Phase 3: Tech Debt
- [ ] Library Migration: Upgrade `react` to the latest version.
- [ ] Logging Cleanup: Remove all `console.log` and `console.error` statements from production-facing modules.
- [ ] Update all APIs with better `openapi.json` descriptions to improve DX.

If you have many items on your list, the LLM will sometimes complete only some of them. But that's okay; the checkboxes give us a state machine that tracks the progress. Simply re-prompt the LLM to "Continue with Phase 3" until every task on your list is done.
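For reference, a kick-off prompt can be as simple as the sketch below; the exact wording and the phase number are just placeholders for your own setup:

```text
Read TASKS.md and complete every unchecked task in Phase 2.
As you finish each task, mark its checkbox as [X] in TASKS.md.
```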
If we take a step back and look at Chief Wiggum here: the box is the Markdown file, the stack of donuts represents your multiple tasks, and the act of him eating is the AI processing that context in a single, efficient bite.
So why does the WIGGUM workflow actually work? For starters, it protects your premium requests by batching multiple tasks into a single prompt. It also shrinks your overall token footprint: by sharing context across a grouped list of tasks, you avoid spinning up a fresh, bloated context for every minor fix. On top of the time savings from batching, you get a built-in memory bank. Keeping a hard copy of your tasks right in the repository means you always have a reliable state machine to fall back on, making it easy to swap between models, agents, or tools whenever the need arises.
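To make that "reliable state machine" concrete, here's a minimal sketch in TypeScript (the file name `check-tasks.ts` and the assumption that `TASKS.md` sits at the repo root are mine, not part of the workflow) that tallies how many tasks remain open in each phase, so you know whether another "Continue with Phase N" prompt is needed:

```typescript
import { readFileSync } from "node:fs";

// Parse a WIGGUM-style TASKS.md and tally open vs. done tasks per phase.
// Assumes "## Phase ..." headings and "- [ ]" / "- [X]" checkbox items,
// exactly like the example file above.
const lines = readFileSync("TASKS.md", "utf8").split("\n");

let phase = "(no phase)";
const progress = new Map<string, { done: number; open: number }>();

for (const line of lines) {
  const heading = line.match(/^##\s+(.+)/);
  if (heading) {
    phase = heading[1].trim();
    continue;
  }
  const task = line.match(/^- \[([ xX])\]\s*\S/); // ignore empty checkboxes
  if (!task) continue;
  const tally = progress.get(phase) ?? { done: 0, open: 0 };
  if (task[1] === " ") tally.open++;
  else tally.done++;
  progress.set(phase, tally);
}

for (const [name, { done, open }] of progress) {
  console.log(`${name}: ${done} done, ${open} open`);
}
```

Run it between prompts (e.g. with `npx tsx check-tasks.ts`); when every phase reports zero open tasks, you're done. And because the state lives in a plain file in your repo, any model or agent can pick up exactly where the last one left off.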