GSD vs. Ralph Loops: The Better Way to Build Apps with AI

You need to stop using Ralph loops. At least, you need to stop using them the way everyone on Twitter thinks you should.

Here is the real distinction: a Ralph loop is just a weapon. The GSD (Get Shit Done) framework is the entire armory.

Most people using Claude Code or other AI coding agents get stuck in a garbage-in, garbage-out scenario: they throw a vague idea at a loop, the context window fills up, and the code degrades. The GSD framework solves this by forcing you to define a Product Requirements Document (PRD), break features into atomic tasks, and spin up a fresh sub-agent for every single step to prevent context rot.

I’ve tested this extensively, and here is exactly how to set it up.

Why Are Ralph Loops Failing for Most Developers?

The hype around Ralph loops is justified—the fundamentals are solid. Concepts like context window management, atomic tasks, and persistence until completion are critical for Claude Code workflows.

But here’s the thing: a loop is just a technique, a few lines of bash. It doesn't know what to build. It depends entirely on the instructions you give it.

If your features aren't defined tightly, or if you don't know what "done" actually looks like, the loop will just run in circles until your token budget is gone. Most of us don't need a weapon; we need a system that ensures the weapon is pointed at the right target.
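For concreteness, the "weapon" really is just a loop. Here is a minimal sketch of the pattern; the agent function is a placeholder standing in for a real coding-agent CLI, and progress is simulated so the loop terminates:

```shell
#!/usr/bin/env bash
# Minimal Ralph loop sketch. "agent" is a stand-in for a real coding
# agent; PROMPT.md holds the fixed instructions re-fed on every pass.
echo "Implement the feature until done" > PROMPT.md
: > progress.log

agent() {
  # Placeholder: a real agent would read the prompt and edit files.
  # Here we just record one unit of simulated progress.
  echo "pass" >> progress.log
}

while true; do
  agent < PROMPT.md
  # Stop once "done" is reached (here: three passes of work).
  if [ "$(wc -l < progress.log)" -ge 3 ]; then
    echo "DONE"
    break
  fi
done
```

A real version replaces agent with your actual agent invocation and the line-count check with a genuine "done" signal, such as tests passing. Notice what the loop never answers: whether the prompt was pointed at the right target in the first place.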

How Does the GSD Framework Prevent Context Rot?

GSD takes the principles of the Ralph loop but wraps them in a 6-step architectural process designed to stop context rot.

Context rot is the idea that as a context window fills up, the AI's effective IQ drops. Autocompact only goes so far. Eventually, the model gets confused by its own history.

GSD handles this by using sub-agents.

Instead of one long chat session, GSD breaks your project into plans. For every atomic task within a plan, it spawns a new sub-agent with fresh context to execute the code.

  1. Task 1: New Agent -> Fresh Context -> Execute -> Commit.
  2. Task 2: New Agent -> Fresh Context -> Execute -> Commit.

This ensures you get the "smartest" version of the model for every single line of code written.
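In shell terms, that per-task pattern looks like the sketch below. The run_subagent function is a stand-in (the real dispatch happens inside Claude Code), and each call represents a brand-new process with empty context:

```shell
#!/usr/bin/env bash
set -e

# Stand-in for spawning a sub-agent: each call represents a new
# process with empty context -- no shared chat history.
run_subagent() {
  echo "executing: $1"
}

: > completed.log
for task in "scaffold-project" "add-openai-client" "wire-up-ui"; do
  run_subagent "$task"           # fresh context every time
  echo "$task" >> completed.log  # in GSD, each task ends in a commit
done
```

The task names here are illustrative. The point is structural: no sub-agent ever sees another sub-agent's conversation history, only the planning documents.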

How Do You Install and Use GSD?

Setting this up is straightforward. You'll use npx to run the tool directly from the package registry.

Step 1: Installation

Run this line in your terminal to install the tool:

npx get-shit-done-cc@latest

I recommend installing it globally so you can use it across any project folder.

Step 2: Initialize a Project

Once installed, you interact with GSD inside Claude Code using slash commands. To start from scratch:

/gsd:new_project

If you already have code and want GSD to understand it:

/gsd:map_codebase

Step 3: Define the Scope

The system will interview you. When I tested this to build a Content Creation Remixer (an app that takes an article and turns it into a 60-second script), it asked me deep questions about input pipelines, database choices, and template presets.

It generates four critical documents automatically:

  1. Project: The high-level PRD.
  2. Requirements: Technical specs (features, inputs, outputs).
  3. Roadmap: Phased execution plan.
  4. State: A living document of what is built vs. what is pending.

What Does the Execution Workflow Look Like?

GSD breaks the build process down into specific phases. It doesn't just "code the app." It forces a structure:

  1. Initialize: Research and PRD generation.
  2. Discuss: You and the AI agree on the specific implementation of a phase.
  3. Plan: The AI breaks the phase into atomic tasks (often creating 200+ lines of XML planning documents specifically for the agents).
  4. Execute: Sub-agents code the features in parallel or sequence.
  5. Verify: The AI tests the code, and asks you to verify it manually (e.g., "Run npm run dev and check the localhost").
  6. Repeat.
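Stripped to pseudocode, one pass through that structure looks roughly like this; the function names are illustrative placeholders, not GSD's actual internals:

```shell
#!/usr/bin/env bash
set -e

# Illustrative sketch of one GSD phase. These functions are
# placeholders, not GSD's real implementation.
plan_phase()   { printf '%s\n' "task-1" "task-2" > plan.txt; }
execute_task() { echo "built $1" >> build.log; }
verify_phase() { [ "$(wc -l < build.log)" -eq "$(wc -l < plan.txt)" ]; }

: > build.log
plan_phase                            # Plan: break the phase into atomic tasks
while read -r task; do
  execute_task "$task"                # Execute: one fresh sub-agent per task
done < plan.txt
verify_phase && echo "phase verified" # Verify: check before moving on
```

The manual verification step (the "run npm run dev and check localhost" moment) sits after this automated check, before the next phase begins.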

In my Content Remixer test, Phase 1 (The Foundation) was broken down into two specific plans: bootstrapping the Next.js 15 project and setting up the OpenAI integration.

How Long Does It Actually Take?

This is not for people who want a one-shot slot machine experience. It is methodical.

When I ran Phase 1 of the Content Remixer, it took 22 minutes and 18 seconds to complete.

It ran multiple waves of planning and execution. It feels slow compared to a standard prompt, but you are trading speed for accuracy. You aren't playing whack-a-mole with bugs later because the system validated the code as it was written.

Is GSD Right for Your Project?

I've looked at the code and the results, and here is my verdict:

Use Ralph Loops if:

  • You are an advanced developer.
  • You have a crystal clear technical spec in your head.
  • You need to execute a small, contained task quickly.

Use GSD if:

  • You are building a complex project end-to-end.
  • You don't have a technical background and need help scaffolding a PRD.
  • You are tired of context rot ruining your sessions after 30 messages.

While GSD uses more tokens upfront (due to the sub-agents and planning docs), it usually saves tokens in the long run because you adhere to the rule: Measure twice, prompt once.

FAQ

What is the main difference between Ralph Loops and GSD?

Ralph Loops are a technique (a bash loop) that persists until a task is done. GSD is a full framework that manages project requirements, planning, and context windows before executing the loop. Ralph Loops assume you have a blueprint; GSD helps you build it.

Does GSD cost more money to run?

Initially, yes. GSD uses sub-agents and extensive planning documents, which consume more input tokens. However, it typically prevents the "doom loop" of fixing bad code repeatedly, which often costs more in the long run.

Can I use GSD with existing code?

Yes. You can use the command /gsd:map_codebase to have the framework analyze your current repository. It will generate a state document based on what you have already built and help you plan the next phases.

What models should I use with GSD?

For the planning phase, use the smartest model available (Claude 3.5 Sonnet or Opus if available/affordable). The planning documents determine the success of the execution agents, so don't cheap out on the intelligence layer during the architectural phase.


If you want to go deeper into builds like this, join the free Chase AI community for templates, prompts, and live breakdowns.