Something not making sense? For help or questions, ping @mikestilling in Slack.
Now that you understand the workflow, tools, and code fundamentals, let's go full send. In this final lesson, we'll use AI to 100x all the things. 😂
If you've made it here, you've already done the hard part. You understand how the workflow fits together—editor, GitHub, deploy. You've written HTML, styled with CSS, animated with JavaScript, and even vibe coded a WebGL effect. That foundation changes everything about how useful AI is going to be for you.
Think of it this way: AI is a multiplier. If you know nothing, 1000x of nothing is still nothing. But now that you understand enough—the structure, the terminology, the feedback loop—AI takes that and cranks it way, way up.
This lesson is where it all comes together. We're going to cover:

- How to write prompts that actually get good results
- Turning Figma designs into working code
- Vibe coding an entire page from scratch
- Practical tips for working with AI day to day
Let's go full send.
In Lesson 4, we got our first taste of prompting when we vibe coded a WebGL effect. Now let's dig into what makes a prompt actually good—because the difference between a mediocre prompt and a great one is the difference between "this is broken garbage" and "holy cow, that's exactly what I wanted."
Cursor has several ways to interact with AI, but the one we'll focus on is Agent mode. When you open Cursor's chat panel (the sidebar on the right, or ⌘L), you can type prompts that tell the AI what to do. In Agent mode, the AI can:

- Read files in your project to understand context
- Create and edit files directly
- Run terminal commands, like npm start

It's like having a junior developer sitting next to you who's read every piece of documentation ever written—but who still needs you to tell them what to build and where to put it.
The single most impactful thing you can do to improve AI output is give it context. In Cursor, you do this with @ references. These let you point the AI at specific parts of your codebase so it understands your patterns and conventions.
Here are the most useful ones:
- @filename — reference a specific file (e.g. @first-page.webc)
- @foldername — reference an entire folder (e.g. @src/assets/css/)
- @codebase — let AI search your entire codebase for relevant context

For example, if you want AI to create a new page that matches your existing style, referencing an existing page gives it a working template to follow.
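To make this concrete, here's the shape of a prompt that uses these references. This is just a sketch; the file name is a placeholder for whatever actually exists in your project:

```
Using @first-page.webc as a reference for structure and styling,
create a new page at /about/ with the same nav and footer.
```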
The quality of AI output is directly proportional to the quality of your input. Vague prompts produce vague results. Specific prompts produce specific results.
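For example, compare these two. This pair is illustrative, and the class names are assumptions in the utility-class style used throughout this course:

```
Vague:    "Make a hero section."

Specific: "In @first-page.webc, add a hero section with an h1 that
says 'Build cool stuff', styled text-48 and centered, with a
neutral-600 subhead below it and 32px of space between them."
```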
The specific prompt works better because:

- It names the exact file you want changed
- It uses class and component names AI can find in your codebase
- It describes the layout precisely, so there's less room for guessing
You don't need to specify every single class—just enough for AI to understand the vibe. It'll fill in the gaps. And if it doesn't get it right, that's what iteration is for.
Here's the thing most people get wrong: they try to nail it in one prompt. That almost never works. Instead, treat AI like you'd treat a conversation with a collaborator. Start broad, then refine.
A typical flow looks like this:

1. Prompt for the broad structure
2. Review what AI produced in the browser
3. Follow up with prompts to refine spacing, sizing, and details
4. Repeat until it looks right
This iterative loop is the same feedback loop from Lesson 3—write a little, save, preview, adjust. The only difference is that AI is writing the code for you now.
Let's say we want to add a new section to first-page.webc with a centered headline, a subhead, and a row of three feature cards below it. Here's how I'd prompt this in Cursor:
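A prompt along these lines would do it. The exact class names are illustrative; use whatever your project actually defines:

```
In @first-page.webc, add a new section below the existing content:
- a centered h2 headline that says "Why it works"
- a smaller neutral-600 subhead under it
- below that, a row of three feature cards, evenly spaced,
  each with an icon, a short title, and one line of text
```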
Notice how I'm referencing the file, using component names AI can find in my code, and describing the design using the same utility class language we learned in Lesson 3. This is exactly why learning the fundamentals first matters. You can now speak the language.
So you know how to prompt and you've got a feel for the code. But what about when you have a specific design in Figma that you want to recreate? Let's cover two approaches: describing designs in prompts, and feeding Figma directly into Cursor.
The most straightforward way to get a Figma design into code is to describe it to AI in your prompt and manually export any images or assets you need.
Since you now know the basics of HTML/CSS and utility classes, you can describe a Figma design using the same language:
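A description might look something like this. The specific sizes and colors here are placeholders; read them off your actual Figma file:

```
The design has a two-column layout: text on the left (a text-48
headline, a text-16 neutral-600 paragraph, and a button), and a
large rounded image on the right, with 64px between the columns.
```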
For images and assets, export them from Figma (right-click → Export) in 2x resolution as .png or .jpg. Drop them into your project's /src/assets/images/ folder and reference them in your prompt:
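For instance, assuming you exported a file named hero.png into that folder:

```
Use /src/assets/images/hero.png as the image on the right side
of the hero. It should fill its column and have rounded corners.
```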
If you'd rather skip the manual description and let AI see the design, you have a couple options.
Screenshots: The simplest method. Take a screenshot of your Figma design and drag it directly into Cursor's chat panel. AI can analyze the image and generate code that matches the layout, colors, and spacing it sees.
Figma MCP: For a tighter integration, you can connect Figma to Cursor using an MCP (Model Context Protocol) server. This lets AI pull design data—like component structures, styles, and layer names—directly from your Figma file without screenshots. Setting up MCP is a bit more involved, but once it's configured, it's a little more reliable.
My recommendation: start with screenshots. They require zero setup and work great for 80% of what you'll need. As you get more comfortable, you can explore MCP if you want a tighter Figma-to-code pipeline.
Time to put everything together. We're going to vibe code an entire page in protohelm using only prompts. No hand-writing code. Pure vibes.
The goal here isn't to produce a pixel-perfect page—it's to show you how quickly you can go from zero to something tangible using AI and the foundations you've built.
Our first prompt will set up the new page with the right boilerplate. Since AI has access to our codebase, we can reference existing files for it to follow:
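Something like this works as a starting prompt, assuming first-page.webc is the existing page you want to mirror:

```
Look at @first-page.webc and create a new page at /vibes/ with the
same boilerplate: nav at the top, footer at the bottom, and an
empty main section in between.
```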
After AI creates the file, save it and check the browser at localhost:8080/vibes/ — you should see a blank page with the nav and footer. That's our canvas.
Now let's add a hero section. We'll describe the layout using the terminology and components we've learned:
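For example, with the utility classes used earlier in the course (the exact sizes are up to you):

```
On the vibes page, add a hero section: a centered text-64 headline,
a neutral-600 subhead below it, and a primary button under that.
Give the whole section generous vertical padding.
```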
Save, preview. If the spacing feels off or the text sizing isn't right, just follow up:
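A follow-up prompt can stay short and conversational, like:

```
The headline feels cramped. Add more space between the headline
and the subhead, and make the subhead text-20.
```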
See how this works? Broad structure first, then dial in the details. This is exactly how we designed in Lesson 3, just faster.
Keep going. Add a feature grid, a testimonial, a full-bleed image—whatever you want. Each new section is just another prompt:
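For example, a feature grid prompt might look like this:

```
Below the hero, add a three-column feature grid. Each column gets
an icon, a short bold title, and two lines of placeholder text.
Match the spacing and colors used in the hero.
```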
Remember the GSAP animation from Lesson 4? Let's have AI add that, too:
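A prompt for this might look like the following. It assumes your GSAP setup from Lesson 4 lives in first-page.webc; point at wherever yours actually is:

```
Using the same GSAP setup as @first-page.webc, make the hero
headline and subhead fade in and slide up slightly on page load,
staggered by 0.15 seconds.
```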
At this point, you've built a multi-section, animated page entirely through prompts. Save your work, preview it, and push it to GitHub.
This loop—prompt, review, refine—is your new superpower. It works for simple tweaks and entire pages alike.
Before you go off and start building everything with AI, here are some hard-won tips that'll save you time and frustration.
AI rarely gets everything perfect on the first try. That's normal. Think of the first output as a rough draft. Two to three follow-up prompts usually get you where you want to be. If you're on round ten and it's still not right, try a different approach.
Don't blindly accept every change. Cursor shows you diffs—the before and after of every edit AI makes. Skim through them. You'll start to recognize when something looks off, even if you can't articulate exactly why. Trust your design eye.
Sometimes AI is overkill. Changing text-14 to text-16? Just do it yourself. Swapping a color from neutral-600 to neutral-500? Faster by hand. Save AI for the stuff that would take you more than a minute or two to figure out.
Git is your unlimited undo button. Commit after every meaningful chunk of work. If AI makes a mess of your code and you can't figure out how to fix it, you can always roll back to the last commit. This is genuinely the most important safety net you have.
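That safety net can be as simple as a couple of commands. A minimal sketch, run from your project root (the commit message is just an example):

```shell
# Commit after each meaningful chunk of work:
git add -A
git commit -m "Add hero section to vibes page"

# Later, if AI mangles your files, throw away all uncommitted
# changes and restore tracked files to the last commit:
git checkout -- .
```

Run the commit step often; the rollback step only when things go sideways.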
The single best habit you can build: always reference existing files in your prompts. When AI can see how your codebase is already structured, it follows the same conventions. Without that context, it'll make something up—and it probably won't match.
AI sometimes invents things that don't exist—CSS classes that aren't real, JavaScript APIs that don't work, or component names that aren't in your codebase. If something isn't rendering correctly, this is often why. A quick sanity check of the generated code usually reveals the issue.
If you've actually gotten through all five of these lessons—seriously, congratulations. This stuff is legitimately hard to learn, especially the first time around.
And you just did it.
Let's take a second to appreciate everything you've picked up:

- The full workflow: editor, GitHub, deploy
- HTML structure and CSS styling with utility classes
- JavaScript and GSAP animation
- Vibe coding a WebGL effect
- Prompting AI with context, and iterating on its output
That is a massive amount of ground to cover. It's okay if some of it still feels fuzzy. The point was never to become an expert—it was to understand enough so that AI can handle the rest. And now you do.
In 2026, the bar for making incredible things has never been lower. You don't need a CS degree. You don't need years of experience. You just need to understand enough to direct the machine. Go make something cool.