What I Learned Building Whole Earth AI (video demo)
A project-based learning tool built on LLMs
Whole Earth AI is a project-based learning tool that turns any personal goal into a step-by-step project guide with multimedia, quizzes, materials lists, and branching deep dives. Since I’m not really an engineer, I mostly used Whole Earth AI to teach me how to build Whole Earth AI.
I built it to explore two questions:
1. What new UX patterns do LLMs make possible?
2. What might the Montessori Method look like when applied to software for adults?
Here’s what I learned:
We learn best by doing
Seymour Papert, John Dewey, and Jean Piaget (among many others) showed us the value of open-ended hands-on learning.
When I first started building Whole Earth AI, I thought the core challenge was information transfer—how quickly we could get knowledge into a learner’s head. This was largely a matter of personalizing the learning material—which does help(!), as Bloom’s 2 sigma findings attest.
However, I quickly discovered that information transfer is dwarfed by the challenge of getting learners to stick with the learning journey over time, even when it becomes hard. The real problem to solve was motivation.
What’s more motivating than learning something new? Doing something new—and not just any new thing. A specific new thing.
Say I want to build a mid-century oak coffee table with sashimono joinery. As a novice I’d need separate books on woodworking, mid-century design, and sashimono technique—and then piece them together myself.
Or, I could go to a woodworking class where we build a birdhouse. But I don’t want to build a birdhouse—I want to build a mid-century modern oak coffee table with sashimono joinery!
LLMs can tailor learning to your project, not a prefab assignment.
Sometimes the project is common but the learner is not. Imagine a nurse who wants to switch to UX design. The nurse may have lots of transferable skills from their prior career. There’s no existing guide on “How to Become a UX Designer for Nurses” — but Whole Earth AI can generate one easily.
Our own idiosyncratic projects and goals are far more motivating than abstract learning-for-learning’s sake, therefore, Whole Earth AI is a project-based learning tool.1
Useful guidance starts with rich context
If you’ve used LLMs extensively, you know that context window management becomes just as central as good prompting (if not more so). LLMs really only have access to their training data (and maybe the internet); beyond that, all they know is what you tell them.
When you’re new, you don’t yet know which details—or constraints—really matter. How big do you want that mid-century modern coffee table to be? The answer might make a big difference for how you should approach building it!
You don’t know what you don’t know, therefore, Whole Earth AI prompts you (for important context) more than you prompt it.
Recognition over recall
One of the shortcomings of AI chatbots from a UX perspective is that they often require you to recall information. If you don’t remember an important detail, it’s not making it into that context window.
That’s why UX designers lean heavily on the concept of recognition over recall. The basic idea is that recognizing correct or relevant information from a set of choices is less cognitively demanding than having to remember it on your own.
By proposing plausible options to choose from (in addition to open-ended answers), Whole Earth AI also spurs new consideration and widens the scope of awareness for possible goals. In user studies it was common to see these suggestions take users’ projects in new directions.
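As a rough sketch of the idea (the names, sizes, and structure here are my own illustration, not the app’s actual code), a context question in this style pairs LLM-proposed options with an open-ended escape hatch:

```python
from dataclasses import dataclass, field

@dataclass
class ContextQuestion:
    # One context-gathering question shown to the learner.
    text: str
    options: list                 # plausible, LLM-proposed answers (recognition)
    allow_free_text: bool = True  # always leave room for recall too

def answer(q, choice=None, free_text=None):
    """Resolve the learner's answer: a picked option, their own words,
    or an honest "I don't know"."""
    if free_text and q.allow_free_text:
        return free_text
    if choice is not None:
        return q.options[choice]
    return "I don't know"

q = ContextQuestion(
    text="How big should the coffee table be?",
    options=["Compact (90x50 cm)", "Standard (120x60 cm)", "Large (150x75 cm)"],
)
```

Recognition and recall coexist here: picking option 1 is low-effort, but a learner with a firm idea can still type their own answer.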
Building context can be a learning opportunity in itself while reducing cognitive load, therefore, Whole Earth AI offers multiple choice options.
No two learners are alike
Even when two learners pick exactly the same project, they bring different levels of pre-existing knowledge. Subject matter is multi-dimensional: one learner might know a lot about woodworking but nothing about mid-century modern design, while another knows lots about mid-century modern design and nothing about woodworking.
This goes back to the subject of efficient information transfer.
Learning material should always be as adapted to the learner as possible, therefore, Whole Earth AI creates assessment quizzes to inform the focus of the project guide it creates.
Preparation produces confidence
Scrambling for materials mid-project induces a state of anxiety and chaos. Chefs know this—mise en place cooking allows for a state of confidence and efficiency going into a recipe. We want all of our ingredients gathered, organized, and ideally measured out before we begin cooking. Then all we have to think about is combining them.
This principle applies to learning projects too. If we can be confident going into a project that we have everything we’re going to need, we’ve removed an enormous amount of friction. It’s also an opportunity to price things out before we’re in too deep!
There should be as little friction as possible between us and our learning goals, therefore, Whole Earth AI provides a comprehensive list of necessary materials (with price estimates) before you dive in.
Learning depends on trust
As we all know, LLMs are not always trustworthy. They’ve gotten much better, but they still hallucinate—and worse still, they hallucinate with great confidence. This is especially problematic for learners, who are naturally ill-equipped to distinguish between fact and confident lie.
LLMs are more trustworthy when they have reliable information to work from. We can give an LLM access to the web, but as we know, just because something was published online doesn’t make it trustworthy either.
Whole Earth AI pulls in content from the web for every lesson, but before synthesizing it into instructional content, that content passes through a multi-layered filtering process to ensure it is both immediately relevant and high-quality. This process is expensive and not always perfect, but the improvement in quality and reliability is worth it. The finished content then references its sources in-line. In short: fewer hallucinations, more aha-moments.
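The real filtering layers are LLM-based, but the shape of the process can be sketched roughly like this (the heuristics below are simple stand-ins for the actual checks, and all names are hypothetical):

```python
def relevance_filter(source, lesson_topic):
    # Stand-in for an LLM judgment: is this source about the lesson at all?
    return lesson_topic.lower() in source["text"].lower()

def quality_filter(source):
    # Stand-in for a quality check: substantial text from a secure page.
    return len(source["text"]) > 40 and source.get("url", "").startswith("https")

def filter_sources(sources, lesson_topic):
    # Run each layer in order, so the expensive synthesis step
    # only ever sees relevant, high-quality material.
    layers = [lambda s: relevance_filter(s, lesson_topic), quality_filter]
    for layer in layers:
        sources = [s for s in sources if layer(s)]
    return sources
```

Each layer narrows the pool before the next runs, which is what makes a multi-layered process affordable despite the per-source cost.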
Learning is built on trust, therefore, Whole Earth AI does its best to gather and synthesize from reliable sources.
Information prefers certain modalities
Some information is best explained in text. Other times we need images to drive the point home. Sometimes video is best.
LLMs can provide basic instructional text out of the box, but pulling together images and video in a coherent and useful way is less straightforward.
Here’s how Whole Earth AI does it under the hood:
1. Receive the lesson subject and its tasks, then split the tasks into parallel LLM calls, each generating instructional content for a single task.
2. Once a task’s instructional content is generated, another LLM reviews it and identifies any place within the text where an image would help make the point clearer, then generates an image search query.
3. The image search query is sent to an image search API, which pulls in the top 20 images for that query.
4. Another LLM—this time multimodal—reviews those images alongside the instructional text and determines which of the 20 images best fits the content.
5. That image is placed within the text.
In parallel, we grab a video for the lesson:
1. The lesson title and project goal are provided to an LLM, which translates them into a YouTube search query.
2. We pull the most popular YouTube video for that query from the YouTube API.
3. Another LLM reviews the video’s metadata and determines whether it’s a good, relevant video for the lesson.
4. If not, the process repeats until we find a good one.
It’s a lot of LLM+API gymnastics, but that’s what it takes to reliably create a coherent multimedia experience with LLMs today (and when something as salient as video or images doesn’t exactly match the content, it feels really bad and loses user trust). The payoff: every lesson surfaces a crisp image or video that clarifies your next step.
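Those steps can be sketched in code. Every function below is a stub standing in for a real LLM or search-API call, and all names are my own illustration:

```python
def propose_image_query(task_text):          # stub: LLM spots where an image helps
    return f"diagram of {task_text}"

def image_search(query, n=20):               # stub: image search API, top-n results
    return [{"url": f"https://img.example/{i}", "query": query} for i in range(n)]

def pick_best_image(images, task_text):      # stub: multimodal LLM review
    return images[0]

def video_candidates(query):                 # stub: YouTube API, most popular first
    yield {"id": "abc123", "title": f"How to {query}"}

def video_is_relevant(video, lesson_title):  # stub: LLM reviews video metadata
    return lesson_title.lower() in video["title"].lower()

def build_lesson_media(lesson_title, tasks):
    media = {"images": {}, "video": None}
    for task in tasks:                       # in production, these run in parallel
        images = image_search(propose_image_query(task))
        media["images"][task] = pick_best_image(images, task)
    for video in video_candidates(lesson_title):  # keep trying until one fits
        if video_is_relevant(video, lesson_title):
            media["video"] = video
            break
    return media
```

The key structural point is that generation, search, and judgment are separate calls: each model only ever does one job.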
A multimedia learning experience is more enjoyable and more effective, therefore, Whole Earth AI can search, assess, and synthesize multimedia relevant to the lesson.
LLMs like focus
Just like humans, LLMs don’t perform well when you ask them to multitask. An early version of Whole Earth AI sent the entire lesson generation prompt (with all its tasks) to a single LLM call. So in the case of our demo, all of the content for the lesson “Establish Project Parameters” (including instructions for “Fundamentals of rooftop greenhouses, hydroponics, and smart-ag basics”, “Choose primary crop species to optimize system design”, “Measure and record rooftop footprint & load limits”, and “Determine local building code & permitting requirements”) would be handled by a single LLM prompt.
That didn’t work well. What did work well was splitting all of these task instructions up into dedicated LLM calls.
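A minimal sketch of that split, with threads fanning the per-task calls out in parallel (`generate_task_content` is a stand-in for the real single-task LLM call, not the app’s actual code):

```python
from concurrent.futures import ThreadPoolExecutor

def generate_task_content(task):
    # Stand-in for one focused LLM call that handles exactly one task.
    return f"Instructions for: {task}"

def generate_lesson(tasks):
    # One dedicated call per task, fanned out concurrently, instead of
    # one giant prompt asking a single call to cover everything.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(generate_task_content, tasks))
```

Since LLM calls are I/O-bound, the parallel fan-out also makes the lesson generate faster than the single big prompt did.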
The less you ask of the AI, the better, therefore, Whole Earth AI breaks up content as much as possible before passing to an LLM.
Anything can spur a question
You know what kind of sucks about chatbots? They’re linear. They spit out a ton of tokens making all sorts of points, and I have to respond to it wholesale.
But maybe I have multiple questions on multiple points. I can ask all of them together but I risk confusing the LLM (see above), or I can ask them one at a time but the point I’m referencing rides off into the sunset. It’s just too much friction for curiosity.
So I developed a different UX pattern. Any chunk of text can branch into a deep dive. When I click on a chunk of text, I can ask any question I want — but what if there were even less friction than that?
Whole Earth AI has quick-fire question buttons for digging deeper — “How do I do this?”, “Explain this like I’m 5”, “Tell me more”, and “Give me an example”. When I click one of these buttons, I get a deep dive in the chosen direction for that specific chunk of text.
My personal favorite part is that we can do this recursively — deep dives within deep dives!
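The pattern boils down to a simple recursive data structure. In this sketch, `ask_llm` is a placeholder for the real call, and the names are mine, not the app’s:

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    dives: list = field(default_factory=list)  # deep dives branch off any chunk

QUICK_FIRE = ["How do I do this?", "Explain this like I'm 5",
              "Tell me more", "Give me an example"]

def ask_llm(question, context):
    # Placeholder for the LLM call answering `question` about `context`.
    return f"[answer to '{question}' about '{context}']"

def deep_dive(chunk, question):
    child = Chunk(text=ask_llm(question, chunk.text))
    chunk.dives.append(child)
    return child  # the answer is itself a Chunk, so it can branch again
```

Because every answer is just another `Chunk`, recursion comes for free: deep dives within deep dives, as far as curiosity goes.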
Learners should always be encouraged to follow any path of questioning, therefore Whole Earth AI makes digging deeper frictionless and infinite.
Visible progress keeps us going
As plenty of psychological research and common sense will attest — we love line-go-up. We can’t help it. We want to see our progress.
There’s much more I’d love to explore on this theme, but one instantiation of the idea in Whole Earth AI is indicators for leveling up skills. Every time the user completes a task, an LLM reviews it against the focus areas they said they were interested in when they created the project. If completing the task indicates an uplevel in skill, the user gets a popup notification to this effect.
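A sketch of that check, with a keyword match standing in for the LLM’s judgment (function names are hypothetical):

```python
def skill_leveled_up(completed_task, focus_areas):
    # Stand-in for the LLM review that compares a finished task
    # against the focus areas the learner chose at project creation.
    for area in focus_areas:
        if area.lower() in completed_task.lower():
            return area
    return None

def on_task_complete(task, focus_areas, notify):
    area = skill_leveled_up(task, focus_areas)
    if area is not None:
        notify(f"Level up: {area}!")  # the popup the user sees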
As mentioned before, a core problem of successful learning is motivation, therefore, Whole Earth AI helps you track your progress.
Projects get more specific over time
By definition, learning means you don’t know how to do everything upfront. When you start a project you’ve never done before, you don’t know what decisions to make, and you may not even know what decisions will have to be made.
Imagine you want to create a database from scratch, but you’ve never built a database before. When you’re creating a project guide, Whole Earth AI’s context questions might include “Do you want to use PostgreSQL, MongoDB, MySQL, or SQLite?” But if you’ve never worked with databases before, these words might mean nothing to you—so you say “I don’t know”.
Well, you’re going to need to know at some point to accomplish your project goal of building a database, because guidance (AI-generated or otherwise) is going to need that specificity in order to be useful.
Whole Earth AI is built for this. It creates carefully placed ‘decision tasks’ throughout your project. When you reach a decision task, the lesson content is designed to help you make that decision with all the most relevant information for your project goal.
So, at this point you might have learned a bit about databases and are asked yet again, “What kind of database do you want to use?”
Now you have what you need to answer that question. When you finally select PostgreSQL (basically always the right answer), an LLM reviews the remainder of your project guide with this choice in mind and updates the guide accordingly so it can be specifically adapted to this new context. Your guide is now about building a PostgreSQL database. Your project guide isn’t static or linear; it’s a generative choose-your-own-adventure game.
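As a sketch, a decision task can be modeled as a marker in the guide; once resolved, every remaining task gets rewritten in light of the choice. Here `rewrite` stands in for the LLM call, and the whole shape is my illustration rather than the app’s real code:

```python
def resolve_decision(guide, decision_index, choice, rewrite):
    # Record the learner's choice, then adapt only the *remaining*
    # tasks: everything already completed stays as it was.
    guide = list(guide)
    guide[decision_index] = f"Decided: {choice}"
    for i in range(decision_index + 1, len(guide)):
        guide[i] = rewrite(guide[i], choice)
    return guide
```

Only the tail of the guide changes, which is what makes the project feel like a branching adventure rather than a rewritten book.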
LLMs need specificity to be useful to us, but we can never know all the specifics when we start a project, therefore, Whole Earth AI provides decisions as tasks, helps you decide, and updates the remainder of the course accordingly.
Learning is an adaptive feedback loop
Lesson quizzes are a fun way to test yourself along your learning journey, but in Whole Earth AI, they serve a dual-purpose.
Answers to lesson quizzes update a live learner knowledge model that is continually refined throughout the course of your project. This is a model of what you know, so the AI can adapt accordingly in subsequent lessons. If you’re struggling with that sashimono joinery, it will emphasize that in future lessons. If you’ve got mid-century modern design down pat, it’ll set that aside and focus elsewhere. It’s a little like a loosely-structured spaced repetition system.
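One simple way to picture such a model (a sketch of the idea with invented numbers and names, not the app’s real implementation):

```python
from collections import defaultdict

class KnowledgeModel:
    """Running estimate of per-topic mastery, nudged by each quiz answer."""
    def __init__(self):
        self.mastery = defaultdict(lambda: 0.5)  # every topic starts at "unsure"

    def record(self, topic, correct, step=0.1):
        # Each quiz answer nudges the estimate up or down, clamped to [0, 1].
        delta = step if correct else -step
        self.mastery[topic] = min(1.0, max(0.0, self.mastery[topic] + delta))

    def focus_topic(self):
        # Future lessons emphasize whatever the learner struggles with most.
        return min(self.mastery, key=self.mastery.get)
```

The real model is richer than a single number per topic, but the loop is the same: assess, update, adapt.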
Assessment is an opportunity to turn learning into a feedback loop for personalization, therefore, Whole Earth AI uses quiz responses to update the AI’s model of your knowledge to help you where you’re struggling.
Learning is social
Mitchel Resnick (who studied under Seymour Papert) wrote that creative learning revolves around the four P’s: Projects, Passion, Play, and Peers. Learning together is better than learning alone.
I’d love to explore more ways for Whole Earth AI to be social—it could even connect you with people in your local community exploring projects similar to yours.
As it is, the social features are pretty simple. You can make projects public for others to see.
But if you see someone’s project that you’d like to try for yourself, you can also fork that project—just like a Github repo—which adds it to your own collection of projects and lets you personalize it.
Learning is better together, and your peers are creative, therefore, Whole Earth AI lets you explore others’ projects and fork them.
Thanks for taking a look at Whole Earth AI!
I’m off building other things now, but I’m taking these learnings with me to inform the new venture. I hope they might inspire some explorations of your own.
Whole Earth AI did not happen in a vacuum-sealed lab. Huge thanks to my fiancée, family, and friends, notably Gordon Brander, Alice Albrecht, Nick Bowden, Tom Critchlow, Alex Komoroske, Jonathan Lebensold, Steve Klise, Alex Taber, Oshan Jarow, Jared Pereira, Tasshin Fogleman, Agree Ahmed, Rachel Inman, Greg Neiswander, Max Bittker, Gretchen Roehrs, Brian W. Jones, Morgan Allen, John Hess, and a generous grant from the Cosmos Institute.
p.s. Here are some of the books that inspired Whole Earth AI.

1. Another reason Whole Earth AI became a project-based learning tool (and not just a generic learning tool) is that LLMs are bad writers. In theory, you could create an LLM-generated course on a subject like ‘Ancient Rome’, but there are a ton of well-written histories of Rome at your local library, and almost all of them are going to be more enjoyable to read than LLM-generated content.
I believe this is a fundamentally architectural problem with LLMs. To produce coherent and reliable information, they predict the most likely next words, which amounts to taking a weighted average in a vector space. LLMs write like the median of the internet—safe, predictable, surprise-free. Good writing wiggles the eyebrows; average vectors rarely do.
But project-based learning is less reliant on good writing. It’s more instructional, procedural knowledge that mostly just needs to be accurate. So LLMs are better suited for this kind of learning.