It started the way most side projects start. My kid wanted a story where he was the hero. Not a generic “insert your name here” book. He wanted to see himself in the illustrations, fighting tigers in our backyard, exploring space with his best friend.
He’s six. I’m a software engineer. So instead of saying “that’s not how books work,” I opened a terminal.
The first attempt was a Python script that stitched together GPT-generated text with DALL-E images. It took about 40 minutes to produce something that looked like a ransom note made by a children’s book committee. The illustrations were inconsistent. The story didn’t track. His “character” looked like a different person on every page.
But he loved it. He asked me to make another one the next night. And the night after that.
After making about a dozen of these, I started noticing what made the good ones good. The character has to look like the same person across every page. That’s the hardest part. AI image generation wants to give you a new interpretation every time, and getting it to maintain a consistent character across 12 pages took more engineering than everything else combined.
Kids also know when a story is just a sequence of things that happen. There needs to be a problem, a struggle, and a resolution. The AI tends to resolve conflict immediately (“and then he solved it!”) unless you specifically engineer the narrative structure. And the pacing matters. A bedtime story is 8-12 pages. Not 4. Not 20.
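Those observations can be made concrete. Here's a minimal sketch of a story plan that enforces an arc and a page budget before any text gets generated. Every name here is hypothetical, invented for illustration, not taken from the actual system:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a story plan with an explicit arc and a page budget.
# The beat names and limits mirror the observations above; nothing here is
# the real product's code.

ARC_BEATS = ["setup", "problem", "struggle", "resolution"]
MIN_PAGES, MAX_PAGES = 8, 12

@dataclass
class StoryPlan:
    hero: str
    beats: dict = field(default_factory=dict)  # beat name -> list of page summaries

    def page_count(self) -> int:
        return sum(len(pages) for pages in self.beats.values())

    def validate(self) -> list:
        """Return a list of problems; empty means the plan is ready to generate."""
        issues = [f"missing beat: {b}" for b in ARC_BEATS if not self.beats.get(b)]
        n = self.page_count()
        if not MIN_PAGES <= n <= MAX_PAGES:
            issues.append(f"{n} pages is outside the {MIN_PAGES}-{MAX_PAGES} range")
        # Models love to shortcut straight to "and then he solved it!" --
        # insist on at least two pages of actual struggle.
        if len(self.beats.get("struggle", [])) < 2:
            issues.append("struggle resolves too quickly")
        return issues
```

A plan with a one-page struggle or a four-page total fails validation before a single word is generated, which is the whole point: the structure is checked up front, not patched afterward.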
His best friend came over one weekend and they wanted a story with both of them in it. That one turned out to be the best book I'd made. She loved it. Her mom saw it and asked if I could make one for her daughter's birthday. I did. The mom cried. Not because it was technically impressive, but because her kid had never seen herself as the main character in a real book before.
That was the moment. A parent seeing their child in a real, beautiful book. That’s the product.
I rebuilt everything from scratch. The janky Python script became a real application: Next.js frontend, FastAPI backend, PostgreSQL for state, and a pipeline that coordinates text generation, character consistency, illustration generation, and layout into a single coherent book.
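The coordination logic looks roughly like this. The real pipeline's interfaces aren't shown anywhere in this post, so these function names and stubs are stand-ins; what they illustrate is the ordering that matters: character reference first, full story plan second, then per-page generation against both:

```python
# Hypothetical orchestration sketch with stub implementations. The key idea
# is the order: extract the character reference and plan the whole arc
# before generating any individual page.

def extract_character(photos):
    # Stand-in for building a visual reference model from uploaded photos.
    return {"name": "hero", "reference": f"built from {len(photos)} photos"}

def plan_story_arc(prompt):
    # Stand-in for planning the full arc up front, so the story has a real
    # problem and struggle instead of resolving itself page by page.
    return [f"{prompt}: beat {i + 1}" for i in range(8)]

def generate_page(beat, character):
    # Stand-in for text generation + reference-guided illustration + layout.
    return {"text": beat, "illustration": f"image of {character['name']}"}

def generate_book(photos, prompt):
    character = extract_character(photos)
    plan = plan_story_arc(prompt)
    return [generate_page(beat, character) for beat in plan]

book = generate_book(["front.jpg", "side.jpg"], "tigers in the backyard")
```

Every page is generated against the same character object, which is what keeps the hero looking like the same kid on page 1 and page 12.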
The hard problems were all in the details. Character consistency required building a reference pipeline. You upload photos, the system extracts a character model, and every illustration is generated against that reference. Getting this reliable took weeks. The narrative structure required a story engine that plans the full arc before generating any text. And print quality was its own rabbit hole. A book that looks beautiful on your phone looks like garbage at 300 DPI unless you plan for it from the start.
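The print-quality point is just arithmetic, but it bites late if you skip it. A quick check, using an illustrative trim size and bleed (not the product's actual specs):

```python
DPI = 300          # standard print resolution
TRIM_IN = 8.0      # illustrative square trim size, in inches
BLEED_IN = 0.125   # typical bleed per edge

# Pixels needed for a full-bleed page at print resolution.
page_px = round((TRIM_IN + 2 * BLEED_IN) * DPI)

# A typical 1024x1024 generated image, printed at 300 DPI, only covers:
generated_px = 1024
printable_in = generated_px / DPI

print(page_px, round(printable_in, 1))  # 2475 3.4
```

A phone screen happily hides the gap between 1024 pixels and the ~2500 a printed page needs, which is why a book that looks beautiful on a phone can come back from the printer looking like garbage.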
A custom illustrated children’s book used to cost $2,000+ and take weeks. At $20 and 10 minutes, it’s something any parent can do on a Tuesday night because their kid asked for a story about tigers. That price point changes who the customer is. It’s not “parents who commission custom art.” It’s every parent who wants their kid to see themselves in a book.
I still make books for my kid. He has opinions now about what the stories should be about (currently: tigers in every possible scenario). The books are sitting on his shelf next to the ones from the bookstore. He doesn’t know or care that AI made them. He just knows his dad makes him books where he’s the main character.
PageWeaver is live if you want to make one for your kid.