Building apps over text in an Uber
The world has changed. Heading to an AI training I was running for a partner, I texted with OpenClaw from the back seat of the Uber about two ideas for tools I wanted for the session. Just as we hit the Queens-Midtown Tunnel, I told it to build, and it one-shotted two fully functional apps in four minutes. They worked perfectly.


The Tools It Built
Translation Pipeline: You give it an article and it translates it into multiple languages at once. Users can download the translations with the click of a button. When I ran this, a person in the room read the Russian translation and said it was just about ready to publish, right on the spot.
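OpenClaw wrote the actual tool, so none of its code appears here. But the core "multiple languages at once" step is easy to picture: fan the article out to each target language in parallel. A minimal sketch, assuming a pluggable `translate_fn` (hypothetical) that would wrap whatever LLM backs the tool:

```python
from concurrent.futures import ThreadPoolExecutor

def translate_all(article: str, languages: list[str], translate_fn) -> dict[str, str]:
    """Translate one article into several languages concurrently.

    translate_fn(text, lang) is a stand-in for the real LLM call
    (in the author's setup, a request to Maple); it is an assumption,
    not the tool's actual API.
    """
    with ThreadPoolExecutor(max_workers=len(languages)) as pool:
        # Submit one translation job per target language.
        futures = {lang: pool.submit(translate_fn, article, lang) for lang in languages}
        # Collect results keyed by language, ready to offer as downloads.
        return {lang: fut.result() for lang, fut in futures.items()}
```

Each entry in the returned dict could then be served as a downloadable file, matching the one-click download described above.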

AI Image Detector: Drop a folder of images into the window and it analyzes each one to estimate the likelihood that it was altered using AI. Helpful when viewing social media feeds about protests, for example. (Yes, I know this is a cat-and-mouse game as AI gets better at generating images. This is a proof of concept built in four minutes. 😄)
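Again, the real detector is OpenClaw's code. The folder-scanning skeleton, though, is straightforward: walk the dropped folder, keep the image files, and score each one. A minimal sketch, where `score_fn` is a hypothetical stand-in for the actual analysis model:

```python
from pathlib import Path

# Common raster-image extensions; the real tool may accept more formats.
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".webp"}

def scan_folder(folder: str, score_fn) -> dict[str, float]:
    """Score every image in a folder for likelihood of AI manipulation.

    score_fn(path) -> float in [0, 1] is an assumed interface to the
    analysis model, not the author's actual implementation.
    """
    results = {}
    for path in sorted(Path(folder).iterdir()):
        if path.suffix.lower() in IMAGE_EXTENSIONS:
            results[path.name] = score_fn(path)
    return results
```

Non-image files in the folder are simply skipped, so users can drop in a mixed directory without pre-sorting it.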

The Tech Stack
- OpenClaw communicating over Signal. This is currently my preferred method for talking with my bot. I'm looking at moving to something like Pika as soon as it gets more mature.
- Maple was used as the backing LLM in the tools for strong data privacy. It's an encrypted TEE LLM provider, so no app content was shared with third parties.
- OpenClaw spun up four subagents. They used GLM-5 and Opus 4.6. I wanted to test open models vs proprietary models, so I had OpenClaw use both to build both tools. Open vs Closed Showdown! The verdict? They performed the same. Both app versions had the same features and good UX. Open models are here and they are really good.
- After the tools were built, I had my main OpenClaw agent review the code and test the apps for me, since I was still in the car and not able to run them right then. After a few minutes, it assured me that its own tests passed and the apps were fully functional.
Why
The organization I was visiting is HRF. They work in highly sensitive environments and need productivity tools that minimize how much data is shared with third parties. The translation tool, for example, was demo’d using a public press release. It could just as easily be used while drafting non-public legal comms that are shared with NGOs in different countries.
Both of these tools run locally and use Maple for the AI processing because it is end-to-end encrypted. They could be switched to a local AI model to run fully offline, acknowledging performance and accuracy tradeoffs.
Why did I build them over text and not on a laptop? It had been a long day, I was getting in late at night, and I wanted to use my last remaining minutes to add something extra to the training the next day. OpenClaw and AI models made it so easy.
Conclusion
Anyone can build from anywhere. You don’t need to know how to code, even though I do. You just need to know how to communicate ideas effectively, have a conversation, and refine thoughts. If you can do that, the AI tools can do the rest of the work.
In the past, I would have stayed up for hours trying to scrape together a single, very rough prototype that gave people a glimpse of the concept. Instead, AI did the work for me while I was in the car. When I got to the hotel, I spent five minutes testing these fully functional apps, then went to bed 😴.
If the future of AI is that we all get more sleep, I can get behind that.
Do you trust your bot? Are you reviewing line by line?