10x to 100x Engineer

7 DAYS OF AI-ASSISTED CODING EXPERIMENTS: Part 1 of 4


I am a fullstack developer. I have been freelancing since 2010 (coding since 2007). Over the years I have worked as a one-person team building backends and microservices, managing large AWS infrastructure, and managing large NoSQL databases (Cassandra, MongoDB, and Elasticsearch). Being a pre-NoSQL-revolution programmer, I still love PostgreSQL and MySQL, and keep them as sharp knives in my toolbelt.

After 2015, I started taking on small and medium-sized projects and MVPs for startups and businesses, and delivered complete solutions to them. Deliverables in these projects included mobile apps (React Native), desktop apps (Electron), webapps (React), resilient backends (NodeJS, MongoDB/DocumentDB, DynamoDB, Firestore, Cassandra), and infrastructure/IaC (Terraform).

I am interested in, but have no experience with, the following tech: ML, blockchain, gene editing (CRISPR), and Rust.

Oh, and I also authored a couple of editions of the Mastering Apache Cassandra book as the sole author.

I have been using an AI assistant for a couple of years now, but until January 2026 I was using AI agents more like a better auto-complete and a faster web search. It was great. In mid-January 2026, I wanted to build a simple pain-tracking app for myself: completely offline, no login required, ad-free, simple, and showing trends so that I could understand flare-ups, time-of-day patterns of the pain, the good days and bad days, and how average pain varies over time in different body parts.

This is a fairly simple app. A week of coding to build a functioning app, nothing polished. 10 days to polish things up. 15 to build a release candidate. About two weeks of work for me. Then I thought, well, let's check how good AI coding agents have become. So, I swiped my card for a month-long subscription to an AI service and fired up my IDE. What I am going to tell here is how things snowballed: from casually building an app on a stack (React, React Native, TypeScript, SQLite, and plotting libraries) in which I can churn out code in my dreams, to diving deep into Android's MediaRouter2 and reading Android's codebase to understand why I was targeting a certain minimum SDK version -- all while progressively building deliverables at a mind-boggling speed, without much fear of not knowing what the AI is doing.

Since it’s a long article, I am going to split it into multiple smaller posts.

Day 1: Pain Recorder App

Tuesday morning, sitting next to my mom in a hospital after her successful surgery for a life-threatening condition, I started to feel my DISH (Diffuse Idiopathic Skeletal Hyperostosis) bothering me. As I popped a Gabapentin, I realized I had been delaying my visit to the doctor to figure out what's going on. I always fumble when doctors ask what hurts, how much, when it hurts the most, and so on. It's usually a recollection of the last couple of days; and if those happened to be good days, it goes like, “there wasn’t much pain in the last couple of days, but it was hurting before… I can’t clearly recall.” I feel like doctors become dismissive once they hear this. So, sitting there, I decided to write an app. Glued to an uncomfortable chair with my 9-year-old MacBook Pro with its infamous broken keyboard -- I wished I had a junior engineer to write the code while I oversaw, because the stack (React Native) was so familiar to me that there wasn't anything new to learn with this project.

I decided to give an AI agent a chance. I prepared a very clear phase-wise development plan, created a colorblind-friendly palette, wrote down unit tests that the agent must generate and pass, and exported my Balsamiq mockups as PNGs to serve as the UI sketches. I initialized a new Expo project and started feeding the AI. Lo and behold, after 4 hours of careful instructions and validating every phase, I had a production-ready app. It blew my mind.
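
To give a flavor of what the agent was building, here is a minimal sketch of the kind of data layer such an app sits on, assuming expo-sqlite's async API (Expo SDK 51+). The table name, fields, and queries are illustrative, not the app's actual schema:

```ts
// Hypothetical sketch of a pain-entry data layer -- not the app's actual schema.
// Assumes expo-sqlite's async API (Expo SDK 51+).
import * as SQLite from 'expo-sqlite';

export type PainEntry = {
  recordedAt: string; // ISO timestamp, used to derive time-of-day patterns
  bodyPart: string;   // e.g. 'thoracic-spine'
  intensity: number;  // 0-10 pain scale
  note?: string;
};

export async function openPainDb() {
  const db = await SQLite.openDatabaseAsync('pain.db');
  await db.execAsync(`
    CREATE TABLE IF NOT EXISTS pain_entries (
      id INTEGER PRIMARY KEY AUTOINCREMENT,
      recorded_at TEXT NOT NULL,
      body_part TEXT NOT NULL,
      intensity INTEGER NOT NULL CHECK (intensity BETWEEN 0 AND 10),
      note TEXT
    );
  `);
  return db;
}

export async function logEntry(db: SQLite.SQLiteDatabase, e: PainEntry) {
  await db.runAsync(
    'INSERT INTO pain_entries (recorded_at, body_part, intensity, note) VALUES (?, ?, ?, ?)',
    e.recordedAt, e.bodyPart, e.intensity, e.note ?? null,
  );
}

// Average pain per body part over the last N days -- the kind of query
// that feeds the trend charts.
export async function averageByBodyPart(db: SQLite.SQLiteDatabase, days: number) {
  return db.getAllAsync<{ body_part: string; avg_intensity: number }>(
    `SELECT body_part, AVG(intensity) AS avg_intensity
       FROM pain_entries
      WHERE recorded_at >= datetime('now', ?)
      GROUP BY body_part`,
    `-${days} days`,
  );
}
```

SQLite is a natural fit for an offline-only, no-login app: everything stays on the device, and the trend views are just SQL aggregates.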

(I will paste the URL here once the Play Store approves my app; it seems you can’t publish a health-related app as an individual.)

Day 2: A website of my choice

In the evening, I decided I would like to publish it on the Play Store, which demanded a location where users can read the privacy policy and terms and conditions. I got workslocally.com registered and decided to use GitHub Pages to publish a simple Markdown-driven site. It looked ugly. Signed off for the day.

The newfound confidence in the AI’s capability told me to make a better website, still simple but not unoriginal. I have been wanting to use Jekyll for quite some time now, and this was the time. But Ruby on my geriatric computer, full of Ruby tombstones, wouldn’t work, and my love for Ruby isn’t that great. I found Eleventy -- similar in concept, and a quick 30 minutes of reading showed me what it does; it’s pretty slick. Again, I can build a website in 4 hours with good CSS and mobile support, but why not make the AI worth my dollars?

Within 30 minutes, I had a functioning skeleton of a website with placeholder text, and got a GitHub Actions workflow written to build the site and deploy it to GitHub Pages every time a commit lands on the main branch.
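
If you haven’t seen Eleventy before, the config really is this small. Here is a sketch of the shape of what the agent scaffolded; the directory names are illustrative, not my repo’s actual layout:

```js
// eleventy.config.js -- a minimal sketch, not my site's actual config.
module.exports = function (eleventyConfig) {
  // Copy static assets (CSS, images) straight through to the output.
  eleventyConfig.addPassthroughCopy('assets');

  return {
    dir: {
      input: 'src',    // Markdown content and templates live here
      output: '_site', // what the workflow publishes to GitHub Pages
    },
  };
};
```

The GitHub Actions side then boils down to running `npx @11ty/eleventy` and publishing the `_site` output to GitHub Pages.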

I filled in the content, merged into main, and you can now see https://workslocally.com

Days 2 and 3: An Invoice Reader with Image Processing

I’ve been bugged by an auditor to automate the invoice reading that his team does. His team takes hundreds to thousands of invoice pictures and PDFs of printed or handwritten invoices, and extracts various pieces of information from them to create a unified Excel sheet. While I am impressed by the capabilities of modern LLMs, the fact that they are nondeterministic made me uneasy. Reading handwritten invoices, possibly in Hindi, made me even less confident. But I was on a rampage. Bring it on.

I decided to use FastAPI, Gemini, and React+Vite. FastAPI and Vite weren’t really things I had used in professional development; a few toy projects here and there. I drew a back-of-a-napkin UI diagram and got it approved by the client; and instead of providing exact mockups to the coding agent, I gave it vague ideas and let it do the design for me. The AI spat out the code, and I (git) committed whenever a UI or UI component turned out to my liking, and modified or undid it and tweaked the prompt when I didn’t like the result.
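
To give an idea of the pieces involved, here is a sketch of the upload call on the React+Vite side. The /api/invoices endpoint and the response fields are hypothetical; the real contract came out of iterating with the agent:

```ts
// Hypothetical sketch of the invoice-upload call from the React+Vite frontend.
// The endpoint path and response shape are illustrative, not the app's real API.
export type ExtractedInvoice = {
  vendor: string;
  invoiceNumber: string;
  date: string;
  totalAmount: number;
  confidence: number; // how sure the backend's LLM extraction is
};

export async function uploadInvoice(file: File): Promise<ExtractedInvoice> {
  const form = new FormData();
  form.append('file', file);

  // FastAPI would receive this as an UploadFile; Gemini does the
  // extraction server-side and the structured result comes back as JSON.
  const res = await fetch('/api/invoices', { method: 'POST', body: form });
  if (!res.ok) {
    throw new Error(`Upload failed: ${res.status}`);
  }
  return res.json();
}
```

A confidence field like the one above is one way to surface the LLM’s nondeterminism to the auditors instead of hiding it.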

Six hours of effort and an MVP was ready. After half an hour of discussion and a demo, I spent a couple more hours building a production-ready webapp using ReactJS, FastAPI (Python), and Postgres. The client, a medium-sized auditing firm, wasn’t really keen on SRE and uptime, and had an on-premise Ubuntu server. I got it deployed using Docker Compose and gave a primer to their admin person. As of now, they have been trying this software for six weeks and liking it. There might be more feature requests.

So far, so good

These are low-hanging fruits for LLMs in early 2026. I knew a greenfield, run-of-the-mill project wasn't going to be a toughie for an LLM. It did hallucinate a little here and there, but still did better than a junior engineer. I wanted to drag the LLM into some real messes, and as we will see in the next posts, it's not there yet, but the gap is closing real fast.

The cover image was generated by ChatGPT 5.2 using the following prompt: "a tron like bike with a programmer like guy on it zooming fast on a road made of colorful coding view of dark UI of an editor and it's a curve. The shot is a low shot taken from the road level... the scene is like '80s sci-fi movies, and so are the colors."