I Know Kung Fu
7 DAYS OF AI-ASSISTED CODING EXPERIMENTS: Part 2 of 4

I am a fullstack developer. I have been freelancing since 2010 (coding since 2007). Over the years I have worked as a one-person team, building backends and microservices, and managing large AWS infrastructure and large NoSQL databases (Cassandra, MongoDB, and Elasticsearch). Being a pre-NoSQL-revolution programmer, I still keep PostgreSQL and MySQL as sharp knives in my toolbelt.
After 2015, I started taking on small and medium-sized projects and MVPs for startups and businesses, and delivering complete solutions. The deliverables in these projects included mobile apps (React Native), desktop apps (Electron), webapps (React), resilient backends (NodeJS, MongoDB/DocumentDB, DynamoDB, Firestore, Cassandra), and infrastructure/IaC (Terraform).
I am interested in, but have no experience with, the following tech: ML, Blockchain, Gene Editing, Rust, CRISPR.
Oh, and I also authored a couple of editions of the Mastering Apache Cassandra book as the sole author.
Building projects from scratch with a modern, common stack is easy. It always was, even before AI-assisted coding went mainstream: writing a webapp, deploying it to any cloud, making it scale, collecting stats, wiring up alerts, and making payment gateways work was so mind-numbingly easy that I could do it half asleep. It's the amount of typing that bothered me, and AI seems to have solved that. So I decided to up the ante for AI: let's see if it can actually do a messy cleanup job, and then whether I can use it to learn something new quickly. I had mixed results.
Day 4: Reanimating a 2016 WebApp in 2026
Back in 2016, the MERN stack was very different from today. React wasn't yet function-oriented, there were no hooks, Redux required complicated "stitching" to get working, and Babel was a complicated mess. Node wasn't nice either. TypeScript wasn't fully baked, ECMAScript lacked a lot of syntax niceties, and promises still required Bluebird (or Q). MongoDB ORMs, like Mongoose, didn't support (schema) types properly either. But this was fully functional, solid software, with a decade of production service behind it.
The software is still running in production, thanks to containerization providing infrastructure for the 2016 stack. The client I did the project for approached me multiple times in the last couple of years to add new features, and I said no, dreading getting my foot entangled in a mess I no longer felt familiar or comfortable working with.
What had been nagging me in my experiments with AI-assisted coding so far was that all the projects I had done were what are commonly known as greenfield projects. No real figuring out, no existing coding patterns to learn, no deprecated libraries to replace without rewriting the entire codebase. I had almost entirely forgotten how things were done a decade ago, and if I had to do it myself, I would rather rewrite the whole codebase. Well, this time I was overenthusiastic about AI, and unsurprisingly it did poorly.
Attempt 1: Robot, modernize this code!
I woke up from my daydream of AI freeing us developers from the nagging nitty-gritty of coding so we could focus on architecture when I tried this. I let the LLM read and understand the code, then gave it the target stack: update the codebase to React 19, keep the Bootstrap CSS styles (yeah!) untouched, update Redux to its modern form, and bake TypeScript into the backend and frontend code. The prompt was fine-grained and explicit; I provided guardrails and tests it had to pass. Then I clicked "Go!".
It whirred and grunted and whistled, stumbled and recovered, and went on like this for a good 21 minutes. It exhausted more than 90% of my daily token quota and presented me with non-building, non-functional hot garbage. In desperate attempts to fix it, I burned through the rest of the day's tokens.
"Well, maybe this brand’s LLM isn’t so great at learning, editing, and fixing the code.", I thought. Subsequently, I bought two more LLM subscriptions -- top three LLMs for coding all resulting in the similar fate, a non-functioning Frankenstein’s monster. Time to call it a day.
Attempt 2: Rewrite the app learning from the codebase
The next day, still reluctant to put more effort into refactoring, I asked the LLMs to learn the code and create a replica app (in UI and functionality) using the modern stack. Before lunch, I had a web app that was nothing like the original. It had similar text, and some (MongoDB) collections were similar, but the LLM got the app completely wrong.
Attempt 3: Guide the LLM like you're buddy coding with a junior developer
I realized the LLMs were not there yet. They were very close, and very soon they will likely be able to do the messy jobs. So, if I wanted to refactor this, I would have to do a little more legwork than I had expected.
Since I had developed the whole system, I knew where I would start if I had to refactor this code myself, which files to touch first, and what changes needed to be made in what order. I didn't want to switch to another state management library, because this was a proof of concept for whether LLMs can help in real-life messy scenarios like this. So, I started giving the AI piecemeal tasks in the order I would do them.
I started with updating dependencies and Node versions, removing the old Babel config, and adding Prettier and modern tooling. I realized that back in 2016 I hadn't cared to segregate data logic from UI logic. I carefully went file by file, refactoring the whole UI. It was broken the entire time, and there were more occasions than I would have liked where I had to fix the code by hand because the LLM just wouldn't do the job. Once I was done, the UI was working and the backend was still old.
Evening was approaching, and I was frustrated. After a quick look at the backend code, I realized I could keep the existing code as-is and only update the Node.js libraries. So, instead of changing every function, I decided to use a completely different ORM for the new code, written in TypeScript, and kept the old code alongside it.
Bottom line: it was frustrating, but AI was a massive help. I finished in a day a job that would have taken me something like 30 days. If it had been a purely manual effort, I would rather have rewritten the whole thing; that would have been faster.
Day 5: Deep Diving Android Development
The last time I touched Android development was way back in 2014. It was frustrating. The IDE was clunky, the emulator was slow, the horrible XML layouts sent chills down my spine, and I didn't even dive deep; I was just building an application that makes API calls and renders screens. Since then, React Native has been my favorite way to develop mobile applications.
For the last few years, I have had an annoying problem with my Android 12 phone: I had to play music to change the audio destination from Bluetooth to the phone speaker or to another Bluetooth headset. All I wanted was to open the audio output selector. How hard could it be? "Pretty easy!" (All hard projects start like this.)
Learning Kotlin
I've used Kotlin for server apps, but I can't describe myself as an expert Kotlin developer. I can read and write the code, but I still roll my eyes at some of the syntactic sugar the language adds. It's nice, though. A million times better than Java.
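For readers who haven't touched Kotlin, this is the kind of sugar I mean. A throwaway illustration (not code from the app): a data class, null-safety, and an expression-style when doing what would take a screenful of Java.

```kotlin
// Made-up example: data class plus a subject-less `when` expression.
data class OutputDevice(val name: String, val isBluetooth: Boolean)

fun describe(device: OutputDevice?): String = when {
    device == null     -> "no device"
    device.isBluetooth -> "bluetooth: ${device.name}"      // smart-cast to non-null here
    else               -> "wired or built-in: ${device.name}"
}
```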
Since I've never used all the bells and whistles that come with Kotlin, vibe coding with an assistant that explains what is being done and why, and gives me links for further reading, seemed like a good idea. I can comfortably say that the amount I learned in 8 hours that day is more than what I could have learned in a week.
Early Failure
As it turned out, there is no way to switch audio output programmatically in Android 12 and below; it's a platform-level restriction. The only way is to play music and have the user choose the audio device from the pull-down menu of the card that shows the playing media.
It's amazing: the number of hoops I jumped through trying to get this working on Android 12, only to finally arrive at this conclusion, is mind-boggling. In an unassisted, pre-AI, post-Google world, these experiments and attempts would easily have eaten up a couple of days.
The AI initially pretended it was possible to programmatically select an audio device on Android 12 without having any audio streaming.
When that failed during testing, I proposed playing silent audio whenever nothing was playing, just to make the audio switcher available. The AI spewed out some non-working code.
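For context, the concept I was pitching looks roughly like the sketch below: keep a looping buffer of silence playing on an ordinary media AudioTrack so the system treats the app as active media. This is my own sketch of the idea, not the assistant's broken output, and as the next paragraphs explain, it still doesn't let an app force-open the output selector on Android 12.

```kotlin
import android.media.AudioAttributes
import android.media.AudioFormat
import android.media.AudioTrack

// Sketch only: one second of 16-bit mono silence, looped forever, tagged as media.
fun buildSilentLoopingTrack(): AudioTrack {
    val sampleRate = 44_100

    val track = AudioTrack.Builder()
        .setAudioAttributes(
            AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build()
        )
        .setAudioFormat(
            AudioFormat.Builder()
                .setSampleRate(sampleRate)
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
                .build()
        )
        .setTransferMode(AudioTrack.MODE_STATIC)   // whole buffer up front so it can loop
        .setBufferSizeInBytes(sampleRate * 2)      // one second of 16-bit mono frames
        .build()

    track.write(ShortArray(sampleRate), 0, sampleRate)  // all zeros: silence
    track.setLoopPoints(0, sampleRate, -1)               // -1 = loop indefinitely
    return track                                          // caller calls play() and release()
}
```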
After some Google searches and reading old forums, I realized an app can't force-show the audio output device selector on Android 12. (As we will see, I later learned that it's a restriction at the SDK level.) The AI agent concurred.
I kept pressing the AI hard because I had an old phone with Android 12 and wanted this to work. So I asked it to look things up on the internet and tell me exactly what changed, and when, that allowed this to work in Android 13. The AI took its sweet time and gave me the exact location in Android's source code where apps were allowed to show the audio switch menu, and it was not for Android 12; that code belonged to Android 13+.
Although I found out this is fundamentally impossible to do on Android 12, I had failed with a satisfying "why" in 3 hours. Figuring this out on my own in almost unfamiliar coding/SDK territory could easily have taken a day, at least, and I likely wouldn't have had a good answer to defend. So, it was a bittersweet moment.
Getting a functioning app and knowing Android APIs
It has always bothered me that all my projects so far were web services: content delivery, feed management, or business applications. It also bothered me that people don't use PWAs instead of building apps; the app gives them no extra benefit except being downloadable from the App / Play stores. (But that's another topic for another day.)
This time I wanted to play with Android's APIs and do native things: writing an app that has a Quick Settings tile (the pull-down menu where you have the Wi-Fi tile, the Airplane Mode tile, etc.), intercepting media playback requests, and making Android API calls to show the audio output switcher drawer.
With newfound excitement about being able to build while learning, I came to understand the APIs and built an app that has:
A Quick Settings tile to change the audio output device (for Android 13 and above); a minimal sketch of the tile service follows this list
A fancy UI written using Jetpack Compose, which feels oddly similar to React (or other modern web dev tooling)
A rule engine and an observer that prompt the user to switch to a preferred audio output device, if they have set one (a sketch of the observer also follows below). (Of course, the ideal would have been to switch to the desired output device automatically, but it seems Android doesn't like third-party apps doing that.)
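To give a flavor of the quick tile piece, here is a minimal sketch of a quick settings tile in Kotlin. This is not the app's actual code: AudioSwitchTileService is a name I'm using for illustration, the real tile launches the output picker instead of just toggling its own state, and the service also has to be declared in AndroidManifest.xml with the BIND_QUICK_SETTINGS_TILE permission.

```kotlin
import android.service.quicksettings.Tile
import android.service.quicksettings.TileService

// Illustrative only; requires the usual TileService manifest declaration
// with android.permission.BIND_QUICK_SETTINGS_TILE.
class AudioSwitchTileService : TileService() {

    // Called whenever the tile becomes visible in the pull-down shade.
    override fun onStartListening() {
        super.onStartListening()
        qsTile?.apply {
            label = "Audio output"
            state = Tile.STATE_INACTIVE
            updateTile()
        }
    }

    // Called when the user taps the tile. The real app opens its output picker
    // here; this sketch just flips the tile state so the wiring is visible.
    override fun onClick() {
        super.onClick()
        qsTile?.apply {
            state = if (state == Tile.STATE_ACTIVE) Tile.STATE_INACTIVE else Tile.STATE_ACTIVE
            updateTile()
        }
    }
}
```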
I will update this post once the app is approved on Android 13. I am still testing it on a borrowed phone.
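In the meantime, here is roughly the shape of the observer half mentioned in the list above: listen for audio output devices appearing via AudioManager's device callback and check them against a stored preference. AudioDeviceCallback and AudioManager are the real framework APIs; PreferredDeviceRule and OutputDeviceObserver are illustrative names, not the shipped code.

```kotlin
import android.content.Context
import android.media.AudioDeviceCallback
import android.media.AudioDeviceInfo
import android.media.AudioManager

// Illustrative rule: "prefer this kind of output device", e.g. AudioDeviceInfo.TYPE_BLUETOOTH_A2DP.
data class PreferredDeviceRule(val deviceType: Int)

class OutputDeviceObserver(
    context: Context,
    private val rule: PreferredDeviceRule,
    private val onPreferredDeviceAvailable: (AudioDeviceInfo) -> Unit,
) {
    private val audioManager =
        context.getSystemService(Context.AUDIO_SERVICE) as AudioManager

    private val callback = object : AudioDeviceCallback() {
        override fun onAudioDevicesAdded(addedDevices: Array<out AudioDeviceInfo>) {
            // When a matching output device shows up, hand it to the UI layer,
            // which prompts the user to switch (Android won't let us switch for them).
            addedDevices
                .firstOrNull { it.isSink && it.type == rule.deviceType }
                ?.let(onPreferredDeviceAvailable)
        }
    }

    fun start() = audioManager.registerAudioDeviceCallback(callback, /* handler = */ null)
    fun stop() = audioManager.unregisterAudioDeviceCallback(callback)
}
```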
The End of the Honeymoon Period
The thing that seemed like an unstoppable force screaming through social media, crushing software jobs through relentless layoffs, the thing The Big Players claim will become our god: AI is not there yet.
Don't get me wrong, it's powerful: more powerful than Google search, which was more powerful than community forums, which were more powerful than reading Complete Reference books. It's just another jump in efficiency, like there always has been, but this time the jump is really high.
Coming down from the crest of the early, exhilarating AI experience into the trough of messy interactions with LLMs, the AI agents seem less god-like and more like excellent pattern matchers. Sometimes it feels smart, and most of the time it does a good job even with poorly written specs. It's an important tool in my developer's pocket: the most powerful one, for sure!
The cover image was generated by ChatGPT 5.2 with the prompt "an image of a coder in wild west style with a smoking gun and a broken Thinkpad... the opponent has knives. Kinda like the coder brought a gun to sword fight."



