Zero to iOS Feature: AI-First Development in Unfamiliar Territory
Hey 👋
Last week I did something I never thought I’d do: I built a feature in an iOS app.
Some context: I’m a web developer. Backend preference. I’ve never touched Swift, never opened Xcode with intent, and definitely never worked on the Clay codebase (our internal iOS app at Automattic).
But I had an experiment in mind: how far can AI take me in completely foreign territory?
To make it interesting, I did this alongside Ryan — a colleague who’s an experienced iOS developer. Same feature, same timeframe. He used his expertise. I used… Claude.
The Feature
Contact Photo Matching. The idea is:
- Scan the user’s photo library for faces
- Match them against contact avatars
- Show matched photos in the contact view
- Let users confirm/reject, share, or set as avatar
All on-device using Apple’s Vision framework and Core ML. Sounds complicated? It is. At least, it would be for someone who’s never written Swift.
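For the curious: the detection half of that is less code than you'd think. Here's a rough sketch (my own minimal version, not the actual Clay code) of running face detection on a single image, fully on-device:

```swift
import UIKit
import Vision

// Rough sketch: detect faces in one image with the Vision framework.
// Not the Clay implementation, just the general shape of the call.
func detectFaces(in image: UIImage, completion: @escaping ([VNFaceObservation]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }

    let request = VNDetectFaceRectanglesRequest { request, _ in
        completion((request.results as? [VNFaceObservation]) ?? [])
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])

    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try handler.perform([request])
        } catch {
            completion([])
        }
    }
}
```

Matching the detected faces against contact avatars builds on top of this; detection is just the entry point.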
Act 1: Surprisingly Fast Start
Day 1, morning. I started by creating a PRD with Claude — phases, scope, architecture decisions. Within a few hours I had a working prototype: photo grid, face detection, basic UI.
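To give a flavor of those first hours: the photo grid starts with a PhotoKit fetch, roughly like this (a sketch of the kind of code Claude generated, not the prototype itself):

```swift
import Photos

// Rough sketch: fetch the most recent image assets to feed a photo grid.
// Assumes photo library access has already been granted
// (PHPhotoLibrary.requestAuthorization).
func fetchRecentPhotos(limit: Int = 200) -> [PHAsset] {
    let options = PHFetchOptions()
    options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
    options.fetchLimit = limit

    let result = PHAsset.fetchAssets(with: .image, options: options)
    var assets: [PHAsset] = []
    result.enumerateObjects { asset, _, _ in
        assets.append(asset)
    }
    return assets
}
```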
I remember thinking: “This is going to be easy!”
I was wrong.
Act 2: The Wall
Day 1, afternoon. Face matching results were… random. Same image would give different confidence scores every run. 0.85, then 0.91, then 0.78. For the exact same face.
I went in circles with Claude for hours:
Me: “Results are inconsistent”
Claude: “Let me add more logging…”
Claude: “The values look correct”
Me: “But they change every time!”
Claude: “That’s strange. Maybe it’s a Vision Framework bug on the simulator?”
Me: “That can’t be right…”
Claude kept insisting. I kept dismissing. We tried a dozen other fixes. Nothing worked.
Act 3: Fresh Start
Day 2. Fresh conversation, fresh mind.
This time I tried a different approach: “Build me a debug view.”
I asked Claude to create a visualization showing:
- Face bounding boxes overlaid on images
- Landmark points (eyes, nose, mouth)
- Raw numerical values
- History of values across runs
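The overlay part of that view boils down to something like this in SwiftUI (a trimmed sketch; the real view also drew landmark points and kept a history of values):

```swift
import SwiftUI
import UIKit
import Vision

// Rough sketch: draw each detected face's bounding box over the source image.
// Vision coordinates are normalized with a bottom-left origin, so Y is flipped.
struct FaceDebugOverlay: View {
    let image: UIImage
    let faces: [VNFaceObservation]

    var body: some View {
        Image(uiImage: image)
            .resizable()
            .scaledToFit()
            .overlay(
                GeometryReader { geo in
                    ForEach(faces, id: \.uuid) { face in
                        let box = face.boundingBox
                        Rectangle()
                            .stroke(Color.red, lineWidth: 2)
                            .frame(width: box.width * geo.size.width,
                                   height: box.height * geo.size.height)
                            .position(x: box.midX * geo.size.width,
                                      y: (1 - box.midY) * geo.size.height)
                    }
                }
            )
    }
}
```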
And suddenly I could see the problem. Landmark values were drifting on every detection run:
Run 1: leftEye.x = 0.3421
Run 2: leftEye.x = 0.3398
Run 3: leftEye.x = 0.3445
Run 4: leftEye.x = 0.3367
Same image. Same face. Different landmarks.
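Pulling those numbers out is trivial once you know where to look; the logging was roughly this (format simplified):

```swift
import Vision

// Log the first left-eye landmark point so drift across repeated runs is visible.
// The "Run N" format is just for eyeballing the values.
func logLeftEye(of face: VNFaceObservation, run: Int) {
    guard let point = face.landmarks?.leftEye?.normalizedPoints.first else { return }
    print("Run \(run): leftEye.x = \(String(format: "%.4f", Double(point.x)))")
}
```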
Ryan happened to be nearby. “Want to test on my device?”
Seconds later: values consistent. The simulator was the problem.
Claude was right. It’s a known Vision framework bug on the simulator (since 2021!). VNDetectFaceLandmarksRequest in CPU-only mode returns inconsistent landmarks. Only on the simulator. Works fine on real hardware.
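In hindsight, even a tiny guard like this would have saved me hours (a sketch; the warning is just a convention I’d adopt now, not an official workaround):

```swift
import Vision

// The landmarks request itself is standard Vision; the simulator warning is
// something I'd add after this experience, not an Apple-provided check.
func makeLandmarksRequest() -> VNDetectFaceLandmarksRequest {
    #if targetEnvironment(simulator)
    // Landmark output can drift between runs on the simulator;
    // only trust these values on a real device.
    print("⚠️ Face landmarks on the simulator may be non-deterministic.")
    #endif
    return VNDetectFaceLandmarksRequest()
}
```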
I had wasted hours dismissing something the AI correctly identified early on.
What I Learned
1. AI accelerates unfamiliar territory
Would I have learned Swift, Xcode, the Vision framework, AND built this feature in two days? No chance. AI didn’t replace domain knowledge — it lowered the barrier to entry dramatically.
2. When AI insists on something, investigate properly
I kept dismissing the simulator bug theory because it felt wrong. But Claude mentioned it multiple times. I should have validated it earlier instead of going in circles.
3. Fresh starts beat endless debugging
Day 1 ended with frustration and a polluted conversation context. Day 2’s fresh start brought immediate progress. Both humans and AI benefit from “sleeping on it.” Context pollution is real.
4. Debug visualizations are game-changers
Abstract bugs become obvious with the right visualization. Building debug views used to take hours. Now Claude builds them in minutes. Ask for debugging tools early.
5. Parallel agents = free time
I ran independent tasks in parallel — multiple Claude sessions handling different pieces. While waiting for agents to finish, I started a separate session and built… a Bomberman clone. Because why not.
The Result
Two developers. Same feature. Very different starting points. Similar end results.
Ryan’s code is probably cleaner. But mine works. In 2 days. With zero iOS experience.
AI didn’t make me an iOS developer. But it let me pretend to be one long enough to ship something useful.
This is part of a series where I’m trying to make sense of what’s happening with AI. Not hype, just thoughts. More soon.