You could pay a human to read receipts: 1 every 30 seconds (that’s slow!) at $15/hr (twice the US federal minimum wage!). With tax and overhead ($15 × 1.35) that comes out to $20.25/hr, or about $101 all in over 5 hours.
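The back-of-envelope math, spelled out (all rates and hours are the assumptions above):

```python
# Back-of-envelope cost of the human option, using the numbers above.
receipts_per_hour = 3600 / 30       # 1 receipt every 30 seconds -> 120/hr
loaded_rate = 15 * 1.35             # $15/hr plus tax and overhead -> $20.25/hr
hours = 5

total_cost = loaded_rate * hours            # ~$101 all in
receipts_done = receipts_per_hour * hours   # 600 receipts in 5 hours
print(f"${total_cost:.2f} for {receipts_done:.0f} receipts, "
      f"${total_cost / receipts_done:.3f} each")
```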
Sure, sure, a human solution doesn’t scale. But this sort of project makes me feel like we haven’t quite hit the industrialization moment that I thought we had.
Receipt-scanning OCR has been around for a long time. Circa 2010 I ran enough HITs on Mechanical Turk [1] that I got my own account representative at AWS. I wondered what kinds of HITs other people were running, so I thought I would "go native" and try making $100 as a Turker.
I am pretty good at making judgements for training sets; I have many times built data sets with 2,000-20,000 judgements. I can sustain the 2,000 judgements/day of the median Freebase annotator and manage short bursts much higher than that, with mild perceptual side effects.
I gave up as a Turker, though, because the other HITs that were easy to find were tasks like accurately transcribing cell-phone snaps of mangled, damaged, crumpled, torn, poorly printed, poorly photographed, or otherwise defective receipts. I can only imagine that these receipts had been rejected by a rather good classical OCR system. The damage was bad enough that I could not honestly say I had done a 100% correct job on any single receipt, as I was being asked to do.
[1] In today's lingo: multimodal, with prompts like "Is this a photograph of an X?" and "Write a headline to describe this image"
I'd wager Gemini Flash could get decent results. I'd be willing to try it on 100 receipts and report the cost.
>>So I told Codex “we have unlimited tokens, let’s use them all,” and we pivoted to sending every receipt through Codex for structured extraction. From that one sentence, Codex came back with a parallel worker architecture - sharding, health management, checkpointing, retry logic. The whole thing. When I ran out of tokens on Codex mid-run, it auto-switched to Claude and kept going. I didn’t ask it to do that. I didn’t know it had happened until I read the logs.
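The shape of that pipeline can be sketched in a few lines. This is a hypothetical reconstruction, not Codex's actual output: `extract` stands in for the real LLM call, and the checkpoint file name is made up.

```python
import concurrent.futures
import json
import pathlib

CHECKPOINT = pathlib.Path("done.jsonl")   # hypothetical checkpoint file
MAX_RETRIES = 3

def extract(receipt_id: str) -> dict:
    """Stand-in for the structured-extraction call to an LLM."""
    return {"id": receipt_id, "total": 0.0}   # placeholder result

def worker(receipt_id: str) -> dict:
    """Retry logic: give each receipt a few attempts before giving up."""
    for attempt in range(MAX_RETRIES):
        try:
            return extract(receipt_id)
        except Exception:
            if attempt == MAX_RETRIES - 1:
                raise

def run(receipt_ids, shards: int = 8):
    # Resume support: skip anything already checkpointed on disk.
    done = set()
    if CHECKPOINT.exists():
        done = {json.loads(line)["id"]
                for line in CHECKPOINT.read_text().splitlines() if line}
    todo = [r for r in receipt_ids if r not in done]
    # Shard the remaining work across a pool of parallel workers.
    with CHECKPOINT.open("a") as ckpt, \
         concurrent.futures.ThreadPoolExecutor(max_workers=shards) as pool:
        for result in pool.map(worker, todo):
            ckpt.write(json.dumps(result) + "\n")   # checkpoint as we go
            ckpt.flush()
```

Killing and restarting `run` picks up wherever the checkpoint file left off, which is the property that matters for overnight jobs.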
----
For anybody still thinking "my goodness, how wasteful is this SINGLE EXAMPLE": remember that all of the receipts from the article have helped better train whichever GPT is deciphering all this thermal printing.
For a small business owner (like my former self), paying $1,500 to have an AI decipher all my receipts is still a heck of a lot cheaper than my accountant's rate. It would also motivate me to actually keep receipts (instead of throwing them away and guessing), simply by making the monumental task of record-keeping less daunting.
----
>>But the runs kept crashing. Long CLI jobs died when sessions timed out. The script committed results at end-of-run, so early deaths lost everything. I watched it happen three times. On the fourth attempt I said “I would have expected we start a new process per batch.” That was the fix ... Codex patched it, launched it in a tmux session, and the ETA dropped from 12 hours to 3. Not a hard fix. Just the kind of thing you know after you’ve watched enough overnight jobs die at 3 AM.
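A minimal sketch of that process-per-batch fix (the inline worker source is a stand-in for the real extraction script): the parent launches a fresh child process for each batch, and each child commits its results to disk before exiting, so a dying session loses at most the in-flight batch instead of everything.

```python
import pathlib
import subprocess
import sys

RESULTS = pathlib.Path("results.jsonl")   # hypothetical output file

# Stand-in worker, run as its own process; the real one would call the LLM.
WORKER_SRC = """
import json, sys
with open("results.jsonl", "a") as out:
    for receipt_id in sys.argv[1:]:
        out.write(json.dumps({"id": receipt_id}) + "\\n")
"""

def run_batches(receipt_ids, batch_size: int = 10):
    for start in range(0, len(receipt_ids), batch_size):
        batch = receipt_ids[start:start + batch_size]
        # Fresh process per batch: results from completed batches are
        # already on disk if a later batch (or the whole session) dies.
        subprocess.run([sys.executable, "-c", WORKER_SRC, *batch], check=True)
```

Compare this with committing at end-of-run, where any early death loses the entire night's work.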
>>11,345 receipts processed. The thing that was supposed to take all night finished before I went to bed.
Spherical cows aside, though, I do agree with you that I should not consider scalability a given.