Yesterday at work there was a call for a hero. A mass of unformatted, unordered data needed its important parts stripped out and arranged neatly into XML insert statements for a production system. Time was short.
When I was faced with this challenge I set to work using programming patterns I first learned while working on this very journal. The year was 2006, and a hard drive crash in a data centre in Texas had taken five weeks of important journal entries with it. With no local backups, the only way I had to resurrect them was to write a script that would churn through Google’s cache of the entries, rip out the important HTML parts, and filter all the data into neat SQL insert statements.
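The scrape-and-rebuild approach amounts to three steps: grab the cached HTML, rip out the parts you care about, and print them back out as SQL inserts. A minimal sketch of the idea, assuming hypothetical markup (an `h2.title` heading and a `div.entry` body) and a made-up `entries` table — the real script and schema aren't shown here:

```python
import re

def extract_entry(html):
    # Pull the title and body out of a cached page.
    # The class names here are illustrative, not the journal's real markup.
    title = re.search(r'<h2 class="title">(.*?)</h2>', html, re.S).group(1).strip()
    body = re.search(r'<div class="entry">(.*?)</div>', html, re.S).group(1).strip()
    return title, body

def to_sql(title, body):
    # Double up single quotes so the values are safe inside SQL string literals.
    esc = lambda s: s.replace("'", "''")
    return f"INSERT INTO entries (title, body) VALUES ('{esc(title)}', '{esc(body)}');"

# A stand-in for one page fetched from Google's cache.
cached = '<h2 class="title">Lost Week</h2><div class="entry">It rained.</div>'
print(to_sql(*extract_entry(cached)))
```

Loop that over a few dozen cached pages and you get a file of insert statements ready to replay against the database.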
I remember running that script on my journal. I’d tested it first, so I was confident it would work, but watching it run and re-create two dozen entries in the blink of an eye was a proud moment for me. I felt the same way yesterday as I watched six hundred thousand records zooming into a production data repository at the same sort of speed. I didn’t mention to anyone that my experience in doing this kind of thing came solely from bradism.com, using 0.00004% of the volume of data.
Now I know I'm ready though, ready for enterprise journalling.