I’ve been using ChatGPT quite a bit over the last few weeks to help me with some research and writing, and I’ve settled on a metaphor to describe the experience. In terms of both output and general demeanor, ChatGPT reminds me of times I have had a college intern at my disposal.

In both cases there’s a version of the ‘garbage in, garbage out’ principle up front: you have to provide clear and thorough instructions. If I leave room for interpretation, it’s nearly guaranteed that both a human intern and ChatGPT will interpret things in comically unpredictable ways. But even when the instructions are clear, the work they hand in is decidedly weak and amateurish. I provide feedback, and they make some changes accordingly. Rinse and repeat. After some number of iterations, in the best case, the deliverable is passable, something I can settle for (though still not great).
Though the quality of the output is about equal, the iteration cycle is much faster with ChatGPT than with a human intern, so points for that, I suppose (sincere apologies, interns). And I still get the agreeable enthusiasm bordering on sycophancy.