Prompt Engineering (or Not)

Author: Kerry Back

Published: May 7, 2025

I use Menti surveys periodically to ask students what they would like to learn more about, and “how to write effective prompts” frequently shows up. In the period just after ChatGPT (then powered by GPT-3.5) was introduced, there was a lot of chatter online about prompt engineering. That has died down somewhat, and for good reason. The models are now good enough that they generally don’t require any special prompting. In particular, you do not need to tell Julius “you are an expert coder …” If any role-setting is even needed now, which is doubtful, it is presumably supplied in the context that Julius passes to the LLMs.

I take advice on this topic from Ethan Mollick, a Wharton management professor who studies these things and whose blog, One Useful Thing, you should certainly follow if you do not already. I direct students to one of Mollick’s posts in particular, which discusses “good enough” prompting. Mollick suggests that instead of worrying about how to write prompts, people should treat an LLM as an “infinitely patient coworker.” I find that I naturally come back to this several times in my course, reminding students, “Remember what Ethan Mollick said: treat AI as an infinitely patient coworker.” Actually, I modify the quote from “coworker” to “colleague.” The term “coworker” suggests a peer to me, but AI can serve equally well as a senior colleague who mentors, a junior colleague who assists, or a peer who collaborates. And it can move seamlessly from one role to another.

A question students ask is whether they should give specific instructions in a chat or just a brief statement of what they want and then see what they get. Here, I say, “think of AI as an infinitely patient assistant.” How detailed should the instructions you give an assistant be? You can be very detailed, but that is a waste of time if the assistant is competent to do the task alone. On the other hand, if you are not detailed, you may find the assistant does something different from what you want, and you have to send it back to do the job again. This process could go on for quite a while and end up being a waste of time. How do you know which path to follow? You need to know your assistant. So, I encourage students to experiment to learn the AI’s capabilities. Some tasks will have appeared frequently in its training data, and it will need very little instruction. It may be less familiar with others and need more. Here, I stress the “infinitely patient” part. You can experiment all you want.

If you find yourself on an iterative path of correcting errors, it is often useful to just start a new chat and this time give detailed instructions. The frustrating part about correcting errors is that the LLM regenerates code each time, and sometimes, in trying to fix one error, it introduces new errors into code that worked fine before (I mean doing something other than what you want, not syntax errors, which it will fix on its own). If this begins to happen, it is probably time for a fresh start.

What about tasks that involve multiple steps? Should you ask for everything you want in a single prompt or just ask for one step at a time? Again, I ask students to experiment. What they consistently report back to me is that it is better to ask for one step at a time. However, one thing to keep in mind is that an LLM can be somewhat forgetful. Even though context windows are large enough today that an LLM can re-read your entire chat each time you prompt it, it cannot be equally attentive to everything in the chat. So, in later steps it can be helpful to prompt it with “use this result that you got before and that result that you got before …” instead of relying on it to know what it should use from previous steps. To ensure that the LLM correctly recovers what it created in previous steps, I find it very helpful to structure each step as “create a Python function that … and save it as a .py file.” Then in subsequent steps, I can say “read the .py file and use the Python function to …” This way, each successive step can be taken without prior code being rewritten.
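To make the workflow concrete, here is a minimal sketch of the pattern. Suppose an early step’s prompt was “create a Python function that computes returns from prices and save it as a .py file,” and the LLM saved something like the following (the file name, function name, and use of pandas are my assumptions for illustration, not output from any particular chat):

```python
# get_returns.py -- hypothetical file saved by the LLM in an earlier step
import pandas as pd

def get_returns(prices: pd.Series) -> pd.Series:
    """Compute simple percentage returns from a price series."""
    return prices.pct_change().dropna()
```

A later step prompted with “read get_returns.py and use the Python function to …” then only has to generate new code that builds on the file, leaving the earlier code untouched:

```python
# Code a later step might generate: reuse the saved function instead of rewriting it.
import pandas as pd
from get_returns import get_returns

prices = pd.Series([100.0, 101.0, 99.5, 102.0])
print(get_returns(prices))
```

The point of the .py file is that it pins down exactly what “before” refers to, so the LLM does not have to reconstruct earlier work from a long chat history.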

What do you think? What works for you? Let us know in the comments below.


Also on Substack at kerryback.substack.com