In today’s column, I am continuing my ongoing coverage of prompt engineering strategies and tactics that aid in getting the most out of using generative AI apps such as ChatGPT, GPT-4, Bard, Gemini, Claude, etc. The focus this time will be on the use of a significant prompting strategy that tells generative AI to re-read your stated question or problem that needs to be solved. Doing so can substantially boost your AI-generated results and is a quite worthy technique in your prompt engineering skillset.

If you are interested in prompt engineering overall, you might find of interest my comprehensive guide on over fifty other keystone prompting strategies, see the discussion at the link here.

Get yourself ready for yet another engaging and useful journey that intertwines art and science when it comes to wisely composing prompts and getting the most out of generative AI.

Why Invoking A Second Pass Can Be Noticeably Handy

Let’s start by considering why humans sometimes opt to re-read a question.

To quickly clarify, I do not intend to anthropomorphize AI, so please do not falsely equate what humans do cognitively with what generative AI is doing. I am only drawing a distant parallel to showcase facets that can arise when re-reading content. In a moment, I’ll explain the mathematical and computational aspects of re-reading that come into play within generative AI.

Back to the human considerations.

Suppose that someone hands you a piece of paper that has a written question on it. You would undoubtedly read the question before trying to come up with an answer (I hope you would read it first or else your derived answer might not be pertinent to the question being posed). Upon having read the question, you regale the person with a divine answer. Good for you.

Imagine though that the question is a hard one. You might be tempted to read the question a second time. This re-reading of the question might aid you in better understanding the nature of the question. This in turn might aid you in coming up with a better answer than if you had tried to answer based on your first or initial reading of the question.

So far, so good.

There is also the factor that by having read the question a second time, your first reading might have primed your mind. It got your mental juices going and into the ballpark of whatever the question entailed. The second reading will therefore allow you to more readily gauge the context and nuances of the question. In a sense, you have reduced the element of surprise that customarily occurs when first encountering a question.

You don’t necessarily re-read questions all the time. If a question is easy, there probably is little value in re-reading it. The same goes for a very short question. The odds are that a notably brief question doesn’t need a re-reading either.

Okay, we’ve established that re-reading a question can be advantageous.

That being said, I dare say that if we always re-read every question we encountered, the world would seem a lot harder to cope with. Re-reading takes added time, so you might be delaying your answer by carrying out the re-reading action. A trade-off exists between the effort and time needed to do a re-reading versus the added benefits of doing a re-reading.

We can somewhat suggest that there could be similar benefits and tradeoffs when having generative AI process a question that someone has entered as a prompt into the AI.

Here’s the deal.

Most of today’s popular generative AI apps tend to do a single pass on an entered prompt. The text is essentially examined on a unidirectional basis, moving from left to right. Word by word, the sentence or sentences that you entered are examined via various mathematical and computational processes. As a side note, there are exceptions to this rule of thumb, such that some generative AI apps work on a bidirectional basis or otherwise examine an entered prompt in a more back-and-forth manner.

Let’s focus on the unidirectional ones.

If a question that you entered is being parsed on a one-time basis, there is a chance that some subtleties might be inadvertently glossed over. A second pass would potentially aid in doing a further detailed examination of the question. Kind of a clean-up of whatever might have been missed or miscalculated about the question.

A re-reading could also augment the AI contextual facets. The initial context that was calculated in the first pass can feed into what is computationally happening during the second pass. For example, your question might be asking about Abraham Lincoln. The internal retrieval of data about the life of Lincoln is instantiated on the first pass. During a re-reading or second pass, the retrieved context is already there and raring to go.

Doing Some Unpacking About Re-Reading

I think you can readily discern that there is a possible added value to invoking a second pass of your entered prompts.

You could simply tell the AI in your prompt to go ahead and re-read the prompt. That’s all you need to do. Easy-peasy. Just give an explicit instruction and then leave the driving to the generative AI.

When I mention this during my classes and presentations on prompt engineering, there invariably will be eager attendees who raise their hand and ask if it might be useful to always force generative AI to do a re-reading. This could be a custom instruction that you have the generative AI always use, see my explanation about how to set up permanent standing defaults or instructions in AI at the link here.

I vote that you not blindly opt to always make use of a re-reading action.

The re-reading will take a bit longer for the prompt to get processed by the AI. In addition, if you are paying for the use of generative AI, you will likely end up having to pay for those extra processing cycles. For background about how to reduce latency and speed up your response time while using generative AI, see my discussion at the link here.

Is the delay in response and the added dollar cost worth doing a re-read?

By and large, if you typically enter questions or prompts that are short and sweet, the use of a re-reading action is not going to get you any manifest bang for the buck. You will needlessly be consuming added time and cost. The generated response to your question is probably going to be about the same as not having done a re-read.

The other side of that coin is that many people don’t realize they can use a re-read as a prompting tactic or technique. They are either unaware of the idea, or they have learned about it but forget to use it when needed. That’s not wise either.

Overall, there is the Goldilocks principle at work. Do not use a re-read all the time (unless you have a solid reason for doing so). Don’t forget to use a re-read when the re-read is pertinent to the matter at hand. Try to make use of a re-read at the right time and right place. The porridge should neither be too hot nor too cold.

I urge that anyone seeking to be a prompt engineer or who considers themselves to be a prompt engineer ought to have the re-read instruction in their back pocket and include the re-read in their prompt engineering skillset.

There is something else you should consider too.

Realize and keep at the top of your mind that you can combine various prompting techniques.

Allow me to elaborate.

You might know that one of the most common prompting techniques entails getting generative AI to proceed on a step-by-step basis and explaining what the AI is doing during the processing of your prompt. This is known as invoking a chain-of-thought (CoT). You can do this by merely telling the AI to process the prompt in a stepwise approach. See my coverage at the link here and the link here.

Can you combine the use of chain-of-thought with the use of a re-reading instruction?

Absolutely.

Here’s why you might be tempted to do so.

When you use a re-read by itself, you won’t necessarily see what is happening. The only thing you will see is the result or generated answer to your question. It could be that the answer turns out to be incorrect or somewhat off-target. You won’t particularly know why. If you had included an instruction to do a stepwise process or CoT, the odds are that you would be able to inspect the displayed explanation and discern where the AI went awry. Happy face.

I have a question for you about this.

Should you always combine a re-read with a CoT instruction?

I hope that you immediately sense that always using them together might not be worthwhile. Once again, just as a re-read will cause a potential delay and added cost, the use of chain-of-thought also tends to have an added delay and added cost. If you are sensitive to timing and cost, you will want to judiciously use the CoT and re-read as a pair.

A final worthy point for now about the use of a re-read instruction is that you normally would only aim to do one re-read and not go overboard with more re-reads in one prompt.

The logic is this. Some people figure that if a re-read can help, why not do a double re-read or maybe a triple re-read? You tell the AI to re-read the question, and then on top of this, tell the AI to re-read the question an additional time. Perhaps you tell the AI to re-read the question three times, four times, five times, and so on.

In my experience, you are not going to get any useful added value beyond just one re-read. The chances of having the AI pick up on additional subtleties after the first re-read seem mighty low. I am not saying that there couldn’t be a payoff in very rare or unusual cases; I am only suggesting that, as a rule of thumb, you stick with one re-read in a prompt.

There is another downside to multiple re-reads at a time in a single prompt.

I’ve found that there is a notable chance that the generated answer will end up being worse than with no re-read at all or with a single re-read. A solid chance exists that you will slide the AI into making up fake stuff, often referred to as an AI hallucination (see my discussion about AI hallucinations at the link here and the link here). For various internal computational reasons, a third or fourth re-read can push the AI into a mode of making errors or crafting fictitious content.

You might think of re-reading as something that becomes problematic if overused in a single prompt.

I want to also emphasize that I am referring to the downsides of multiple uses of the re-read in a single prompt. I contrast this with using a re-read in one prompt, then a re-read in a different prompt, and so on. You can use the re-read across as many different prompts as you like. The danger zone arises when you use the re-read too many times in one prompt.

Research On The Re-Read Gets The Spotlight

Readers of my column know that I tend to focus on prompting strategies that have empirical research to back them up. I do this because there are tons of prompting approaches that people wildly proclaim. This person or that person will insist that this approach or that approach is the best thing since sliced bread. That’s nice to hear, but it is more reassuring to see that bona fide research has been undertaken. It helps to decide whether the prompting technique is worthy of actively adopting.

I’d like to walk you through a pivotal study on the re-read instruction.

In a recent research study entitled “Re-Reading Improves Reasoning in Large Language Models” by Xiaohan Xu, Chongyang Tao, Tao Shen, Can Xu, Hongbo Xu, Guodong Long, Jian-guang Lou, arXiv, February 29, 2024, the paper noted these vital points about the re-read instruction (excerpts):

  • “To enhance the reasoning capabilities of off-the-shelf Large Language Models (LLMs), we introduce a simple, yet general and effective prompting method, RE2, i.e., Re-Reading the question as input.”
  • “Crucially, RE2 facilitates a “bidirectional” encoding in unidirectional decoder-only LLMs because the first pass could provide global information for the second pass.”
  • “We then evaluate RE2 on extensive reasoning benchmarks across 14 datasets, spanning 112 experiments, to validate its effectiveness and generality.”
  • “Our findings indicate that, with the exception of a few scenarios on vanilla ChatGPT, RE2 consistently enhances the reasoning performance of LLMs through a simple re-reading strategy.”

The research examined the use of the re-read instruction by utilizing various datasets of questions and answers. They wanted to see whether the re-read would increase the chances of getting correct answers.

When such a study is undertaken, the types of questions usually have to be relatively closed-ended such that there is an apparent right-or-wrong answer. I say this because some uses of generative AI are that way, namely, you ask questions that ought to have a definitive right answer, while other times you are asking mushier or open-ended questions that do not have a purely right or wrong answer per se.

To give you a sense of what I mean by closed-ended questions that aim to have a definitive answer, consider an arithmetic question that you might have had while taking math in school. This type of question provides a setup of what you are supposed to calculate, and then you must arrive at a definitive answer that can readily be said to be correct or incorrect.

For example, consider this question: “Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?” You would need to figure out how to arrive at a definitive answer. You either arrive at the correct number of tennis balls or you do not.
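For the record, the definitive answer works out stepwise: 2 cans at 3 balls each is 6 new balls, plus the 5 he already has, for a total of 11. A trivial sketch of that arithmetic:

```python
# Stepwise arithmetic for the tennis ball question.
starting_balls = 5
cans = 2
balls_per_can = 3

new_balls = cans * balls_per_can        # 2 * 3 = 6
total_balls = starting_balls + new_balls
print(total_balls)  # 11
```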

The beauty of these types of questions when doing prompt engineering experiments is that you can easily keep tabs on how well the AI does.

Consider this. You feed in a bunch of closed-ended questions and grade how many were answered correctly versus incorrectly. You then feed in similar questions and include the new prompting instruction that you want to test out. After doing so, you tally how many of those questions were answered correctly versus incorrectly. Voila, you can compare how things went without the new instruction and how things went with it included.
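To make that tally concrete, here is a minimal sketch of such a scoring harness. Note that `ask_model` is a hypothetical stand-in for whatever generative AI call you are using; it is stubbed here so the example is self-contained and runnable:

```python
# Sketch of an accuracy comparison for testing a prompting instruction.
# `ask_model` is a hypothetical stand-in for a real generative AI call,
# stubbed so the example runs on its own.

def ask_model(prompt: str) -> str:
    # Stub: pretend the model answers correctly only when the
    # prompt carries the re-read instruction.
    return "11" if "Read the question again" in prompt else "10"

dataset = [
    ("Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
     "Each can has 3 tennis balls. How many tennis balls does he have now?",
     "11"),
]

def accuracy(make_prompt) -> float:
    correct = sum(
        ask_model(make_prompt(q)).strip() == answer
        for q, answer in dataset
    )
    return correct / len(dataset)

baseline = accuracy(lambda q: f"Q: {q}")
with_reread = accuracy(lambda q: f"Q: {q} Read the question again: {q}")
print(baseline, with_reread)  # 0.0 1.0
```

With a real model in place of the stub, the same comparison shows whether the instruction moved the needle on your own question set.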

Per the experiments performed, the research paper suggests that you can suitably add a re-read instruction to improve the chances of having the AI determine correct answers.

They also advocate writing the question in its entirety a second time, rather than merely obliquely referring to the question as posed. Let me show you this. Go ahead and look at this example (I bolded the part that says to read the question again):

  • “Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now? Read the question again: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?” (ibid).

Observe that the stated question was repeated in its entirety.

Do you need to do that?

In my experimentation, which I’ll get to shortly, I found that it is wise to repeat the question verbatim. If you don’t do so, there is a slightly increased chance that the AI will not dutifully re-read the question. It might take a shortcut. Thus, you are better off by doing a copy-paste of the stated question rather than trying to just indirectly refer to it.

Another twist is that I tried the “Read the question again” and then I repeated the question, but I purposely changed the question. Why? I wanted to see if the AI would detect that I hadn’t faithfully repeated the question. Indeed, I got caught. I urge you to faithfully show the same exact question and not play games with the AI.

The generalized template advised by the researchers is to provide your input query, followed by “Read the question again” and then followed by the repeated query, per this overall format (I have put the “Read the question again” in bold for your ease of seeing it here; you don’t need to bold that portion when you use the instruction in real life):

  • Template: “Q: {Input Query}. Read the question again: {Input Query}.”

That is the shortcut template version.
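If you find yourself using the shortcut template often, a tiny helper can fill it in for you; the function name here is my own choosing, a sketch rather than anything official:

```python
def build_re2_prompt(query: str) -> str:
    # RE2 shortcut template: pose the question, then repeat it verbatim.
    return f"Q: {query}. Read the question again: {query}."

prompt = build_re2_prompt("How many tennis balls does Roger have now?")
print(prompt)
```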

A longer version is coming up next herein, so hang in there.

Deeper Into The Nuances Of The Re-Read Prompt

You might recall that I had forewarned not to repetitively use the re-read within a single prompt.

Let’s see what the researchers have to say (excerpts):

  • “An overarching pattern emerges across all models: performance improves until the number of re-reads reaches 2 or 3, after which it begins to decline with further increases in question re-reading times.” (ibid).
  • “The potential reasons for inferior performance when reading the question multiple times are two-fold: i) overly repeating questions may act as demonstrations to encourage LLMs to repeat the question rather than generate the answer, and ii) repeating the question significantly increases the inconsistency of the LLMs between our inference and pretraining/alignment (intuitively in the learning corpora, we usually repeat a question twice to emphasize the key part, rather not more).” (ibid).

The bottom line is: don’t repeat the re-read within a single prompt.

Moving on, I mentioned that you can use a re-read in conjunction with other prompting strategies. The researchers opted to explore the re-read with the chain-of-thought (CoT) approach (excerpts):

  • “We further conduct experiments to examine the influence of RE2 within the context of CoT prompting.”
  • “Consequently, RE2 demonstrates strong generality and compatibility with most thought-eliciting prompting methods, including CoT.”
  • “It’s noteworthy that, in general, question re-reading consistently improves reasoning performance compared to the standard CoT prompting without question re-reading (P0).”

You can see that there were synergies between the two techniques.

Give that a bit of contemplation. The interesting angle is that although we are examining the re-read instruction at this moment, you can turn your head in the other direction and think about the chain-of-thought instruction as being augmented with the re-read. In other words, you might mindfully add a CoT to your re-read, but you should also extend your thinking to consider adding a re-read whenever you already intend to use a CoT.

A mind-bender, for sure.

When I played around with the re-read instruction, I came up with these three core steps for composing my related prompts:

  • (1) Provide your desired question, followed by the re-read phrase or something similar that says “Read the question again”, and then show again the same exact question you are asking.
  • (2) If you want to include some specific instructions about showcasing or displaying the AI-generated answer, indicate what you have in mind.
  • (3) If you believe that having the AI undertake a chain-of-thought (CoT) process is worthwhile in the matter at hand, provide a prompt indication invoking CoT, such as “Let’s think step by step”.

The researchers proposed this fuller template:

  • “Q: {question}. Read the question again: {question}. #Answer format instruction#. A: Let’s think step by step.”

Typical kinds of answer format instructions that might be used (in my words):

  • “Your answer should be either yes or no.”
  • “Your answer should be a date in the format of mm/dd/yyyy.”
  • “Your answer should be an essay of no more than three paragraphs in length.”
  • “Your answer should be a rounded integer.”
  • Etc.

My example prompt (in my words):

  • “How much wood could a woodchuck chuck if a woodchuck could chuck wood? Read the question again: How much wood could a woodchuck chuck if a woodchuck could chuck wood? Provide the answer as a numeric value indicating the number of cords of wood. Let’s think step by step.”
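Pulling the three steps together, the fuller template can likewise be assembled programmatically. This is a sketch under my own naming conventions, not anything the researchers supply:

```python
def build_full_re2_prompt(question: str,
                          format_instruction: str = "",
                          use_cot: bool = True) -> str:
    # Fuller template: question, verbatim repeat, optional answer-format
    # instruction, and an optional chain-of-thought trigger.
    parts = [f"Q: {question}. Read the question again: {question}."]
    if format_instruction:
        parts.append(format_instruction)
    if use_cot:
        parts.append("A: Let's think step by step.")
    return " ".join(parts)

prompt = build_full_re2_prompt(
    "How much wood could a woodchuck chuck if a woodchuck could chuck wood?",
    format_instruction="Provide the answer as a numeric value "
                       "indicating the number of cords of wood.",
)
print(prompt)
```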

I believe you are ready to start trying out the re-read prompting strategy, congratulations.

Using ChatGPT To Explore The Nature Of Re-Reading

I will next proceed to examine further the nature of re-reading as a specialized prompting technique.

This will consist of a series of dialogues with ChatGPT. ChatGPT is a logical choice in this case due to its immense popularity as a generative AI app. An estimated one hundred million weekly active users are said to be utilizing ChatGPT. That’s a lot of people and a lot of generative AI usage underway.

A few quick comments before we launch into using ChatGPT.

If you are going to try to do the same prompts that I show here, realize that the probabilistic and statistical properties will likely produce slightly different results than what I show here. That’s the nature of generative AI and how it is devised.

You also should expect that different generative AI apps will respond in different ways. Not all generative AI apps are the same. The key is that sometimes a particular prompt will work in one generative AI app and not another. Nonetheless, the odds are relatively high that you will get roughly similar responses from all the major generative AI apps such as GPT-4, Gemini, Bard, Claude, etc.

There is also a timing factor to be considered. A prompt that worked today might not work the same way tomorrow. Most of the generative AI apps are continually being updated. The updates might alter internal mechanisms that could change how the generative AI reacts to your prompting.

We are ready to get underway with ChatGPT.

I opted to start by asking ChatGPT about the overall nature of the re-reading prompting technique.

As an aside, whenever you are starting a conversation with generative AI, I recommend as a prompt engineering technique to begin by asking a question that will establish if the AI has been data-trained on the topic at hand. A generated response that is vacuous will give you a heads-up that you might be barking up the wrong tree with that generative AI app.

Here we go.

  • My entered prompt: “Does telling generative AI to re-read a prompt provide any added value regarding potentially having the AI do a better job of answering a question that is within the prompt?”
  • ChatGPT generated response:

“Research studies indicate that the benefits of having generative AI re-read a prompt can be valid, particularly in the context of bidirectional encoding enhancing the generative process.”

“Bidirectional Encoding Benefits: In natural language processing, bidirectional encoding allows models to consider the context from both directions (left-to-right and right-to-left). This can provide a more comprehensive understanding of the text, as the model can utilize global information from the entire prompt.”

“Enhanced Contextual Understanding: Re-reading a prompt can enable the model to better grasp nuances, context, and relationships within the text. This can lead to more accurate and contextually relevant responses, especially for complex or detailed prompts.”

“Augmented Processing: The process of re-reading can serve as a form of reinforcement, allowing the model to refine its understanding and potentially correct any misinterpretations from the first pass. This can be especially useful for decoder-only models, which might benefit from a secondary pass to incorporate broader contextual information.”

“Empirical Evidence: Research studies providing empirical evidence of improved performance through re-reading suggest practical benefits. These studies likely involve experiments demonstrating that iterative processing can enhance the quality of responses.”

End of ChatGPT response

The response by ChatGPT was reassuring.

You can see that ChatGPT was able to cogently respond to what a re-read prompt consists of. In addition, happily, the points made are akin to the points I made earlier. If the points made by ChatGPT had differed radically, one of the two of us would have somehow gone askew.

Next, I will ask ChatGPT about the tradeoffs or potential downsides of the re-read.

Take a look.

  • My entered prompt: “Is the use of a re-reading prompting strategy always a surefire payoff?”
  • ChatGPT generated response:

“Instructing generative AI to re-read a prompt does not inherently improve the quality of its response. Here are some points to consider.”

“Single Pass Processing: Most generative AI models, including ones like GPT-4, process the prompt in a single pass. They generate responses based on the input provided without re-reading or reconsidering the prompt in the middle of generating a response.”

“Contextual Understanding: These models are designed to understand and retain the context provided within a single prompt. If the prompt is clear and well-structured, the AI can generate a coherent and relevant response.”

“Clarification: If a response seems off-track or incomplete, providing additional context, clarifications, or rephrasing the question can be more effective than simply asking the AI to re-read the prompt.”

“Prompt Quality: The quality and clarity of the prompt are crucial. A well-crafted prompt with specific details and clear instructions is more likely to yield a better response than one that is vague or ambiguous.”

End of ChatGPT response

Good, the answer again matches what I’ve been saying during this discussion.

As a recap, you might or might not find much gain from a re-read instruction. The nature of the question and the inner mechanisms of the generative AI are crucial determiners.

Let’s get ChatGPT to give us an example of when doing a re-read isn’t going to be a boost.

Here we go.

  • My entered prompt: “Show me an example of a question that doing a re-read on will not add any benefit to producing a better answer.”
  • ChatGPT generated response:

“Let’s consider a simple factual question that does not benefit from a re-read.”

“Question: “What is the capital of France?”

“First, I’ll provide a response as if generated in a single pass.”

“Response: “The capital of France is Paris.”

“Now, let’s undertake a re-reading process to see if it improves the response.”

“Enhanced Response: “Paris is the capital of France.”

“For simple factual questions like “What is the capital of France?”, re-reading the prompt does not significantly enhance the response. The initial response already fully addresses the question, and additional passes do not provide substantial added value. This demonstrates that re-reading a prompt is more beneficial for complex or multi-faceted questions, rather than straightforward factual queries.”

End of ChatGPT response

Asking about the capital of France is a very straightforward question.

The question itself is short in length. There isn’t any ambiguity involved in the question. A first read is probably all you need to do. A second read would seem like unnecessary overkill.

I’ll shift gears and have ChatGPT come up with an open-ended question that then tells the AI to do a re-read.

Let’s see what happens.

  • My entered prompt: “Show me an example of an open-ended question that would be better answered via using a re-read prompt. Go ahead and then process the question, explaining how the re-reading activity has helped produce a better answer.”
  • ChatGPT generated response:

“Let’s take a complex question that involves multiple layers of information and requires a comprehensive response.”

“Here’s an example: “How does climate change impact global food security, and what measures can be taken to mitigate these effects?”

“First, I’ll provide a response as if generated in a single pass.”

“Response: “Climate change affects global food security by altering weather patterns, leading to extreme weather events, and changing growing seasons. These changes can reduce crop yields, disrupt supply chains, and increase food prices. To mitigate these effects, measures such as adopting sustainable farming practices, improving water management, and developing climate-resistant crops can be implemented.”

“Now, let’s undertake a re-reading process to improve the response.”

“Enhanced Response: “Climate change significantly impacts global food security through several mechanisms. Firstly, it alters weather patterns, resulting in more frequent and severe extreme weather events such as droughts, floods, and hurricanes. These events can devastate crops, reduce yields, and disrupt supply chains. For instance, prolonged droughts in regions like Sub-Saharan Africa have led to significant decreases in maize and wheat production, which are staple foods for millions. Additionally, changing growing seasons can lead to mismatches in planting and harvesting times, affecting crop productivity. For example, warmer temperatures can cause some crops to mature too quickly, leading to reduced quality and yields.”

“By re-reading the prompt and refining the response, the answer becomes more detailed and nuanced. It covers specific examples, provides a broader context, and offers a more comprehensive discussion of mitigation measures. This approach ensures that the response addresses the complexities of the issue more effectively.”

End of ChatGPT response

I am a bit on the fence about that example.

The answer based on the re-read doesn’t seem to be that much better than the initial answer based on the first read.

One aspect to keep in mind is that I purposely made use of an open-ended question. The difficulty with assessing open-ended questions is that we don’t have a definitive answer like we do when asking a closed-ended question. This makes judging the answers more speculative.

Finally, I’d like to have ChatGPT provide a roundup of purported advice on the use of the re-read instruction.

Hold your breath and see what ChatGPT said.

  • My entered prompt: “If you were going to give advice to a prompt engineer who has just found out about the potential use of a re-reading prompting strategy, what advice would you give them?”
  • ChatGPT generated response:

“Here’s a structured approach to guide a prompt engineer:”

“Evaluate the Complexity of Queries: Determine if the typical queries handled by the model are complex or multifaceted. Re-reading is most beneficial for detailed and layered questions.”

“Identify Critical Use Cases: Identify scenarios where accuracy and comprehensiveness are crucial, such as in customer support, medical information, or legal advice.”

“Analyze Resource Availability: Assess the available computational resources and budget. Re-reading requires more processing power and may increase operational costs.”

“Evaluate Latency Tolerance: Determine the acceptable response time for your application. Re-reading will increase latency, which might be a trade-off in time-sensitive applications.”

“Iterative Approach: Start with a pilot implementation to understand the performance and benefits. Gradually scale up based on initial results and feedback.”

“Dynamic Decision Making: Implement a dynamic strategy where re-reading is only triggered for complex or ambiguous prompts. Use simpler, faster processing for straightforward queries.”

“Performance Monitoring: Continuously monitor the performance and cost implications. Use metrics such as response time, accuracy, user satisfaction, and resource usage.”

“Using a re-reading prompting strategy can significantly enhance the quality of responses for complex queries. However, it comes with increased computational costs and latency. A balanced approach that dynamically applies re-reading based on query complexity, coupled with continuous monitoring and feedback, will help in optimizing the benefits while managing the costs effectively.”

End of ChatGPT response

If you are still holding your breath, please let go and take a new breath.

The response by ChatGPT was pretty on-target. We saw that you need to consider the circumstances underlying the use of the re-read instruction, and you need to weigh the potential added benefit against the costs or possible downsides.

Conclusion

There is a notable quote about the value of re-reading by C. S. Lewis, the famous British author and literary scholar:

  • “We do not enjoy a story fully at the first reading. Not till the curiosity, the sheer narrative lust, has been given its sop and laid asleep, are we at leisure to savor the real beauties.”

I would ascribe that sense of humanistic value to his quote, and I certainly find that when I re-read things, the re-reading often has a momentous, beauteous, reflective quality. Of course, there are times when I just wasn’t paying close attention at the get-go and the re-read gets my mind back into proper focus. That’s a pragmatic way to value the re-read.

When you are using generative AI, make sure to keep in mind that if your question is especially complicated or lengthy, you should be considering the use of the re-read instruction. It is an astute prompting tactic.

If you aren’t yet sure about the re-read, and if I can be daringly provocative, perhaps re-read the above discussion, and then log into your favorite generative AI app and try out the approach. I am sure you’ll be glad you did a second pass on the weighty matter.
