Matthias Ruiz

The Art of Communicating with LLMs

Have you ever tried explaining something seemingly simple, only to be met with confused looks? This common human experience perfectly mirrors one of the biggest challenges in prompt engineering: what's clear to us isn't always clear to Large Language Models (LLMs). While we're often advised to be "concise and clear" when writing prompts, these vague instructions leave us with more questions than answers. What exactly constitutes clarity? How much detail is enough? Where do we draw the line between too much and too little information?

LLMs come with built-in knowledge about common concepts. For instance, when asking for a YouTube script, you don't need to explain what YouTube is or its typical content style. However, when dealing with specialized tasks like data redaction, you must be more explicit about context-specific requirements.

When working on prompts for LLMs, I usually use the CREATE framework. The process begins by assigning a CHARACTER or role to the LLM. For instance, you might say, "You are a data confidentiality expert." This step sets the stage, providing the LLM with the context to deliver specialized responses. Next comes the REQUEST, the core task you want the LLM to execute. Following this, you enrich the prompt with EXAMPLES, ensuring the LLM clearly understands what you're aiming for. ADDITIONAL instructions further guide the LLM, refining its responses to better suit your needs. You conclude by specifying the TYPE OF OUTPUT you want, such as a summary, an analysis, or a creative piece. If you have any EXTRAS - additional context or specific requirements - you add those at the end.

Based on this framework, let's build a prompt, beginning a little differently.
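Before doing that, here is a minimal sketch of what a finished CREATE-style prompt could look like for the data redaction scenario used throughout this post; every line is illustrative rather than a fixed template:

You are a data confidentiality expert. (CHARACTER)
Transform the internal message below into a communication that can be shared externally. (REQUEST)
<examples>...a few input/output pairs go here...</examples> (EXAMPLES)
Never mention salaries, SSNs, or internal project names. (ADDITIONAL)
Return a single short paragraph suitable for a public announcement. (TYPE OF OUTPUT)
Our company tone is friendly but formal. (EXTRAS)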

Examples

To get started, examples are key: LLMs are few-shot learners, so a handful of well-chosen examples goes a long way. Gather some data - preferably real-world data - and manually perform the task a few times. Once you have about 5 to 10 examples, you can proceed to the prompt creation stage. Launch your LLM chat and instruct the LLM to generate a prompt based on your examples. The LLM will create a prompt that maps the inputs from your examples to the desired outputs. Make sure to review the generated prompt and check that it makes sense; it's important to approach this with an open mind.

You are a smart prompt engineer working on building a new prompt. You are given a list of examples with input and output.

Write a prompt that would allow an LLM to take the input and transform it 
into the associated output.

Steps:
1. Read all the examples
2. Find the common pattern between all the examples
3. Write a prompt for the examples
4. Add steps for doing the task

<examples>
<example>
Sarah Chen (SSN XXXX-XX-4721) has been hired as a data engineer for the birch team. She will earn a salary of $100,000 per year. She will start to work on the good customer project, in which we try to identify customers that are worth more than others to get even more money out of them.
<output>
We are happy to welcome Sarah Chen as a new data engineer for our team. She will work on improving services for our customers using data.
</output>
</example>

<example>
Frank, our previous CEO, has been fired after serious sexual misconduct allegations were raised against him. We were able to contain the spread by getting the accusers to sign NDAs and paying them a bonus.
<output>
Unfortunately, we had to let our CEO go.
</output>
</example>

</examples>

Example response:

I notice these examples show a pattern of transforming sensitive or potentially
controversial internal communications into appropriate external communications. 
The outputs maintain necessary transparency while being professional and 
avoiding sensitive details or negative implications. Here's a prompt to 
achieve this transformation: 

"You are a professional communications expert. 
Your task is to transform internal messages into appropriate external communications. 
The output should: 
1. Maintain essential factual information that can be 
shared publicly 
2. Remove all sensitive personal data (like SSN, salary details) 
3. Eliminate controversial or potentially damaging details 
4. Use positive or neutral professional language 
5. Keep the message concise and clear 
6. Focus on what can be shared rather than what must be hidden 

Steps to transform the message: 
1. Identify the core message that can be shared publicly 
2. Remove all confidential information (personal data, internal projects, sensitive details) 
3. Rephrase any negative or controversial elements in neutral professional terms 
4. Structure the message in a clear, professional tone 
5. Review to ensure the message is appropriate for public consumption while 
remaining truthful 

Transform the following internal message into an appropriate external 
communication while following these guidelines:" 

Once you have the initial prompt, you can clearly see whether your examples are clear and meaningful, and whether the LLM understands the task you're addressing. If the generated prompt deviates significantly from your goal, reconsider the quality and quantity of your examples. At times, the LLM might not fully comprehend the task from your initial prompt alone. In such instances, add hints to the example prompt, as in the sketch below; keep them vague, allowing for broad interpretation. When you have a prompt that reasonably describes your task, you're ready to move on to the next stage.
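As an example, such a hint could be a single extra line appended to the meta-prompt above; keeping it vague leaves room for interpretation (the wording is purely illustrative):

Hint: the outputs are meant for a different audience than the inputs.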

Clarity


Every prompt usually contains terms that are specific to the task and not universally defined. Examples of such terms from our prompt are "confidential information", "sensitive personal data", and "controversial or potentially damaging details". The LLM has a basic understanding of these terms, but it is usually worth elaborating on each of them to make sure the LLM has the same idea as you. Also add short examples that fit your use case. To refine your prompt, start by discussing the terms with the LLM: ask it to explain a term, then tailor that explanation to your specific context. Add the refined explanation to your prompt, even if the LLM's initial explanation was already excellent.

Explain the following concept and add examples:

sensitive personal data

Response

Sensitive personal data (also known as special category data) is a specific 
type of personal information that requires extra protection due to its 
particularly private or potentially discriminatory nature. This type of data 
is subject to stricter processing conditions under data protection laws like 
the GDPR.

Definition: Sensitive personal data includes information about an individual's:
1. Racial or ethnic origin 
2. Political opinions 
3. Religious or philosophical beliefs 
4. Trade union membership 
5. Genetic data 
6. Biometric data 
7. Health data 
8. Sexual orientation or sex life 
9. Criminal records 

Examples: 
1. Medical Records:
   - Patient's diagnosis history
   - Prescription medications
   - Mental health treatment records
   - COVID-19 test results
2. Biometric Data:
   - Fingerprint scans for office access
   - Facial recognition data
   - Retina scans
   - DNA profiles
...
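Once the explanation is tailored to your context, fold it back into the prompt, for instance as a short definition block. A sketch for the redaction prompt above (the wording is illustrative only):

Definitions:
- "Sensitive personal data" means any information that could identify or harm an individual employee, such as SSNs, salary details, health information, or allegations made against them.
- "Confidential information" means anything about internal projects, strategy, or finances that has not been announced publicly.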

Examples 2

Until now, we've used real data for our examples. Typically, this isn't advisable, so let's switch to creating relevant artificial examples. Take all your test data and run it against your current prompt, as in the sketch below. Review the responses and save every failure case as a new example. Then ask the LLM to summarize these examples, resulting in a smaller set of examples that no longer contain real data.
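Here is a minimal Python sketch of that evaluation loop. The call_llm helper, the looks_wrong check, and the file names are all hypothetical placeholders; wire them up to whatever client and review process you actually use:

# Minimal sketch of the evaluation loop. call_llm() and looks_wrong() are
# hypothetical placeholders, not part of any specific library.
import json

def call_llm(prompt: str) -> str:
    # Replace with a call to whatever LLM client or SDK you use.
    return ""

def looks_wrong(response: str) -> bool:
    # Crude automated stand-in for the manual review step: flag responses
    # that still contain obvious sensitive markers.
    return "SSN" in response or "$" in response

with open("redaction_prompt.txt") as f:    # current version of the prompt
    prompt = f.read()

with open("test_messages.json") as f:      # internal messages used as test data
    test_messages = json.load(f)

failures = []
for message in test_messages:
    response = call_llm(f"{prompt}\n\n{message}")
    if looks_wrong(response):
        failures.append({"input": message, "output": response})

# The saved failures are the examples handed to the summarization prompt below.
with open("failure_examples.json", "w") as f:
    json.dump(failures, f, indent=2)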

You are given a set of examples of how to transform sensitive or potentially 
controversial internal communications into appropriate external communications.

Rate each given example on how relevant it is to the task.

Then summarize the examples into three new examples that together cover 
everything the given examples cover. Return the three created examples in the 
same format as the given examples.

Add the resulting examples to your prompt, and there you have it - a fully enhanced prompt, complete with all its bells and whistles, that should perform quite well.

Additional Tips:

  • Clear the chat history after each interaction: This helps ensure your prompt works in a clean environment similar to real-world usage.
  • Use version control for your prompts: This allows you to track changes and refine your prompts over time, making it easier to see what adjustments improve performance.
  • Focus on output formatting for unique outputs: If your output needs to follow a specific format, go through the same steps with an emphasis on ensuring the LLM understands how to deliver that format, as in the sketch below.
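For example, a format specification appended to the end of the redaction prompt could look something like this (the JSON field names are made up for illustration):

Return the result as JSON with exactly these fields:
{
  "public_message": "the rewritten external communication",
  "removed_items": ["a short note on each piece of information that was removed"]
}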

Check out my new AI video editor Diffusion Studio.
