Prompt Engineering will be the future
Fundamentals of Prompt Engineering
Prompt engineering developers can earn up to $335,000
In the next 5-10 years, anyone wanting to get into tech will have prompt engineering in their skillset
I'll explain the principles of Prompt Engineering for Large Language Models in 2 minutes
There are two main principles that form the fundamentals:
Principle 1 - Write clear and specific prompts
This will reduce the chance of the LLM generating irrelevant responses.
Clear doesn't necessarily mean short. In many cases, a well-written long prompt works better because it provides more context and detail, which leads to a better response.
Each principle can be applied using a number of tactics.
Tactic 1: Use delimiters.
Use delimiters (such as triple backticks or quotes) to clearly separate your instructions from the user-supplied input.
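Here's a rough sketch of this tactic in Python. The user text is made up, and the example only builds and prints the prompt string; you'd pass it to whichever LLM API you use.

# Minimal sketch of the delimiter tactic: the user-supplied text is wrapped
# in triple backticks so the model can tell instructions apart from input.
user_text = "Text pasted by the user. It could even say: ignore your previous instructions."
prompt = f"""
Summarize the text delimited by triple backticks in one sentence.
```{user_text}```
"""
print(prompt)  # send this string to your LLM of choice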
Tactic 2: Ask for structured output (HTML / JSON)
This is useful because the output can be parsed directly into a dictionary or list and fed into downstream code (or back into the model).
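A quick sketch of what that looks like in Python. The reply string here is a placeholder standing in for whatever your LLM API returns, and the json.loads call assumes the model actually answered with valid JSON.

import json

# Sketch: the prompt asks for JSON only, so the reply can be parsed straight
# into Python data structures.
prompt = """
Generate a list of three made-up book titles with their authors and genres.
Return only a JSON array of objects with the keys "title", "author" and "genre".
"""
# reply = ...  call your LLM API here; the line below is a placeholder reply
reply = '[{"title": "Example", "author": "A. Writer", "genre": "Sci-Fi"}]'
books = json.loads(reply)   # now an ordinary Python list of dicts
print(books[0]["title"])    # -> Example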
Tactic 3: Check whether the conditions are satisfied.
Ask the model to check the assumptions required to do the task, and to say so if they don't hold.
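A rough sketch of this tactic, again just building a prompt string in Python with a made-up text; the API call itself is up to you.

text = ("Making tea is easy. First, boil some water. "
        "Then pour it over a tea bag, wait a few minutes, and enjoy.")
# Sketch: the prompt states an explicit fallback, so the model checks the
# assumption (does the text contain instructions?) instead of inventing steps.
prompt = f"""
You will be given text delimited by triple dashes.
If it contains a sequence of instructions, rewrite them as numbered steps.
If it does not, reply only with: No steps provided.
---
{text}
---
"""
print(prompt)  # try it with a text that has no instructions to see the fallback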
Tactic 4: Few-shot prompting.
Give the model a few examples of successful completions, then ask it to perform the task.
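A minimal sketch of a few-shot prompt in Python; the example dialogue is invented for illustration.

# Sketch of few-shot prompting: one worked example of the desired style comes
# before the new request, so the model imitates the pattern.
prompt = """
Your task is to answer in a consistent style.

<child>: Teach me about patience.
<grandparent>: The river that carves the deepest valley flows from a modest spring.

<child>: Teach me about resilience.
"""
print(prompt)  # the model should reply as <grandparent>, in the same style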
Principle 2 - Give the model time to think.
Complex questions require more reasoning effort, so structure the prompt to give the model adequate time to work through the task instead of forcing an immediate answer.
Tactic 1: Specify steps to complete a task
Break down a complex task into multiple smaller tasks or steps.
Make sure the LLM doesn't skip a step or silently assume an intermediate result.
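Here's a rough sketch of a step-by-step prompt in Python, using a made-up text; send the resulting string to whichever LLM you use.

text = "In a charming village, siblings Jack and Jill set out to fetch water from a hilltop well."
# Sketch: numbering the sub-tasks keeps the model from skipping a step or
# assuming an intermediate result it never produced.
prompt = f"""
Perform the following actions on the text delimited by triple backticks:
1. Summarize the text in one sentence.
2. Translate the summary into French.
3. List each name that appears in the French summary.
4. Output a JSON object with the keys "french_summary" and "names".
```{text}```
"""
print(prompt)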
Tactic 2: Ask the model to work out its own solution before rushing to a conclusion
To avoid confirmation bias, don't simply ask the model whether a given answer is correct; instead, ask it to work out its own solution first and only then compare it with the answer in question.
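A sketch of what that prompt can look like in Python; the word problem and the (deliberately wrong) student answer are invented for illustration.

# Sketch: the model is told to solve the problem itself before it is allowed
# to judge the student's answer, instead of being asked "is this correct?".
problem = "A gym charges a $50 joining fee plus $25 per month. Give the total cost for m months."
student_solution = "total = 50m + 25"   # deliberately wrong, for illustration
prompt = f"""
First work out your own solution to the problem below.
Then compare your solution to the student's solution, and only after that
say whether the student's solution is correct.

Problem: {problem}
Student's solution: {student_solution}
"""
print(prompt)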
Understanding the model's limitations:
The model can sometimes hallucinate, which is a limitation of the model: it makes statements that sound plausible but are not true, for example describing a product or paper that doesn't exist.
To reduce hallucinations, ask the model to first find the relevant information (for example, by quoting from a provided text) and then answer the question based on that information.
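A rough sketch of this pattern in Python, with a made-up one-line document; as before, the prompt is just built and printed, and the LLM call is yours to plug in.

# Sketch: the model must quote the source first and answer only from those
# quotes, which makes unsupported (hallucinated) claims easy to spot.
document = "The warranty covers manufacturing defects for 12 months from the date of purchase."
prompt = f"""
Answer the question using only the document delimited by triple backticks.
First, extract the quotes relevant to the question.
Then answer based only on those quotes.
If the answer is not in the document, reply: Not found in the document.

Question: How long is the warranty period?
```{document}```
"""
print(prompt)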
To learn more about prompting LLMs, this course by Andrew Ng is a great start: https://learn.deeplearning.ai/chatgpt-prompt-eng/lesson/1/introduction