Implementing Advanced Prompt Engineering Techniques

The document outlines a learning journey focused on implementing advanced prompt engineering techniques using SAP's Generative AI Hub. It covers the creation and refinement of prompts, evaluation of AI responses, and the combination of few-shot and meta-prompting methods to enhance message classification accuracy. The evaluation results indicate improvements in response quality after applying these advanced techniques, while also considering the cost and scalability of different models.

Objective

After completing this lesson, you will be able to design a systematic approach to develop and evaluate prompt engineering from a simple baseline.
Few-shot Prompting
Let's implement prompting techniques and then evaluate the results to see the improvement in the prompt results.

We use the following code:

Quiz Py hon

Selec ing Large Language Models in 1


Genera ive AI Hub 2 prompt_10 = """Your task is to extract and categorize messages. Here are some example:
3 ---
4 {{?few_shot_examples}}
5 ---
6 Use the examples when extract and categorize the following message:
7 ---
8 {{?input}}
9 ---
10 Extract and return a json with the follwoing keys and values:
11 - "urgency" as one of {{?urgency}}
12 - "sentiment" as one of {{?sentiment}}
13 - "categories" list of the best matching support category tags from: {{?categories}}
14 Your complete message should be a valid json string that can be read directly and only cont
15 """
16
17 import random
18 [Link](42)
19
20 k = 3
21 examples = [Link](dev_set, k)
22
23 example_template = """<example>
24 {example_input}
25

The code aims to create a prompt template to extract and categorize messages according to their urgency, sentiment, and support category tags. By using randomly selected examples from a development set, it generates a formatted few-shot learning prompt. The prompt is sent to a language model to process and categorize a given input message, and the overall performance of the model is then evaluated and displayed in a table format.

Here's an expanded explanation of a few parts of the code:

1. Setting the Random Seed: It sets a random seed using "random.seed(42)" to ensure that the random sampling of the examples is reproducible. This helps in maintaining consistency in experiments and evaluations.
2. Sampling Examples: The variable "k" is set to 3, indicating the number of examples to sample from the "dev_set" dataset. The "random.sample(dev_set, k)" function selects three random examples from the development set.
3. Formatting Examples: The selected examples are formatted into a template "example_template". Each example includes the input message and the expected output in JSON format. The formatted strings are then joined using "\n---\n" to create a cohesive set of examples.
4. Partial Function Application: The "partial" function is used to bind the generated prompt and examples to the "send_request" function, creating a function "f_10" that can be called with just the input message. This streamlines the process of sending requests to the model with the necessary context.
5. Sending the Request and Evaluating: The script sends the request using "f_10(input=mail["message"])" with the input message from "mail["message"]". The result is stored and evaluated against a small test dataset "test_set_small". The evaluation results are stored in "overall_result["few_shot--llama3-70b"]".
6. Output Display: Finally, the "pretty_print_table(overall_result)" function is used to display the evaluation results in a formatted table, making it easier to interpret the results.
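Since the listing above is truncated, here is a minimal sketch that pulls the six steps above together. It assumes the lesson's helpers (send_request, evalulation_full_dataset, pretty_print_table) and data (prompt_10, dev_set, test_set_small, option_lists, mail, overall_result) are already defined; the example field names and the tail of example_template are assumptions, not taken from the lesson.

Python

import json
import random
from functools import partial

random.seed(42)                       # step 1: reproducible example sampling
k = 3
examples = random.sample(dev_set, k)  # step 2: three random dev-set examples

# Tail of the template is assumed; the lesson's listing cuts off here.
example_template = """<example>
{example_input}

## Output
{example_output}
</example>"""

# Step 3: format each sampled example and join them with separators.
few_shot_examples = "\n---\n".join(
    example_template.format(
        example_input=ex["message"],                    # field name assumed
        example_output=json.dumps(ex["ground_truth"]),  # field name assumed
    )
    for ex in examples
)

# Step 4: bind the prompt and examples so only the input message remains.
f_10 = partial(send_request, prompt=prompt_10,
               few_shot_examples=few_shot_examples, **option_lists)

# Step 5: send one request, then evaluate on the small test set.
response = f_10(input=mail["message"])
overall_result["few_shot--llama3-70b"] = evalulation_full_dataset(test_set_small, f_10)

# Step 6: display the evaluation results as a table.
pretty_print_table(overall_result)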

The lesson shows several example output prompts and their responses here as screenshots, followed by the evaluation output after implementing few-shot prompting.

You can see improvement in sentiment and urgency assignment.

We established a baseline earlier, and now we can evaluate and compare the results of the refined prompts with the baseline using the test data.

Metaprompting
Here we'll implement metaprompting to create detailed guides for prompts for various tags like urgency, sentiment, and so on.

We use the following code:

Python

example_template_metaprompt = """<example>
{example_input}

## Output
{key}={example_output}
</example>"""

prompt_get_guide = """Here are some examples:
---
{{?examples}}
---
Use the examples above to come up with a guide on how to distinguish between {{?options}} {
Use the following format:
```
### **<category 1>**
- <instruction 1>
- <instruction 2>
- <instruction 3>
### **<category 2>**
- <instruction 1>
- <instruction 2>
- <instruction 3>
...
```

This code generates step-by-step guides for different categories, such as "categories," "urgency," and "sentiment," from labeled examples in a dataset.

It creates tailored guides for distinguishing between categories, urgency, and sentiment in text data. It formats examples using a specific template, then sends these examples to a model for generating step-by-step instructions. The guides help users distinguish between these categories based on patterns in the provided examples.

Here's a more detailed explanation:

1. Template Definitions:

- "example_template_metaprompt": Defines a template to format examples, specifying how to structure input and output within an example.
- "prompt_get_guide": Outlines a prompt format to request the generation of a guide based on formatted examples. It also specifies the format and requirements for the guide, including making it a step-by-step instruction, accounting for possible incorrect labels, and avoiding explicit replication of the examples.

2. Guide Preparation:

- The script iterates over three keys: "categories", "urgency", and "sentiment".
- For each key, it retrieves the relevant options from "option_lists".

3. Example Selection and Formatting: It formats examples from "dev_set" using the predefined template for each key, embedding the input message and the corresponding ground truth.

4. Guide Generation:

- It sends a formatted prompt along with the examples to a model (gpt-4o), requesting the generation of a guide for distinguishing between the specified options for each key.
- It stores the generated guides in a dictionary (guides), with each guide associated with its respective key (for example, "guide_categories", "guide_urgency", "guide_sentiment").

This process ensures that comprehensive and accurate instruction guides are generated for the different classification tasks, facilitating the correct categorization of text data.

The last line of the code prints the guide for urgency.
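The listing above is cut off before the loop itself, so here is a minimal sketch of the guide-preparation and generation steps just described. The model-selection keyword of send_request and the example field names are assumptions, not taken from the lesson.

Python

# Sketch of the loop described above; keyword and field names are assumed.
guides = {}
for key in ["categories", "urgency", "sentiment"]:
    options = option_lists[key]  # step 2: the allowed options for this key

    # Step 3: format the labeled dev_set examples for this key.
    examples = "\n---\n".join(
        example_template_metaprompt.format(
            example_input=ex["message"],             # field name assumed
            key=key,
            example_output=ex["ground_truth"][key],  # field name assumed
        )
        for ex in dev_set
    )

    # Step 4: ask the model (gpt-4o in the lesson) for a guide and store it.
    guides[f"guide_{key}"] = send_request(
        prompt=prompt_get_guide,
        examples=examples,
        options=options,
        model_name="gpt-4o",  # keyword name assumed
    )

# Print the urgency guide, as the last line of the lesson's code does.
print(guides["guide_urgency"])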

You can see the following output:

You can see the guide describing three rules for each urgency category that can be used in a prompt.

We use the following code to utilize these guides in a prompt.

Python

prompt_12 = """Your task is to classify messages.
This is an explanation of `urgency` labels:
---
{{?guide_urgency}}
---
This is an explanation of `sentiment` labels:
---
{{?guide_sentiment}}
---
This is an explanation of `support` categories:
---
{{?guide_categories}}
---
Given the following message:
---
{{?input}}
---
Extract and return a json with the following keys and values:
- "urgency" as one of {{?urgency}}
- "sentiment" as one of {{?sentiment}}
- "categories" list of the best matching support category tags from: {{?categories}}
Your complete message should be a valid json string that can be read directly and only contain the JSON.
"""
f_12 = partial(send_request, prompt=prompt_12, **option_lists, **guides)
response = f_12(input=mail["message"])

The code prepares a prompt for classifying messages based on urgency, sentiment, and support categories by utilizing the predefined guides generated through the metaprompt code. It then uses a partial function to send this prompt as a request with specific options and guides. Finally, it processes an email message to extract and return these classifications in JSON format.

See the following video for the output.

Let's evaluate this prompt and its response using the following code:

Python

overall_result["metaprompting--llama3-70b"] = evalulation_full_dataset(test_set_small, f_12)
pretty_print_table(overall_result)

You can get the following output:

Now, we see that accuracy for urgency has improved; however, accuracy for the other categories is similar, or even worse in the case of sentiment.

Combining Metaprompting and Few-shot Prompting


We can combine metaprompting and few-shot prompting using the following code:

Python

prompt_13 = """Your task is to classify messages.
Here are some examples:
---
{{?few_shot_examples}}
---
This is an explanation of `urgency` labels:
---
{{?guide_urgency}}
---
This is an explanation of `sentiment` labels:
---
{{?guide_sentiment}}
---
This is an explanation of `support` categories:
---
{{?guide_categories}}
---
Given the following message:
---
{{?input}}
---
Extract and return a json with the following keys and values:
- "urgency" as one of {{?urgency}}
- "sentiment" as one of {{?sentiment}}

This Python code creates a template prompt for a message classification task, specifying how to extract and return information about urgency, sentiment, and support categories in a JSON format. The code uses this prompt to configure a function, "f_13", to analyze a given input message and generate a structured JSON response. This ensures consistent and accurate message classification.

You can see that it combines a few examples with the guides generated during metaprompting.
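The listing is cut off before the final binding; following the f_10 and f_12 pattern shown earlier, the remaining lines are presumably close to this sketch (the tail of prompt_13 and the exact keywords are assumptions):

Python

# Sketch of the truncated tail, following the earlier f_10/f_12 pattern.
f_13 = partial(
    send_request,
    prompt=prompt_13,
    few_shot_examples=few_shot_examples,  # examples from the few-shot step
    **option_lists,                       # urgency, sentiment, and category options
    **guides,                             # guide_urgency, guide_sentiment, guide_categories
)
response = f_13(input=mail["message"])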

See the following video for the output.

Let's evaluate this prompt and its response using the following code:

Python

overall_result["metaprompting_and_few_shot--llama3-70b"] = evalulation_full_dataset(test_set_small, f_13)
pretty_print_table(overall_result)

You can get the following output:

Now, we see that accuracy for almost all categories except urgency is improved. This prompt has good accuracy; however, it's a more expensive prompt that needs more resources.

Note
You may get a slightly different response from the one shown here, and in all the remaining responses of models shown in this learning journey.

When you execute the same prompt on your machine, an LLM produces varying outputs due to its probabilistic nature, temperature setting, and nondeterministic architecture, leading to different responses even with slight setting changes or internal state shifts.
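If the lesson's send_request helper forwards sampling parameters to the model, lowering the temperature is one common way to reduce (though not eliminate) this variation. The parameter name below is an assumption; it is not shown in the lesson's code.

Python

# Hypothetical: a lower temperature makes sampling more deterministic.
f_12_low_temp = partial(
    send_request,
    prompt=prompt_12,
    temperature=0.0,  # parameter name assumed, not part of the lesson's code
    **option_lists,
    **guides,
)
response = f_12_low_temp(input=mail["message"])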

Evaluation Summary
We need to consider the overall accuracy and quality of a model along with its cost and scale.

At times, smaller models and simpler techniques may give better results.

In the preceding output, we can see that few-shot prompting gives optimal performance with a less expensive prompt.

Let's recap what we have done to solve the business problem so far:

1. We created a basic prompt in SAP AI Launchpad using an open-source model.
2. We recreated the prompt using generative-ai-hub-sdk to scale the solution.
3. We created a baseline evaluation method for the simple prompt.
4. Finally, we used techniques like few-shot prompting and metaprompting to further enhance the prompts.
5. The results show improvement in the quality of prompt responses after implementing the advanced techniques.

We'll study the costs associated with these techniques using other models in the next unit.

