Pratiksha Patil / Pratiksha-Patil · Commits

Commit 5938c798, authored Jul 23, 2025 by Pratiksha Patil
6th Assignment of Prompt Engineering
Parent: 0976d334
Showing 9 changed files with 157 additions and 0 deletions (+157 -0)
- prompt-engineering-assignment/.env (+1 -0)
- prompt-engineering-assignment/README.md (+36 -0)
- prompt-engineering-assignment/guardrails/prompt_rules.yaml (+6 -0)
- prompt-engineering-assignment/prompts/faq_prompting.py (+29 -0)
- prompt-engineering-assignment/prompts/langchain_faq.py (+20 -0)
- prompt-engineering-assignment/prompts/langchain_sentiment.py (+16 -0)
- prompt-engineering-assignment/prompts/sentiment_zero_few_shot.py (+34 -0)
- prompt-engineering-assignment/requirements.txt (+4 -0)
- prompt-engineering-assignment/test_cases/run_all_tests.py (+11 -0)
prompt-engineering-assignment/.env (new file, mode 100644)

```
OPENAI_API_KEY=your_openai_key_here
```
prompt-engineering-assignment/README.md (new file, mode 100644)

# Prompt Engineering Assignment

## 📌 Objective
Create and test reusable prompt templates for:
- Sentiment Analysis
- FAQ Answering

## 🛠️ Tools Used
- OpenAI GPT-4 API
- LangChain
- Python
- NeMo Guardrails

## 🧪 Run Examples

### 1. Setup
```bash
pip install -r requirements.txt
```

### 2. Run Sentiment Analysis Prompts
```bash
python prompts/sentiment_zero_few_shot.py
python prompts/langchain_sentiment.py
```

### 3. Run FAQ Answering
```bash
python prompts/faq_prompting.py
python prompts/langchain_faq.py
```

### 4. Run All Tests
```bash
python test_cases/run_all_tests.py
```
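Note: every step above reads `OPENAI_API_KEY` from `.env` via python-dotenv, and the committed `.env` only contains the placeholder `your_openai_key_here`. A minimal sanity check, assuming it is run from the `prompt-engineering-assignment/` directory (the check itself is illustrative, not part of the commit):

```python
# Illustrative check: confirm a real key from .env is visible before running the scripts
import os
from dotenv import load_dotenv

load_dotenv()
key = os.getenv("OPENAI_API_KEY")
assert key and key != "your_openai_key_here", \
    "Replace the placeholder in .env with a real OpenAI API key"
print("OPENAI_API_KEY loaded")
```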
prompt-engineering-assignment/guardrails/prompt_rules.yaml (new file, mode 100644)

```yaml
input:
  flows:
    - name: deny-sensitive
      type: regex
      pattern: ".*(ssn|credit card|password).*"
      action: block
```
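None of the scripts in this commit actually load `prompt_rules.yaml`, so the rule above is configuration only. A minimal sketch of how a regex rule like `deny-sensitive` could be checked in plain Python before a prompt reaches the model (the `load_rules`/`is_blocked` helpers and the PyYAML dependency are assumptions, not part of the repository):

```python
import re
import yaml  # PyYAML, an assumed extra dependency

def load_rules(path="guardrails/prompt_rules.yaml"):
    """Read the regex flows from the guardrails config."""
    with open(path) as f:
        config = yaml.safe_load(f)
    return [flow for flow in config["input"]["flows"] if flow.get("type") == "regex"]

def is_blocked(prompt, rules):
    """Return True if any rule with action 'block' matches the prompt."""
    return any(
        rule.get("action") == "block" and re.search(rule["pattern"], prompt, re.IGNORECASE)
        for rule in rules
    )

if __name__ == "__main__":
    rules = load_rules()
    print(is_blocked("What is my credit card limit?", rules))   # True
    print(is_blocked("What is the return policy?", rules))      # False
```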
prompt-engineering-assignment/prompts/faq_prompting.py (new file, mode 100644)

```python
import openai
import os
from dotenv import load_dotenv

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

def ask(prompt):
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    print("Prompt:\n", prompt)
    print("Response:\n", response['choices'][0]['message']['content'])

# Zero-shot prompt
ask("What is the return policy?")

# Few-shot prompt with example Q/A pairs
few_shot = """
Q: What is the warranty?
A: 12 months.
Q: Can I return after 30 days?
A: Yes, up to 45 days.
Q: What is the return policy?
"""
ask(few_shot)

# Role prompt
ask("You are a customer service agent. What is the return policy?")
```
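The last call above packs the role instruction into the user message; with the same legacy ChatCompletion API it could equally be sent as a system message. An illustrative variant, not part of the commit:

```python
# Illustrative variant: the agent role as a system message instead of inline user text
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a customer service agent."},
        {"role": "user", "content": "What is the return policy?"},
    ],
)
print(response['choices'][0]['message']['content'])
```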
prompt-engineering-assignment/prompts/langchain_faq.py (new file, mode 100644)

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import os

llm = ChatOpenAI(model_name="gpt-4", temperature=0)

template = PromptTemplate(
    input_variables=["question"],
    template="""
You are an intelligent FAQ assistant.
Q: {question}
A:"""
)

chain = LLMChain(llm=llm, prompt=template)

print(chain.run("How long is the warranty?"))
print(chain.run("Can I cancel my subscription anytime?"))
```
prompt-engineering-assignment/prompts/langchain_sentiment.py (new file, mode 100644)

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import os

llm = ChatOpenAI(model_name="gpt-4", temperature=0)

template = PromptTemplate(
    input_variables=["sentence"],
    template="Classify sentiment: \"{sentence}\". Respond with Positive/Negative/Neutral."
)

chain = LLMChain(llm=llm, prompt=template)

print(chain.run("The product is amazing!"))
print(chain.run("Worst experience ever."))
```
prompt-engineering-assignment/prompts/sentiment_zero_few_shot.py (new file, mode 100644)

```python
import openai
import os
from dotenv import load_dotenv

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

def ask(prompt):
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    print("Prompt:\n", prompt)
    print("Response:\n", response['choices'][0]['message']['content'])

# Zero-shot prompt
ask("Classify sentiment: 'I love this product'. Respond with Positive/Negative/Neutral.")

# Few-shot prompt with labelled examples
few_shot = """
Classify sentiment:
\"I hate it.\" -> Negative
\"It's fine.\" -> Neutral
\"I love it!\" -> Positive
Sentence: \"I am disappointed.\"
"""
ask(few_shot)

# Chain-of-thought style prompt
cot_prompt = """
Step-by-step analyze:
Sentence: \"The experience was terrible and support was rude.\"
Step 1: Identify keywords → terrible, rude
Step 2: Tone is negative
Sentiment: Negative
"""
ask(cot_prompt)
```
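Note that `cot_prompt` already contains the final `Sentiment: Negative` line, so the model only needs to restate a given answer. An illustrative variant that leaves the reasoning steps and the label to the model, reusing the `ask` helper above (not part of the commit):

```python
# Illustrative variant: ask the model to produce the reasoning and the final label itself
cot_open = """
Classify the sentiment of the sentence below. Work step by step:
Step 1: list the key words.
Step 2: describe the overall tone.
Step 3: answer with Positive/Negative/Neutral on the last line.
Sentence: "The experience was terrible and support was rude."
"""
ask(cot_open)
```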
prompt-engineering-assignment/requirements.txt (new file, mode 100644)

```
openai<1.0       # pinned: the prompt scripts use the legacy openai.ChatCompletion API, removed in openai 1.0
python-dotenv
langchain<0.2    # pinned: the scripts import ChatOpenAI from langchain.chat_models, a pre-0.2 module layout
nemo-guardrails
```
prompt-engineering-assignment/test_cases/run_all_tests.py (new file, mode 100644)

```python
print("\n>>> TEST: Sentiment (Zero/Few/CoT)")
exec(open("prompts/sentiment_zero_few_shot.py").read())

print("\n>>> TEST: FAQ Prompting")
exec(open("prompts/faq_prompting.py").read())

print("\n>>> TEST: LangChain Sentiment")
exec(open("prompts/langchain_sentiment.py").read())

print("\n>>> TEST: LangChain FAQ")
exec(open("prompts/langchain_faq.py").read())
```
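The runner above `exec()`s each script into one shared namespace and assumes it is launched from the `prompt-engineering-assignment/` directory. A sketch of an alternative runner that isolates each script in its own interpreter process (the `subprocess` approach is an assumption, not what the commit uses):

```python
import subprocess
import sys
from pathlib import Path

# Assumed layout: this file lives in test_cases/, the scripts in ../prompts/
ROOT = Path(__file__).resolve().parent.parent

SCRIPTS = [
    "prompts/sentiment_zero_few_shot.py",
    "prompts/faq_prompting.py",
    "prompts/langchain_sentiment.py",
    "prompts/langchain_faq.py",
]

for script in SCRIPTS:
    print(f"\n>>> TEST: {script}")
    # Fresh interpreter per script so state from one test cannot leak into the next
    subprocess.run([sys.executable, script], cwd=ROOT, check=True)
```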