
# Agentic-Cohort

Agentic AI for business leaders

## What exactly is an AI Agent?

At its core, an agent is an intelligent system capable of perceiving its environment, making decisions, and taking actions to achieve specific goals. While traditional AI systems might simply compute or follow static rules, agents are designed for autonomy, adaptability, and interaction with their surroundings. They take the power of generative AI, particularly Large Language Models (LLMs), a step further.

Instead of merely assisting you, agents can work alongside you or independently on your behalf. They can handle a range of tasks, from simple responses to complicated, multi-step assignments. AI agents move beyond static rules or simple generation: they give LLMs the capability to reason, plan, and act with tools in an environment (your computer, the internet, or the physical world) to achieve a goal.

## What is Prompting?

Prompting is essentially the means by which LLMs are programmed or guided. A prompt is a set of instructions provided to an LLM that customizes, enhances, or refines its capabilities. For an agent, these instructions (including system prompts, user requests, and tool descriptions) are the components that define its behavior. LLMs provide the raw reasoning power; prompting is the method by which we "program" them.

## Role-Based Prompting

A Persona (or Role) defines how an agent should behave: its personality, tone, and perspective.

The detailed, role-based system prompt versus a generic one:

```python
# The detailed, role-based system prompt
system_prompt_analyst = """
You are a senior Cybersecurity Analyst providing a formal threat assessment. Your tone is objective, cautious, and precise.
When analyzing a potential phishing email, do the following:
  1. State your overall assessment clearly (e.g., "High-Confidence Phishing Attempt").
  2. Do not speculate or use casual language.
  3. List the specific red flags you've identified as a bulleted list. For each flag, provide a brief explanation.
  4. Conclude with a clear, actionable recommendation for the end-user.
"""

# The generic system prompt
system_prompt_generic = "You are a helpful assistant."

# The user's request with the email data
# (suspicious_email_text is assumed to be defined earlier)
user_prompt = f"""Please analyze the following email and tell me if it's safe:

{suspicious_email_text}
"""

# Call the model with the generic prompt
response = get_completion(system_prompt_generic, user_prompt)
print(response)

# Call the model again with the same user_prompt, now with the role-based prompt
response = get_completion(system_prompt_analyst, user_prompt)
print(response)
```
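The snippet above relies on a `get_completion` helper that is not shown. A minimal sketch of what it might look like, assuming an OpenAI-style chat-completions client; the `client` parameter is injectable so the helper can be exercised without a live API key:

```python
def get_completion(system_prompt, user_prompt, client=None, model="gpt-4o-mini"):
    """Send a system prompt + user prompt to a chat model and return the text reply."""
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]
    if client is None:
        # Lazily create a real client only when none is injected.
        from openai import OpenAI
        client = OpenAI()
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content
```

The model name and wrapper signature are assumptions; swap in whichever provider and model you actually use.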

## Evaluating Agent Personas

**Consistency & Persona Adherence:**

Is the agent staying within its defined role? This is essential, for instance, for a customer-support agent. If the agent is supposed to be a hospital assistant, does it stay within that role even when asked about non-medical topics? One approach is to pass a mock conversation to the agent and have another LLM evaluate the output.
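The LLM-as-judge idea above can be sketched as follows; the `judge` callable, the prompt template, and the PASS/FAIL verdict format are illustrative assumptions, not a fixed API:

```python
# Hypothetical judge prompt: a second LLM grades persona adherence.
JUDGE_TEMPLATE = """You are evaluating whether an AI assistant stayed in its assigned role.

Role: {role}
Conversation:
{conversation}

Answer with exactly PASS if every assistant turn stays in role, otherwise FAIL."""

def evaluate_persona(role, conversation, judge):
    """Return True if the judge model says the agent stayed in its persona."""
    verdict = judge(JUDGE_TEMPLATE.format(role=role, conversation=conversation))
    return verdict.strip().upper().startswith("PASS")
```

Here `judge` would wrap any chat model (e.g. the `get_completion` helper with a fixed system prompt).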

**Robustness Testing:**

Assess resilience against adversarial attacks. This is critical if the agent handles sensitive information or could cause harm.

Here is how role-based prompting can be used to create convincing historical-figure personas:

  1. Plain Prompt: We started with a simple request involving no role-playing.
  2. Baseline Prompt: We then used a simple request for the AI to role-play as Albert Einstein.
  3. Persona-Specific Attributes: We added details about personality traits, speech style, expertise, and historical context.
  4. Tone and Style: We further refined the prompt with specific instructions about conversational tone and linguistic style.

## Chain-of-Thought (CoT) Prompting

References: https://arxiv.org/abs/2201.11903 (Wei et al., Chain-of-Thought Prompting) and https://arxiv.org/abs/2205.11916 (Kojima et al., "Let's think step by step").

  1. Zero-shot CoT: simply append a cue such as "Let's think step by step" to the prompt. This is the simplest approach.
  2. Few-shot CoT: include a few examples in your prompt that show the problem, the reasoning, and the answer, e.g., "Is '3 + 1 = 4' correct? Work out your own solution step by step, then compare it to mine."
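A few-shot CoT prompt along these lines might be assembled like this; the worked examples are invented for illustration:

```python
# Few-shot CoT: each example shows the problem, the reasoning, and the answer,
# then the real question is appended with the same "Let's think step by step" cue.
FEW_SHOT_COT = """Q: A farmer has 3 pens with 4 sheep each. He sells 5 sheep. How many remain?
A: Let's think step by step. 3 pens x 4 sheep = 12 sheep. 12 - 5 = 7. The answer is 7.

Q: Is '3 + 1 = 4' correct?
A: Let's think step by step. 3 + 1 = 4, which matches the claim. The answer is yes.

Q: {question}
A: Let's think step by step."""

prompt = FEW_SHOT_COT.format(
    question="A train travels 60 km in 1.5 hours. What is its average speed?"
)
```

The resulting `prompt` would then be sent to the model as the user message.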

## Introducing ReAct (Reason + Act) Prompting

CoT is great for internal reasoning, but what if the task requires getting new information or interacting with the outside world? That's where ReAct comes in. ReAct stands for Reason + Act. It's a prompting technique that synergizes reasoning and acting in LLMs by interleaving thought steps with action steps.

The core of ReAct is its iterative loop: Thought, Action, Observation.

**Thought:** The model internally reasons and plans the next specific step required to progress towards the overall task goal. It analyzes the situation, figures out the necessary steps, and decides on an action.

**Action:** Based on its plan, the model specifies an external tool to use (like web search, a calculator, or an API) along with its parameters. An orchestrator program then executes this tool. Tools are functions or services the agent can invoke.

**Observation:** The model receives the results or feedback from the executed action (e.g., search results, a calculator answer, confirmation that an email was sent). This observation, sometimes referred to as information from the "environment" when it comes from the outside world (like a weather report), feeds back into the model's next Thought, allowing it to refine its plan or take subsequent actions.
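The Thought/Action/Observation loop can be sketched as a simple orchestrator. The THINK/ACT text format, the tool registry, and the `model` callable below are assumptions for illustration, not a specific framework's API:

```python
import re

def react_loop(model, tools, user_request, max_steps=5):
    """Run a minimal ReAct loop: the model emits THINK/ACT messages, the
    orchestrator executes tools and feeds OBSERVATIONs back in."""
    transcript = f"User: {user_request}\n"
    for _ in range(max_steps):
        reply = model(transcript)  # one THINK/ACT message from the model
        transcript += reply + "\n"
        match = re.search(r"ACT:\s*(\w+)\((.*)\)", reply, re.DOTALL)
        if not match:
            break  # the model did not produce a parsable action
        name, raw_args = match.group(1), match.group(2)
        if name == "final_answer":
            return raw_args  # the agent is done
        observation = tools[name](raw_args)  # execute the chosen tool
        transcript += f"OBSERVATION: {observation}\n"
    return None  # no final answer within the step budget
```

A production orchestrator would parse arguments properly (e.g. as JSON) and handle unknown tool names; this sketch only shows the loop's shape.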

**Simple Metrics:** Easy-to-implement measures, like median response length, are useful for evaluating and monitoring language models for usability and cost efficiency. In all these cases, track performance as you develop your algorithms and in production over time, where the model is exposed to a constant stream of real-world data; this lets you improve and refine your agent. This evaluation often leads to an iterative refinement process: you adjust the prompt based on the AI's performance, clarifying instructions or adding more examples, until the output aligns better with your expectations. Treat LLM applications like software development, where testing and feedback are crucial for improvement.
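As an example, the median-response-length metric mentioned above takes only a few lines:

```python
import statistics

def median_response_length(responses):
    """Median length, in characters, over a batch of model responses."""
    return statistics.median(len(r) for r in responses)
```

Tracked over time, a sudden jump or drop in this number can flag regressions in verbosity or cost before users notice them.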


Scenario: An AI assistant for a financial analyst needs to gather data about stocks and news.

The ReAct System Prompt:

You are a diligent financial analyst assistant. You will help users by gathering and comparing stock information and recent news.

To do this, you will use a step-by-step reasoning process. At each step, you will respond with a single THINK/ACT message in the following format:

THINK: First, you will reason about the user's request and figure out the next logical step to take. You will formulate a plan to gather the necessary information. After you have all the information, you will reason about how to construct the final answer.

ACT: Based on your thought process, you will call ONE of the available tools to take the next step. If you have gathered all the necessary information and are ready to provide the final answer, use the final_answer tool.


Available Tools

Here are the tools you can use:

  1. get_stock_quote(ticker: str)

    • Use this to get the latest stock information for a given ticker symbol.
    • Example: get_stock_quote(ticker="$MC")
  2. search_financial_news(company_name: str)

    • Use this to search for recent financial news articles about a specific company.
    • Example: search_financial_news(company_name="MegaCorp")
  3. final_answer(summary: str)

    • Use this tool ONLY when you have all the information needed to respond to the user's request.
    • Example: final_answer(summary="MegaCorp's P/E ratio is 25.4.")
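The three tools above could be backed by plain Python functions behind a name-to-callable registry that the orchestrator dispatches on. The return values here are placeholder data, not a real market feed:

```python
def get_stock_quote(ticker: str) -> dict:
    # Placeholder data; a real implementation would call a market-data API.
    return {"ticker": ticker, "price": 100.0, "pe_ratio": 25.4}

def search_financial_news(company_name: str) -> list:
    # Placeholder data; a real implementation would query a news API.
    return [f"No recent headlines found for {company_name}"]

def final_answer(summary: str) -> str:
    # Terminates the loop; the summary is returned to the user as-is.
    return summary

# Registry the orchestrator uses to look up a tool by the name in the ACT line.
TOOLS = {
    "get_stock_quote": get_stock_quote,
    "search_financial_news": search_financial_news,
    "final_answer": final_answer,
}
```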

Example Interaction

Here is an example of how you should respond.

User: Compare the current P/E ratio of MegaCorp ($MC) to its main competitor, Innovate Inc ($II). Is there any recent news that might affect MegaCorp's stock price?

AI Assistant:
THINK: I need to answer two questions: one about P/E ratios and one about news. I will start by getting the P/E ratio for the first stock, MegaCorp.
ACT: get_stock_quote(ticker="$MC")

**(After this, the system would provide an OBSERVATION with the result of the tool call, and the loop would continue until the final_answer tool is used.)**
