AI Internship: 15-Day Training Plan
Day 1: Python Fundamentals
Objectives:
- Learn core Python syntax: variables, data types (lists, dicts, sets, tuples).
- Control flow (if/else, for/while), list comprehensions, and functions.
- Basic file I/O (reading and writing .txt and .csv files).
Activities
- Write a script that reads a CSV of student names and grades, computes average scores, and writes a summary file.
- Implement small exercises: FizzBuzz, reversing a string, counting word frequencies in a text.
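A minimal sketch of the CSV-summary activity above, assuming a grades.csv file with header columns name,grade (adjust the names to your file):

import csv

# Read student names and grades (assumes columns "name" and "grade").
with open("grades.csv", newline="") as f:
    rows = list(csv.DictReader(f))

grades = [float(row["grade"]) for row in rows]
average = sum(grades) / len(grades)

# Write a short summary file.
with open("summary.txt", "w") as f:
    f.write(f"Students: {len(rows)}\n")
    f.write(f"Average grade: {average:.2f}\n")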
Day 2: Python for Data Science (NumPy & pandas)
Objectives:
- Manipulate numeric arrays with NumPy: vectorized operations, indexing, and broadcasting.
- Use pandas to load tabular data, explore DataFrames, filter/group/aggregate.
Activities
- Load the Iris dataset via pandas, compute mean/standard deviation for each feature.
- Use NumPy to generate a random 2D array (100×3), compute column-wise sums and means.
- Clean a CSV with missing values (e.g., drop or impute NaNs), then save the cleaned version.
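A quick sketch of the NumPy and cleaning activities above; data.csv is a hypothetical file with some missing numeric values:

import numpy as np
import pandas as pd

# Random 100x3 array; axis=0 gives column-wise aggregates.
arr = np.random.rand(100, 3)
print("sums:", arr.sum(axis=0), "means:", arr.mean(axis=0))

# Impute NaNs with column means and save the cleaned copy
# (data.csv is a placeholder name; use dropna() to drop rows instead).
df = pd.read_csv("data.csv")
df = df.fillna(df.mean(numeric_only=True))
df.to_csv("data_clean.csv", index=False)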
Day 3: Introduction to Regression
Objectives:
- Grasp supervised learning basics: features, target, train/test split.
- Linear regression theory: hypothesis function h(x) = w_0 + w_1 x, cost function J(w), least squares.
- Understand overfitting vs. underfitting, bias–variance concepts.
Activities
- Plot a small synthetic dataset (e.g., house size vs. price). Fit a line by deriving w_1 and w_0 analytically.
- Visualize residuals to assess fit quality (scatter of actual vs. predicted).
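For the analytic fit in the first activity: least squares for h(x) = w_0 + w_1 x gives w_1 = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and w_0 = ȳ − w_1 x̄. A sketch with made-up house-size/price numbers:

import numpy as np

# Made-up house size (m^2) vs. price data.
x = np.array([50.0, 70.0, 90.0, 110.0, 130.0])
y = np.array([150.0, 200.0, 240.0, 290.0, 330.0])

# Closed-form least squares for h(x) = w0 + w1*x.
w1 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
w0 = y.mean() - w1 * x.mean()
print(f"h(x) = {w0:.2f} + {w1:.2f} x")

residuals = y - (w0 + w1 * x)   # scatter these against y to assess fit quality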
Day 4: Implementing Linear Regression from Scratch
Objectives:
- Code gradient descent for linear regression in pure NumPy: cost computation, weight updates.
- Learn scikit-learn’s LinearRegression and compare results.
- Evaluate with MSE and R^2.
Activities
- Build a function gradient_descent(X, y, lr, epochs) that returns learned weights over iterations. Plot cost vs. epochs.
- Use scikit-learn on the California Housing (or similar) dataset (the classic Boston Housing set was removed in scikit-learn 1.2): fit the model, compute metrics, and compare to your custom implementation.
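One possible shape for the gradient_descent function from the first activity (batch updates, MSE cost; a sketch, not the only way to structure it):

import numpy as np

def gradient_descent(X, y, lr=0.01, epochs=1000):
    """Batch gradient descent for linear regression with MSE cost.

    Returns the learned weights (bias first) and per-epoch costs,
    so cost-vs-epochs can be plotted directly.
    """
    X = np.c_[np.ones(len(X)), X]             # prepend a bias column
    w = np.zeros(X.shape[1])
    costs = []
    for _ in range(epochs):
        error = X @ w - y
        costs.append((error ** 2).mean() / 2)  # J(w)
        w -= lr * (X.T @ error) / len(y)       # gradient step
    return w, costs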
Day 5: Neural Networks & Gradient Descent Theory
Objectives:
- Understand the evolution from the single perceptron to multilayer neural networks.
- Forward pass: computing activations. Loss functions (MSE for regression; cross-entropy for classification).
- Backpropagation: chain rule, gradient calculations, weight updates.
Activities
- Manually work through a 2-input, 1-hidden-unit example: compute outputs and gradients by hand for one training example.
- Draw a simple computational graph (node for each operation) and annotate partial derivatives.
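To check the by-hand gradients from the first activity, here is the same 2-input, 1-hidden-unit example in NumPy (all weight values are made up):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up weights: 2 inputs -> 1 sigmoid hidden unit -> 1 linear output.
x = np.array([1.0, 0.5]); y = 1.0
w1, b1 = np.array([0.3, -0.2]), 0.1
w2, b2 = 0.7, 0.0

# Forward pass
z = w1 @ x + b1
h = sigmoid(z)
y_hat = w2 * h + b2
loss = 0.5 * (y_hat - y) ** 2            # MSE for a single example

# Backward pass (chain rule, output side first)
d_yhat = y_hat - y                       # dL/dy_hat
d_w2, d_b2 = d_yhat * h, d_yhat
d_z = d_yhat * w2 * h * (1 - h)          # sigmoid'(z) = h(1 - h)
d_w1, d_b1 = d_z * x, d_z
print(loss, d_w1, d_w2)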
Day 6: Implementing a Simple Neuron (Perceptron)
Objectives:
- Build a Perceptron class that accepts binary inputs and outputs a binary prediction (step activation).
- Implement the perceptron learning rule: w <- w + lr * (y - y_hat) * x (and b <- b + lr * (y - y_hat) for the bias).
- Train on basic logic gates (AND, OR).
Activities
- Code a Perceptron class in Python/NumPy with fit(X, y) and predict(X) methods.
- Train on AND/OR datasets; plot classification boundary on a 2D scatter.
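A compact sketch of the class described above, trained on AND to show the fit(X, y)/predict(X) interface:

import numpy as np

class Perceptron:
    """Step-activation perceptron trained with the perceptron rule."""

    def __init__(self, lr=0.1, epochs=20):
        self.lr, self.epochs = lr, epochs

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        self.w = np.zeros(X.shape[1])
        self.b = 0.0
        for _ in range(self.epochs):
            for xi, yi in zip(X, y):
                error = yi - self.predict(xi)
                self.w += self.lr * error * xi   # w <- w + lr*(y - y_hat)*x
                self.b += self.lr * error
        return self

    def predict(self, X):
        return (np.dot(X, self.w) + self.b >= 0).astype(int)

# Train on the AND gate:
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
print(Perceptron().fit(X, [0, 0, 0, 1]).predict(X))  # -> [0 0 0 1]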
Day 7: Multilayer Perceptron (MLP) from Scratch
Objectives:
- Extend from a single perceptron to an MLP with one hidden layer (e.g., two inputs → two hidden neurons → one output).
- Implement forward pass (affine → activation) and backprop manually.
- Experiment with activation functions: sigmoid, ReLU.
Activities
- Write a small 2-layer MLP (NumPy only) for classifying a toy dataset (e.g., XOR, Moons).
- Train with gradient descent (batch or mini-batch), track training loss over epochs, and visualize decision regions.
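A NumPy-only sketch for XOR (sigmoid activations, batch gradient descent, sigmoid-plus-cross-entropy gradient). With this seed it should separate XOR, but convergence depends on initialization, so rerun with another seed or more epochs if needed:

import numpy as np

rng = np.random.default_rng(0)

# XOR: not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # 2 inputs -> 4 hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # 4 hidden -> 1 output
lr = 1.0

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward: hidden layer
    out = sigmoid(h @ W2 + b2)               # forward: output
    d_out = out - y                          # cross-entropy + sigmoid gradient
    d_W2, d_b2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * h * (1 - h)       # backprop through sigmoid
    d_W1, d_b1 = X.T @ d_h, d_h.sum(0)
    for p, g in ((W1, d_W1), (b1, d_b1), (W2, d_W2), (b2, d_b2)):
        p -= lr * g / len(X)                 # averaged batch update

print(out.round().ravel())                   # should approach [0. 1. 1. 0.]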
Day 8: Generative AI Theory
Objectives:
- Learn key generative model families: autoencoders (AE), variational autoencoders (VAE), generative adversarial networks (GAN), and transformers.
- Overview of the transformer architecture: self-attention, multi-head attention, positional encoding.
- Understand evaluation metrics: reconstruction loss (AE/VAE), adversarial loss (GAN), perplexity (LMs).
Activities
- Sketch/build the transformer block on paper: show how queries, keys, and values interact in self-attention.
- Compare VAE vs. GAN objectives: implement a simple autoencoder loss on MNIST in NumPy (forward/backward for one layer).
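To make the paper sketch of the attention block concrete, here is single-head scaled dot-product self-attention in NumPy (random toy matrices, no masking, one head only):

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (no masking)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # (seq, seq)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)     # row-wise softmax
    return weights @ V                            # (seq, d_v)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                 # 4 tokens, embedding dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)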
Day 9: Working with Generative AI Models (Hands-On)
Objectives:
- Load and run a pretrained transformer model from Hugging Face.
- Optionally, fine-tune a small model on a custom dataset (e.g., fine-tune GPT-2 on short text).
Activities
- Write a Python script using transformers that takes a text prompt and outputs a continuation.
- (Optional) Fine-tune a small model on a small text corpus (e.g., product reviews) for 1 epoch and generate samples.
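A minimal sketch using Hugging Face's pipeline API, assuming transformers is installed and the (small) gpt2 weights can be downloaded:

from transformers import pipeline

# Small, freely downloadable model; swap in any causal LM you prefer.
generator = pipeline("text-generation", model="gpt2")

prompt = "In machine learning, gradient descent is"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])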
Day 10: Introduction to Agents (LangChain & LangGraph Overview)
Objectives:
- Define an “agent”: a system that perceives, plans, and acts by interacting with environments or tools.
- Overview of LangChain: chaining LLM calls, tool integrations, prompt templates.
- Overview of LangGraph: graph-based orchestration of LLMs and tool calls; designing pipelines.
Activities
- Install LangChain and LangGraph (pip install langchain langgraph) and explore documentation.
- Build a minimal LangChain “tool” (e.g., a calculator function) and wrap it so that the chain can call it when the prompt contains math questions.
- In LangGraph, create a simple graph that:
- Node A: takes user input,
- Node B: calls an LLM to classify intent (e.g., “calculator” vs. “general chat”),
- Node C: routes to either the calculator tool or a conversational LLM.
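A rough sketch of that three-node graph, assuming LangGraph's StateGraph API (which evolves quickly, so check the current docs); the digit-based route function below is only a stand-in for the LLM intent classifier in Node B:

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    text: str
    result: str

def classify(state: State) -> State:
    # Node B stand-in: a real version would call an LLM here.
    return state

def route(state: State) -> str:
    # Crude intent rule standing in for the LLM's decision.
    return "calculator" if any(c.isdigit() for c in state["text"]) else "chat"

def calculator(state: State) -> State:
    # Demo only: never eval() untrusted input in real code.
    return {"text": state["text"], "result": str(eval(state["text"]))}

def chat(state: State) -> State:
    return {"text": state["text"], "result": "(conversational LLM reply here)"}

graph = StateGraph(State)
graph.add_node("classify", classify)
graph.add_node("calculator", calculator)
graph.add_node("chat", chat)
graph.add_edge(START, "classify")             # Node A: user input enters here
graph.add_conditional_edges("classify", route,
                            {"calculator": "calculator", "chat": "chat"})
graph.add_edge("calculator", END)
graph.add_edge("chat", END)

app = graph.compile()
print(app.invoke({"text": "2 + 2", "result": ""}))   # routes to the calculator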
Day 11: MCP (Model Context Protocols) Theory & Tool-Calling Basics
Objectives:
- Define MCP: Model Context Protocols are structured prompts and metadata schemas that tell an LLM how to call external tools or APIs.
- Understand how to encode “context” (user intent, tool signature, relevant metadata) so that the LLM can decide which tool to call and with what arguments.
- Study examples of JSON-based “MCP” schemas that specify:
- Tool name
- Input arguments schema
- Expected output schema
- Contextual instructions (“If user asks for calculation, call calculator tool with these args”).
Activities
- Draft a simple MCP schema in JSON for a “weather” tool:
{
  "tool_name": "get_current_weather",
  "description": "Returns temperature and conditions for a city",
  "input_schema": {
    "city": "string"
  },
  "output_schema": {
    "temperature": "float",
    "condition": "string"
  }
}
- Discuss how to embed that schema into a prompt so an LLM can generate a valid function call (a sketch follows this list). Example prompt snippet: 'User wants current weather. Given the tool definitions, reply with JSON like: { "tool": "get_current_weather", "args": { "city": "London" } }.'
- Walk through how LangChain’s LLMChain or AgentExecutor can be configured with these “tool descriptors” to automatically route calls.
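A small pure-Python sketch of the embed-and-parse round trip (the llm_reply string is hard-coded where a real LLM call would go):

import json

weather_tool = {
    "tool_name": "get_current_weather",
    "description": "Returns temperature and conditions for a city",
    "input_schema": {"city": "string"},
    "output_schema": {"temperature": "float", "condition": "string"},
}

system_prompt = (
    'You may call the tool below. Reply ONLY with JSON of the form '
    '{"tool": <tool_name>, "args": {...}}.\n'
    f"Tool definition: {json.dumps(weather_tool)}"
)

llm_reply = '{"tool": "get_current_weather", "args": {"city": "London"}}'  # stand-in
call = json.loads(llm_reply)
assert call["tool"] == weather_tool["tool_name"]
print(call["args"])  # {'city': 'London'}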
Day 12: Implementing a Simple MCP
Objectives:
- Build a minimal MCP system:
- Define a tool (e.g., calculator, weather, Wikipedia search).
- Create a JSON-based protocol that the LLM uses to decide which tool to invoke.
- Parse the LLM’s JSON response and call the corresponding Python function.
- Use LangChain’s Tool class to register your functions along with descriptions (serving as your MCP).
Activities
- In Python, implement two functions:
- def calculator(a: float, b: float, op: str) -> float: performs basic +, -, *, /.
- def simple_search(query: str) -> str: does a dummy string lookup (e.g., returns “Result for {query}”).
- Using LangChain’s Tool.from_function, register both functions with proper descriptions. Write code to create an LLMChain or AgentExecutor that:
- Receives a user prompt (e.g., “What is 12 * 7?”).
- Embeds MCP schema into system prompt so that the LLM knows about both tools.
- Parses the LLM response to JSON, then dispatches to the correct Python function.
- Test end-to-end: type several prompts (“Divide 100 by 4”, “Search for Python decorators”) and verify correct tool invocation; a sketch of the dispatch step follows.
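The LangChain registration step (Tool.from_function with a name and description) is version-dependent, so here is the framework-free core: the two functions plus a dispatcher that parses the LLM's JSON and calls the matching one:

import json
import operator

def calculator(a: float, b: float, op: str) -> float:
    ops = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}
    return ops[op](a, b)

def simple_search(query: str) -> str:
    return f"Result for {query}"

TOOLS = {"calculator": calculator, "simple_search": simple_search}

def dispatch(llm_json: str) -> str:
    """Parse the LLM's JSON tool call and invoke the matching function."""
    call = json.loads(llm_json)
    return str(TOOLS[call["tool"]](**call["args"]))

# Simulated LLM output for "What is 12 * 7?":
print(dispatch('{"tool": "calculator", "args": {"a": 12, "b": 7, "op": "*"}}'))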
Day 13: Simple Agent Creation with LangChain
Objectives:
- Learn to build a rule-augmented or LLM-driven agent in LangChain.
- Use LangChain’s ConversationChain, LLMChain, and AgentExecutor abstractions.
- Integrate at least two tools (e.g., calculator + Wikipedia) into a single agent.
Activities
- Define two tools with LangChain:
- calculator (from Day 12)
- wiki_lookup(query: str) -> str (calls the Wikipedia API and returns the first paragraph).
- Create a ZeroShotAgent (or OpenAIFunctionsAgent) that:
- At runtime, inspects the user’s prompt.
- If it recognizes a math operation, calls calculator.
- Otherwise, calls wiki_lookup.
- Build a small CLI loop: prompt user → agent processes → prints either computed result or a wiki excerpt.
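A sketch of the agent plus CLI loop using LangChain's legacy initialize_agent API (deprecated in recent releases in favor of LangGraph-based agents, so verify against your installed version; assumes OPENAI_API_KEY is set and wiki_lookup is stubbed):

from langchain.agents import AgentType, Tool, initialize_agent
from langchain_openai import ChatOpenAI

def calculator(expression: str) -> str:
    # Demo only: replace eval() with a safe arithmetic parser in practice.
    return str(eval(expression))

def wiki_lookup(query: str) -> str:
    # Stub: wire this to the Wikipedia API and return the first paragraph.
    return f"(first Wikipedia paragraph for {query!r})"

tools = [
    Tool(name="calculator", func=calculator,
         description="Evaluates arithmetic expressions such as '12 * 7'."),
    Tool(name="wiki_lookup", func=wiki_lookup,
         description="Looks up a topic and returns a short summary."),
]

agent = initialize_agent(tools, ChatOpenAI(temperature=0),
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

while True:  # minimal CLI loop
    query = input("You: ")
    if query.lower() in {"quit", "exit"}:
        break
    print("Agent:", agent.run(query))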
Day 14: Advanced Agent Work with LangGraph
Objectives:
- Dive into LangGraph: orchestrate multiple LLM chains and tools via a directed graph.
- Understand node types: LLMNode (runs an LLM), ToolNode (executes registered tool), and RouterNode (decides routing logic).
- Build a graph that:
- Takes user input.
- Runs an IntentClassifierNode (LLM) to label intent (“calculator” vs. “summarize” vs. “general chat”).
- Routes to either:
- CalculatorNode (ToolNode)
- SummarizerNode (LLMNode calling a summarization model)
- ChatNode (LLMNode for generic chat).
Activities
- Install and explore LangGraph’s examples.
- Implement an IntentClassifierNode using an LLM (e.g., a prompt that returns { "intent": "calculator", "args": {...} } or { "intent": "summarize", "args": { "text": "..." }}).
- Connect that to three downstream nodes:
- A ToolNode that invokes your calculator from Day 12.
- An LLMNode that takes a long passage of text and returns a one-sentence summary (use a small summarization-capable LLM).
- A generic LLMNode that does open-ended chat if neither of the above.
- Test the graph end-to-end: type “Summarize the following paragraph: …”, “What is 15 + 27?”, and “Tell me a joke”.
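The routing piece can reuse the Day 10 StateGraph skeleton; the new part is the classifier prompt and the conditional-edge function that reads its JSON, sketched here (node names are illustrative):

import json

CLASSIFIER_PROMPT = (
    "Classify the user message as one of: calculator, summarize, chat. "
    'Reply ONLY with JSON like {"intent": "...", "args": {...}}.'
)

def route(state: dict) -> str:
    """Conditional-edge function: pick a branch from the classifier's JSON."""
    intent = json.loads(state["classifier_output"])["intent"]
    return intent if intent in ("calculator", "summarize") else "chat"

# graph.add_conditional_edges("intent_classifier", route,
#     {"calculator": "calculator_tool", "summarize": "summarizer", "chat": "chat"})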
Day 15: Capstone Project & Integration
Objectives:
- Combine MCP, LangChain, and LangGraph concepts into a single mini-project.
- Demonstrate:
- A custom MCP schema (JSON) that allows the LLM to select and call tools.
- A LangChain agent that uses that MCP to perform tasks.
- A LangGraph pipeline that includes:
- Intent classification
- Tool invocation via MCP
- Post-processing and response formatting.
Activities (Choose one of the two suggested mini-projects):
- “Study Buddy Agent”
- Features:
- Calculator tool: solves math queries.
- Wiki tool: fetches first paragraph for academic topics.
- Summarizer tool: summarizes long passages.
- Quiz generator: calls a simple Python function that, given a topic, returns three multiple-choice questions (hardcode a small bank).
- Workflow:
- User: “Explain the causes of World War I.”
- Agent: Detects “explain topic” → calls wiki tool → gets paragraph → calls summarizer → returns concise summary.
- User: “Generate 3 quiz questions on that summary.”
- Agent: Calls quiz-generator function via MCP, then returns questions.
- Deliverables:
- JSON definitions for all tools (MCP schemas).
- LangChain agent code that can parse user queries into JSON tool calls.
- LangGraph graph that first classifies the request (“explain” vs. “quiz” vs. “calc”) and routes to the correct subgraph.
- “Office Assistant Agent”
- Features:
- Calendar tool: Python function that schedules reminders (can write to a local JSON “calendar.json”).
- Email-skeleton tool: given recipient, subject, body, returns a formatted email template (just a string).
- To-Do tool: add/list tasks in a simple JSON file.
- Search tool: use Wikipedia or a local Q&A index (toy) to fetch definitions.
- Workflow:
- User: “Set a meeting with Suraj at 3 PM tomorrow.”
- Agent: Recognizes calendar intent → calls calendar_tool with { "description": "Meeting with Suraj", "datetime": "2025-06-06T15:00:00" }.
- User: “Draft an email to Alicia about PoC next week.”
- Agent: Calls email_skeleton_tool → returns a template email.
- User: “What’s a Node.js runtime?”
- Agent: Calls search_tool → fetches first paragraph from Wikipedia.
- Deliverables:
- MCP schemas for each tool.
- LangChain agent logic using agent_executor with those tools.
- LangGraph pipeline that first runs an IntentClassifierNode, then invokes the appropriate ToolNode or LLMNode.
- Presentation:
- Each intern (or small team) demos their capstone:
- Show MCP JSON definitions.
- Run a few sample queries end-to-end, highlighting how the LLM decides which tool to call.
- Explain how LangChain and LangGraph fit together (e.g., “LangChain registers tools; LangGraph orchestrates the high-level routing”).
Resources & Tips
- LangChain Documentation:
- https://python.langchain.com/docs/
- Focus on sections: “Tools & Agents”, “Function Calling”, “LLMChain”, and “Creating Custom Tools.”
- LangGraph GitHub & Docs:
- https://github.com/langchain-ai/langgraph
- Study example pipelines that show how to combine multiple Node types.
- MCP Design Patterns:
- Treat each tool as a JSON schema with:
- name
- description
- input_schema (fields + types)
- output_schema
- Embed these schemas into the LLM’s system prompt so it “knows” exactly how to format function-call requests.
- Model Choice & API Keys:
- Use a smaller LLM (e.g., GPT-3.5-Turbo) for rapid iterations.
- Obtain API keys and store them as environment variables (e.g., OPENAI_API_KEY).
- Testing & Debugging:
- Log every step: when the LLM returns JSON, validate it with jsonschema and catch errors.
- For LangGraph, visualize your graph (e.g., print node names and edges) to ensure routing is correct.
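For the validation step, a small sketch using the jsonschema package; the schema matches the {"tool": ..., "args": ...} call shape from Days 11-12:

import json
from jsonschema import ValidationError, validate

call_schema = {
    "type": "object",
    "properties": {"tool": {"type": "string"}, "args": {"type": "object"}},
    "required": ["tool", "args"],
}

llm_reply = '{"tool": "get_current_weather", "args": {"city": "London"}}'
try:
    validate(json.loads(llm_reply), call_schema)
except (ValidationError, json.JSONDecodeError) as err:
    print("Bad tool call from LLM:", err)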
- Pair Programming & Code Review:
- Have interns review each other’s MCP schemas for completeness (e.g., are all required fields defined?).
- Validate that tool implementations match the declared input/output schemas.
By following this 15-day plan, with a clear emphasis on LangChain, LangGraph, and MCP (Model Context Protocols), interns will progress from Python basics all the way to a working AI agent that can call external tools via structured protocols.