Build & Deploy AI Agents
Complete guide to building and deploying your own AI agents using the NEST framework for MumbaiHacks
What is NEST/NANDA?
NEST (NANDA Ecosystem Standard) is a framework for building AI agents that serve a standard A2A (agent-to-agent) endpoint, register themselves with a shared agent index, and can optionally use MCP tools and multiple LLM providers.
1. Prerequisites
What you'll need before getting started
Required Software
Python 3.10+
python --version

pip
Python package manager
ngrok
For exposing your local agent to the internet
Required API Keys (Multi-LLM Support)
Choose your LLM provider:
- • Anthropic API Key (Claude models)
- • OpenAI API Key (GPT models)
- • Google AI Studio API Key (Gemini models)
Smithery API Key (optional)
For MCP tools integration (LLM provider agnostic)
Install ngrok
# macOS
brew install ngrok

# Linux
curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc
echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | sudo tee /etc/apt/sources.list.d/ngrok.list
sudo apt update && sudo apt install ngrok

# Set auth token (get from https://dashboard.ngrok.com/get-started/your-authtoken)
ngrok authtoken YOUR_AUTH_TOKEN
2. Environment Setup
Install NEST framework and create your project
Install NEST Framework & Dependencies
# Install NEST framework
pip install git+https://github.com/projnanda/NEST.git@v1

# Install additional dependencies
pip install pyngrok python-dotenv
This installs nanda-core with Flask, Anthropic/OpenAI SDKs, A2A protocol, MCP support, and ngrok tunnel integration.
Create Project Directory
mkdir my-agent-project
cd my-agent-project
Create .env File
# API Keys (at least one required - Multi-LLM Support)
ANTHROPIC_API_KEY=sk-ant-api03-...   # For Claude models
OPENAI_API_KEY=sk-proj-...           # For GPT models
GOOGLE_API_KEY=AIza...               # For Gemini models

# Agent Configuration
AGENT_ID=my-custom-agent-001
AGENT_NAME=My Custom Agent
AGENT_DOMAIN=general
AGENT_SPECIALIZATION=helpful assistant
AGENT_DESCRIPTION=A custom AI agent that helps with tasks
AGENT_CAPABILITIES=conversation,analysis,research

# Server Configuration
PORT=7000
PUBLIC_URL=http://localhost:7000

# Mumbai Hacks Registry
REGISTRY_URL=https://mumbaihacksindex.chat39.com

# LLM Provider (anthropic, openai, or gemini)
LLM_PROVIDER=anthropic

# Tunnel Configuration (for public deployment)
ENABLE_TUNNEL=true
NGROK_AUTH_TOKEN=your-ngrok-auth-token  # Get from https://dashboard.ngrok.com

# Optional: MCP Tools (LLM provider agnostic)
SMITHERY_API_KEY=your-smithery-api-key
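The templates below read all of these variables with os.getenv, so it's worth failing fast on a broken environment before starting the agent. A minimal, stdlib-only sanity check (the `check_config` helper is illustrative, not part of NEST):

```python
import os

# At least one of these keys must be set (names match the .env above)
LLM_KEYS = ("ANTHROPIC_API_KEY", "OPENAI_API_KEY", "GOOGLE_API_KEY")

def check_config(env=os.environ) -> list:
    """Return a list of configuration problems, empty if the env looks usable."""
    problems = []
    if not any(env.get(k) for k in LLM_KEYS):
        problems.append("no LLM API key set")
    if not env.get("AGENT_ID"):
        problems.append("AGENT_ID is not set")
    if env.get("ENABLE_TUNNEL", "false").lower() == "true" and not env.get("NGROK_AUTH_TOKEN"):
        problems.append("ENABLE_TUNNEL=true but NGROK_AUTH_TOKEN is missing")
    return problems
```

Call `check_config()` right after `load_dotenv()` and print any problems before initializing the agent.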
3. Build Your Agent
Choose a template and customize the agent logic function
Agent Structure & Interface
Input: Your agent receives a message (str) and conversation_id (str)
Output: Return a string response that will be sent back to the user
Customize: Only modify the agent_logic() function - NEST handles everything else
Template 1: Basic Echo Agent (No LLM)
Simple rule-based agent for testing. Create my_agent.py:
#!/usr/bin/env python3
import os
from dotenv import load_dotenv
from nanda_core import NANDA
from pyngrok import ngrok

load_dotenv()

# ============================================
# CUSTOMIZE THIS FUNCTION FOR YOUR AGENT
# ============================================
def agent_logic(message: str, conversation_id: str) -> str:
    """
    Your agent's brain - customize this function only.

    Args:
        message: User's input text
        conversation_id: Unique conversation identifier

    Returns:
        str: Your agent's response
    """
    msg = message.lower()

    # Add your custom logic here
    if "hello" in msg or "hi" in msg:
        return f"Hello! I'm {os.getenv('AGENT_ID')}. How can I help?"
    elif "help" in msg:
        return "I can respond to greetings and questions. Try asking me something!"
    elif "?" in message:
        return f"Great question! Let me think... {message}"
    else:
        return f"You said: {message}. (Echo mode)"

# ============================================
# DO NOT MODIFY BELOW (NEST Infrastructure)
# ============================================
if __name__ == "__main__":
    port = int(os.getenv("PORT", "7000"))

    # Setup ngrok tunnel
    if os.getenv("ENABLE_TUNNEL", "false").lower() == "true":
        ngrok.set_auth_token(os.getenv("NGROK_AUTH_TOKEN"))
        tunnel = ngrok.connect(port, bind_tls=True)
        public_url = tunnel.public_url
        print(f"🌐 Tunnel URL: {public_url}")
    else:
        public_url = os.getenv("PUBLIC_URL", f"http://localhost:{port}")

    # Initialize NANDA agent
    nanda = NANDA(
        agent_id=os.getenv("AGENT_ID"),
        agent_logic=agent_logic,  # Your custom function
        port=port,
        public_url=public_url,
        registry_url=os.getenv("REGISTRY_URL"),
        enable_telemetry=True
    )

    print(f"🤖 Agent '{os.getenv('AGENT_ID')}' ready!")
    print(f"📡 Listening on port {port}")

    # Start the agent server
    nanda.start()

Template 2: LLM-Powered Agent (Claude/GPT/Gemini)
Use Claude, GPT, or Gemini for intelligent responses:
#!/usr/bin/env python3
import os
from dotenv import load_dotenv
from nanda_core import NANDA
from pyngrok import ngrok
# Choose your LLM client:
from anthropic import Anthropic  # or: from openai import OpenAI

load_dotenv()

# Initialize LLM client
llm_client = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))
# For OpenAI: llm_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# ============================================
# CUSTOMIZE THIS FUNCTION FOR YOUR AGENT
# ============================================
def agent_logic(message: str, conversation_id: str) -> str:
    """
    LLM-powered agent - customize the system prompt and model.
    """
    # CUSTOMIZE: Your agent's personality and instructions
    system_prompt = f"""You are {os.getenv('AGENT_ID')}, a helpful AI assistant.
Your specialization: {os.getenv('AGENT_SPECIALIZATION', 'general assistance')}
Your capabilities: {os.getenv('AGENT_CAPABILITIES', 'conversation')}
Be concise, friendly, and helpful in your responses."""

    try:
        # Call LLM (Claude example)
        response = llm_client.messages.create(
            model="claude-3-5-sonnet-20241022",  # CUSTOMIZE: model choice
            max_tokens=1024,
            system=system_prompt,
            messages=[{"role": "user", "content": message}]
        )
        return response.content[0].text

        # For OpenAI/GPT:
        # response = llm_client.chat.completions.create(
        #     model="gpt-4",
        #     messages=[
        #         {"role": "system", "content": system_prompt},
        #         {"role": "user", "content": message}
        #     ]
        # )
        # return response.choices[0].message.content
    except Exception as e:
        return f"Error processing request: {str(e)}"

# ============================================
# DO NOT MODIFY BELOW (NEST Infrastructure)
# ============================================
if __name__ == "__main__":
    port = int(os.getenv("PORT", "7000"))

    # Setup ngrok tunnel
    if os.getenv("ENABLE_TUNNEL", "false").lower() == "true":
        ngrok.set_auth_token(os.getenv("NGROK_AUTH_TOKEN"))
        tunnel = ngrok.connect(port, bind_tls=True)
        public_url = tunnel.public_url
        print(f"🌐 Tunnel URL: {public_url}")
    else:
        public_url = os.getenv("PUBLIC_URL", f"http://localhost:{port}")

    nanda = NANDA(
        agent_id=os.getenv("AGENT_ID"),
        agent_logic=agent_logic,
        port=port,
        public_url=public_url,
        registry_url=os.getenv("REGISTRY_URL"),
        enable_telemetry=True
    )

    print(f"🤖 LLM Agent '{os.getenv('AGENT_ID')}' ready!")
    nanda.start()

Template 3: Stateful Agent with Memory
Track conversation history and maintain state:
#!/usr/bin/env python3
import os
from dotenv import load_dotenv
from nanda_core import NANDA
from pyngrok import ngrok

load_dotenv()

# Conversation memory storage
conversation_history = {}

# ============================================
# CUSTOMIZE THIS FUNCTION FOR YOUR AGENT
# ============================================
def agent_logic(message: str, conversation_id: str) -> str:
    """
    Stateful agent that remembers conversation context.
    """
    # Initialize conversation history
    if conversation_id not in conversation_history:
        conversation_history[conversation_id] = []

    # Add current message to history
    conversation_history[conversation_id].append({
        "role": "user",
        "content": message
    })

    # CUSTOMIZE: Your stateful logic here
    history = conversation_history[conversation_id]
    # Count only user messages - history also stores the agent's replies,
    # so len(history) would skip the "second message" branch entirely
    user_messages = [m["content"] for m in history if m["role"] == "user"]
    message_count = len(user_messages)

    # Example: Count-based responses
    if message_count == 1:
        response = "Hello! This is our first message. What's your name?"
    elif message_count == 2:
        response = f"Nice to meet you! You said: {user_messages[0]}"
    else:
        response = f"This is message #{message_count}. Previous: {user_messages[-2]}"

    # Save agent response to history
    conversation_history[conversation_id].append({
        "role": "assistant",
        "content": response
    })

    return response

# ============================================
# DO NOT MODIFY BELOW
# ============================================
if __name__ == "__main__":
    port = int(os.getenv("PORT", "7000"))

    if os.getenv("ENABLE_TUNNEL", "false").lower() == "true":
        ngrok.set_auth_token(os.getenv("NGROK_AUTH_TOKEN"))
        tunnel = ngrok.connect(port, bind_tls=True)
        public_url = tunnel.public_url
        print(f"🌐 Tunnel URL: {public_url}")
    else:
        public_url = os.getenv("PUBLIC_URL", f"http://localhost:{port}")

    nanda = NANDA(
        agent_id=os.getenv("AGENT_ID"),
        agent_logic=agent_logic,
        port=port,
        public_url=public_url,
        registry_url=os.getenv("REGISTRY_URL"),
        enable_telemetry=True
    )

    print(f"🤖 Stateful Agent '{os.getenv('AGENT_ID')}' ready!")
    nanda.start()

🎯 What to Customize:
- agent_logic() function - Your agent's behavior and responses
- system_prompt (Template 2) - LLM personality and instructions
- model choice (Template 2) - claude-3-5-sonnet, gpt-4, gemini-pro, etc.
- .env variables - AGENT_ID, AGENT_NAME, AGENT_SPECIALIZATION
⚠️ Do not modify the NEST infrastructure code below the customization section!
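One caveat with Template 3: the in-memory conversation_history dict grows without bound as conversations get longer. A minimal way to cap per-conversation memory, sketched with collections.deque (the MAX_TURNS value and the remember/recall helpers are illustrative, not part of NEST):

```python
from collections import deque

MAX_TURNS = 20  # cap on stored messages per conversation - tune as needed
conversation_history = {}

def remember(conversation_id: str, role: str, content: str) -> None:
    """Append a message, silently evicting the oldest once MAX_TURNS is reached."""
    history = conversation_history.setdefault(conversation_id, deque(maxlen=MAX_TURNS))
    history.append({"role": role, "content": content})

def recall(conversation_id: str) -> list:
    """Return stored messages in the LLM 'messages' format ([] if unknown id)."""
    return list(conversation_history.get(conversation_id, []))
```

The same `recall()` output can be passed directly as the `messages` list in Template 2's LLM call if you want an LLM agent with memory.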
Want Complete Working Examples?
Check out production-ready agent examples with detailed READMEs in the NEST repository:
• LangChain agents with memory
• LangGraph multi-agent workflows
• CrewAI team coordination
• MCP tool integration examples
• Domain-specific agents (finance, healthcare, etc.)
4. Local Testing
Test your agent before deploying
Start Your Agent
python my_agent.py
Expected output:
🤖 Agent 'my-agent-001' starting...
📡 Port: 7000
🌐 Registry: https://mumbaihacksindex.chat39.com
🔌 Tunnel: true
🚀 Starting agent 'my-agent-001' on 0.0.0.0:7000
📊 Telemetry enabled for my-agent-001
🌐 Tunnel URL: https://abc123.ngrok.io
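If you prefer Python to cURL, the same /a2a endpoint can be driven with a short stdlib-only script. The payload shape mirrors the request used throughout this guide; `build_a2a_payload` and `send_message` are illustrative helper names, not part of NEST:

```python
import json
import urllib.request

def build_a2a_payload(text: str, conversation_id: str) -> dict:
    """Build a message body in the same shape as the cURL example."""
    return {
        "content": {"text": text, "type": "text"},
        "role": "user",
        "conversation_id": conversation_id,
    }

def send_message(url: str, text: str, conversation_id: str = "test-123") -> str:
    """POST one message to the agent's /a2a endpoint and return the raw body."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_a2a_payload(text, conversation_id)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read().decode("utf-8")
```

With the agent running locally: `send_message("http://localhost:7000/a2a", "Hello, agent!")`.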
Test with cURL
curl -X POST http://localhost:7000/a2a \
-H "Content-Type: application/json" \
-d '{
"content": {"text": "Hello, agent!", "type": "text"},
"role": "user",
"conversation_id": "test-123"
}'

5. Deploy with Tunnel
Make your agent publicly accessible
When ENABLE_TUNNEL=true, NEST automatically creates an ngrok tunnel to expose your agent to the internet.
How It Works:
- Your agent starts on localhost:7000
- NEST creates an ngrok tunnel: https://random-id.ngrok.io → localhost:7000
- Your agent registers the public tunnel URL with the Mumbai Hacks registry
- Users can access your agent through the tunnel
Verify Tunnel
Look for the tunnel URL in your agent logs:
🌐 Tunnel URL: https://abc123.ngrok.io
Test the tunnel:
curl -X POST https://abc123.ngrok.io/a2a \
-H "Content-Type: application/json" \
-d '{"content": {"text": "Hello via tunnel!", "type": "text"}, "role": "user", "conversation_id": "test"}'

6. Verify Registration
Check your agent is registered in the Mumbai Hacks index
Check Registry Listing
curl https://mumbaihacksindex.chat39.com/list
You should see your agent in the response.
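You can script this check as well. The sketch below only assumes that the /list response text contains your AGENT_ID somewhere; it deliberately avoids relying on a particular JSON schema (`list_url` and `agent_is_listed` are illustrative names):

```python
import urllib.request

def list_url(registry_url: str) -> str:
    """Normalize the registry base URL into its /list endpoint."""
    return registry_url.rstrip("/") + "/list"

def agent_is_listed(registry_url: str, agent_id: str) -> bool:
    """Fetch the registry listing and search the raw text for our agent id."""
    with urllib.request.urlopen(list_url(registry_url), timeout=15) as resp:
        return agent_id in resp.read().decode("utf-8", errors="replace")
```

For example: `agent_is_listed("https://mumbaihacksindex.chat39.com", "my-custom-agent-001")` after your agent has started.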
Access via Frontend
Once registered, your agent will appear on the Agent Gallery page.
View Agent Gallery

7. Register Your Team
Connect your agent with your team profile
After deploying, register your team for MumbaiHacks:
Register Team

This will link your agent to your team profile and enable you to showcase your project, problem statement, and team members.
Troubleshooting
Agent Won't Start
Issue: "Address already in use" or "Port XXXX is in use"
# Find and kill the process using the port
lsof -ti:7000 | xargs kill -9

# Or change PORT in your .env file
PORT=7001
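If you'd rather probe from Python than hunt with lsof, a small stdlib socket check can find a free port to put in .env (the helper names are illustrative):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) != 0

def find_free_port(start: int = 7000, tries: int = 50) -> int:
    """Return the first free port at or after `start`."""
    for port in range(start, start + tries):
        if port_is_free(port):
            return port
    raise RuntimeError("no free port found")
```

Note the connect-based check only detects listeners on the loopback interface; it's a quick sanity check, not a full audit.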
Agent Not Registering
• Check REGISTRY_URL is correct: https://mumbaihacksindex.chat39.com
• Verify tunnel is working: check for tunnel URL in logs
• Test registry manually: curl https://mumbaihacksindex.chat39.com/list
Tunnel Issues
• Install ngrok: brew install ngrok (macOS)
• Set auth token: ngrok authtoken YOUR_TOKEN
• Test manually: ngrok http 7000
Can't Access via Frontend
• Verify tunnel URL is publicly accessible
• Check CORS settings (handled by NEST automatically)
• Verify agent is still running: ps aux | grep my_agent.py
Additional Resources
NEST GitHub Repository (v1 branch)
Official NEST framework source code with multi-LLM support
Ready to Build? 🚀
Deploy your agent to appear in the gallery, test A2A communication, and showcase your work at MumbaiHacks!