Welcome to the Antigravity Workspace Template. This is a production-grade starter kit for building autonomous agents on the Google Antigravity platform, fully compliant with the Antigravity Official Documentation, and proudly "Anti-LangChain" thanks to its minimal, transparent architecture.
In an era rich with AI IDEs, I wanted to achieve an enterprise-grade architecture with just Clone -> Rename -> Prompt.
This project leverages the IDE's context awareness (via .cursorrules and .antigravity/rules.md) to embed a complete Cognitive Architecture directly into the project files.
When you open this project, your IDE is no longer just an editor; it transforms into a "Knowledgeable" Architect.
When using Google Antigravity or Cursor for AI development, I found a pain point:
IDEs and models are powerful, but "empty projects" are weak.
Every time we start a new project, we repeat boring configurations:
- "Should my code go in src or app?"
- "How do I define tool functions so Gemini recognizes them?"
- "How do I make the AI remember context?"
This repetitive labor is a waste of creativity. My ideal workflow is: Git Clone -> IDE already knows what to do.
So I created this project: Antigravity Workspace Template.
This workspace enforces the Artifact-First protocol. The Agent does not just write code; it produces tangible outputs (Artifacts) for every complex task.
- Planning: `artifacts/plan_[task_id].md` is created before coding.
- Evidence: Logs and test outputs are saved to `artifacts/logs/`.
- Visuals: UI changes generate screenshot artifacts.
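As a rough illustration of the convention, here is a minimal sketch of a helper that writes such a plan artifact; the function name and plan fields are hypothetical, not part of the template's documented API:

```python
# Hypothetical helper: write a plan artifact before any code is touched.
from datetime import datetime, timezone
from pathlib import Path


def write_plan_artifact(task_id: str, goal: str, steps: list[str]) -> Path:
    """Creates artifacts/plan_<task_id>.md so the plan exists before coding starts."""
    plan_path = Path("artifacts") / f"plan_{task_id}.md"
    plan_path.parent.mkdir(parents=True, exist_ok=True)
    lines = [
        f"# Plan: {task_id}",
        f"_Created: {datetime.now(timezone.utc).isoformat()}_",
        "",
        f"**Goal:** {goal}",
        "",
        "## Steps",
        *[f"{i}. {step}" for i, step in enumerate(steps, start=1)],
    ]
    plan_path.write_text("\n".join(lines), encoding="utf-8")
    return plan_path
```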
The agent follows a strict "Think-Act-Reflect" loop, simulating the cognitive process of Gemini 3.
```mermaid
sequenceDiagram
participant User
participant Agent as GeminiAgent
participant Memory
participant Tools
participant Artifacts
User->>Agent: "Refactor Authentication"
activate Agent
Agent->>Artifacts: Create Implementation Plan
Note over Agent: <thought> Deep Think Process </thought>
Agent->>Agent: Formulate Strategy
Agent->>Tools: Execute Tool (code_edit)
activate Tools
Tools-->>Agent: Result
deactivate Tools
Agent->>Artifacts: Save Logs/Evidence
Agent-->>User: Final Report (Walkthrough)
deactivate Agent
```
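In code, the loop reduces to something like the sketch below; this is a simplified ReAct-style skeleton, not the actual `GeminiAgent` from `src/agent.py`, and `call_model` / `run_tool` are stand-ins for the model call and tool dispatcher:

```python
# Simplified Think-Act-Reflect skeleton (illustrative, not the template's real agent loop).
def run_agent(task: str, call_model, run_tool, max_steps: int = 10) -> str:
    """call_model(history) returns a dict with 'thought' plus either 'tool_call' or 'final_answer'."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = call_model(history)                      # Think: decide the next move
        history.append({"role": "assistant", "content": step["thought"]})
        if "final_answer" in step:                      # Reflect: task complete, report back
            return step["final_answer"]
        result = run_tool(step["tool_call"])            # Act: execute the chosen tool
        history.append({"role": "tool", "content": result})
    return "Step limit reached without a final answer."
```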
- Infinite Memory Engine: Recursive summarization automatically compresses history, so context limits are a thing of the past (see the memory sketch after this list).
- Universal Tool Protocol: Generic ReAct pattern. Register any Python function in `available_tools` and the Agent learns to use it.
- Gemini Native: Optimized for Gemini 2.0 Flash's speed and function-calling capabilities.
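The memory engine's summary-buffer idea can be sketched as follows; the class and method names are illustrative, not the actual `src/memory.py` API, and the summarizer is injected as a plain function (for example, a Gemini call that condenses text):

```python
# Illustrative summary-buffer memory: older turns are folded into a running summary.
from typing import Callable


class SummaryBufferMemory:
    def __init__(self, summarize: Callable[[str], str], max_turns: int = 20):
        self.summarize = summarize          # e.g. an LLM call that condenses a transcript
        self.max_turns = max_turns
        self.summary = ""
        self.turns: list[dict] = []

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})
        if len(self.turns) > self.max_turns:
            # Compress the oldest half of the buffer into the running summary.
            cutoff = self.max_turns // 2
            old, self.turns = self.turns[:cutoff], self.turns[cutoff:]
            transcript = "\n".join(f"{t['role']}: {t['content']}" for t in old)
            self.summary = self.summarize(f"{self.summary}\n{transcript}".strip())

    def context(self) -> str:
        recent = "\n".join(f"{t['role']}: {t['content']}" for t in self.turns)
        return f"Summary of earlier conversation:\n{self.summary}\n\nRecent turns:\n{recent}"
```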
- Install Dependencies: `pip install -r requirements.txt`
- Run the Agent: `python src/agent.py`
- Build & Run: `docker-compose up --build`
.
├── .antigravity/          # Official Antigravity Config
│   └── rules.md           # Agent Rules & Permissions
├── artifacts/             # Agent Outputs (Plans, Logs, Visuals)
├── .context/              # AI Knowledge Base
├── .github/               # CI/CD Workflows
├── src/                   # Source Code
│   ├── agent.py           # Main Agent Logic
│   ├── config.py          # Settings Management
│   ├── memory.py          # JSON Memory Manager
│   └── tools/             # Agent Tools
├── tests/                 # Test Suite
├── .cursorrules           # Compatibility Pointer
├── Dockerfile             # Production Build
├── docker-compose.yml     # Local Dev Setup
└── mission.md             # Agent Objective
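For `config.py`, settings management is commonly handled with `pydantic-settings`; the sketch below is an assumption about the pattern rather than the template's actual implementation, and the field names are illustrative:

```python
# Illustrative settings pattern with pydantic-settings; field names are assumptions.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    gemini_api_key: str = ""              # read from GEMINI_API_KEY in .env or the environment
    model_name: str = "gemini-2.0-flash"  # matches the Gemini Native feature above
    mcp_enabled: bool = False             # mirrors the MCP_ENABLED flag described later


settings = Settings()
```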
Stop writing long system prompts. This workspace pre-loads the AI's cognitive architecture for you.
Treat this repository as a factory mold. Clone it, then rename the folder to your project name.
git clone https://github.com/study8677/antigravity-workspace-template.git my-agent-project
cd my-agent-project
# Now you are ready. No setup required.
Open the folder in Cursor or Google Antigravity.
- Watch: The IDE automatically detects `.cursorrules`.
- Load: The AI silently ingests the "Antigravity Expert" persona from `.antigravity/rules.md`.
You don't need to tell the AI to "be careful" or "use the src folder". It's already brainwashed to be a Senior Engineer.
Old Way (Manual Prompting):
"Please write a snake game. Make sure to use modular code. Put files in src. Don't forget comments..."
The Antigravity Way:
"Build a snake game."
The AI will automatically:
- Pause: "According to protocols, I must plan first."
- Document: Generates `artifacts/plan_snake.md`.
- Build: Writes modular code into `src/game/` with full Google-style docstrings.
- Phase 1: Foundation (Scaffold, Config, Memory)
- Phase 2: DevOps (Docker, CI/CD)
- Phase 3: Antigravity Compliance (Rules, Artifacts)
- Phase 4: Advanced Memory (Summary Buffer Implemented ✅)
- Phase 5: Cognitive Architecture (Generic Tool Dispatch Implemented ✅)
- Phase 6: Dynamic Discovery (Auto Tool & Context Loading ✅)
- Phase 7: Multi-Agent Swarm (Router-Worker Orchestration ✅)
- Phase 8: MCP Integration (Model Context Protocol ✅) - Implemented by @devalexanderdaza
- Phase 9: Enterprise Core (The "Agent OS" Vision)
- Sandbox Environment: Safe code execution (e.g., E2B or local Docker) for high-risk operations (a sandbox sketch follows below).
- Orchestrated Flows: Structured, parallel execution pipelines (DAGs) for complex tasks.
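One possible shape for the sandbox item above, using the local-Docker option: this is a hedged sketch only, since the template does not ship it yet, and the image name and resource limits are arbitrary choices:

```python
# Hypothetical sandbox runner: execute untrusted Python inside a throwaway Docker container.
import subprocess


def run_in_sandbox(code: str, timeout: int = 30) -> str:
    """Runs a code snippet in an isolated container with no network access."""
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",      # no outbound network
            "--memory", "256m",       # cap memory
            "--cpus", "0.5",          # cap CPU
            "python:3.12-slim",
            "python", "-c", code,
        ],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout if result.returncode == 0 else f"ERROR:\n{result.stderr}"
```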
Connect to any MCP server! The agent now supports the Model Context Protocol, enabling seamless integration with external tools and services.
MCP is an open protocol that standardizes how AI applications connect to external data sources and tools. With MCP integration, your Antigravity agent can:
- Connect to multiple MCP servers simultaneously
- Use any tools exposed by MCP servers
- Access databases, APIs, filesystems, and more
- All transparently integrated with local tools
1. Enable MCP in your `.env`: `MCP_ENABLED=true`
2. Configure servers in `mcp_servers.json`:
{
"servers": [
{
"name": "github",
"transport": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"enabled": true,
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
}
}
]
}
3. Run the agent: `python src/agent.py`
The agent will automatically:
- Connect to configured MCP servers
- Discover available tools
- Register them alongside local tools (see the client sketch below)
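Under the hood, connecting to a stdio server and discovering its tools with the MCP Python SDK looks roughly like this; a sketch assuming the `mcp` package, with the GitHub server parameters mirroring the config above and the token value left as a placeholder:

```python
# Sketch: connect to one stdio MCP server and list its tools (assumes the `mcp` package).
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def list_github_tools() -> None:
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-github"],
        env={"GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)


if __name__ == "__main__":
    asyncio.run(list_github_tools())
```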
graph TD
Agent[GeminiAgent] --> LocalTools[Local Tools]
Agent --> MCPManager[MCP Client Manager]
MCPManager --> Server1[GitHub MCP]
MCPManager --> Server2[Database MCP]
MCPManager --> Server3[Custom MCP]
LocalTools --> |Merged| AllTools[All Available Tools]
MCPManager --> |Merged| AllTools
| Transport | Description | Use Case |
|---|---|---|
| `stdio` | Standard I/O | Local servers, CLI tools |
| `http` | Streamable HTTP | Remote servers, cloud services |
| `sse` | Server-Sent Events | Legacy HTTP servers |
The agent includes helper tools for MCP management:
# List all connected MCP servers
list_mcp_servers()
# List available MCP tools
list_mcp_tools()
# Get help for a specific tool
get_mcp_tool_help("mcp_github_create_issue")
# Check server health
mcp_health_check()
The `mcp_servers.json` file includes ready-to-use configurations for:
- Filesystem: Local file operations
- GitHub: Repository management
- PostgreSQL: Database access
- Brave Search: Web search
- Memory: Persistent storage
- Puppeteer: Browser automation
- Slack: Workspace integration
Just enable the ones you need and add your API keys!
You can also create your own MCP servers using the MCP Python SDK:
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("My Custom Server")
@mcp.tool()
def my_custom_tool(param: str) -> str:
"""A custom tool for your agent."""
return f"Processed: {param}"
if __name__ == "__main__":
    mcp.run()
Then add it to `mcp_servers.json`:
{
"name": "my-server",
"transport": "stdio",
"command": "python",
"args": ["path/to/my_server.py"],
"enabled": true
}
No more manual imports! The agent now automatically discovers tools and context files:
Drop any Python file into src/tools/ and the agent instantly knows how to use it:
# src/tools/my_custom_tool.py
def analyze_sentiment(text: str) -> str:
"""Analyzes the sentiment of given text.
Args:
text: The text to analyze.
Returns:
Sentiment score and analysis.
"""
# Your implementation
return "Positive sentiment detected!"That's it! No need to edit agent.py. Just restart and the tool is available.
Add knowledge files to .context/ and they're automatically injected:
echo "# Project Rules\nUsefriendly language." > .context/project_rules.mdThe agent will follow these rules immediately on next run.
Collaborate at scale! The swarm enables multiple specialist agents to work together:
graph TD
User[User Task] --> Router[Router Agent]
Router --> Coder[Coder Agent]
Router --> Reviewer[Reviewer Agent]
Router --> Researcher[Researcher Agent]
Coder --> Router
Reviewer --> Router
Researcher --> Router
Router --> Result[Synthesized Result]
Specialist Agents:
- Router: Analyzes tasks, delegates to specialists, synthesizes results
- Coder: Writes clean, well-documented code
- Reviewer: Checks quality, security, best practices
- Researcher: Gathers information and insights
Run the interactive demo:
python -m src.swarm_demo
Use in your code:
from src.swarm import SwarmOrchestrator
swarm = SwarmOrchestrator()
result = swarm.execute("Build a calculator and review it for security")
print(result)
Example output:
[Router] Analyzing task...
[Router → Coder] Build a calculator
[Coder] Creating calculator implementation...
✅ [Coder] Done!
[Router → Reviewer] Review for security
[Reviewer] Analyzing code...
✅ [Reviewer] Review complete!
Task Completed!
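For intuition, the router-worker pattern behind this output can be reduced to a few lines; this is a hypothetical simplification, not the actual `SwarmOrchestrator` from `src/swarm.py`, and `ask` stands in for a model call under a given persona:

```python
# Simplified router-worker orchestration; `ask(persona, message)` stands in for a model call.
from typing import Callable

PERSONAS = {
    "coder": "You write clean, well-documented code.",
    "reviewer": "You check code for quality, security, and best practices.",
    "researcher": "You gather relevant information and insights.",
}


def run_swarm(task: str, ask: Callable[[str, str], str]) -> str:
    """Router picks specialists, each works on the task, then the router synthesizes."""
    router_prompt = (
        "You are a router agent. Reply with a comma-separated subset of: " + ", ".join(PERSONAS)
    )
    chosen = [n.strip() for n in ask(router_prompt, task).split(",") if n.strip() in PERSONAS]
    results = {name: ask(PERSONAS[name], task) for name in (chosen or ["coder"])}
    combined = "\n\n".join(f"[{name}] {text}" for name, text in results.items())
    return ask("You are a router agent. Synthesize the specialists' work into one answer.", combined)
```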
A massive thank you to the community members who help build this project:
- @devalexanderdaza (First Contributor!)
  - Implemented demo tools script and enhanced agent functionality.
  - Proposed the "Agent OS" Roadmap (MCP, Sandbox, Orchestration).
- @Subham-KRLX
  - Added dynamic tools and context loading (Fixes #4)
  - New feature: Add multi-agent cluster protocol (Fixes #6)
Want to contribute? Check out our Issues page!
We value ideas as much as code! We are currently brainstorming the architecture for the next phase on the roadmap, Phase 9: Enterprise Core (the "Agent OS" vision). If you provide a solid architectural suggestion or a detailed design that gets adopted, you will be added to our README as a Contributor.
Don't hesitate to share your thoughts in the Issues, even if you don't have time to write the implementation.