A cross-platform Electron desktop application featuring a simulated roundtable of AI-generated expert personas for GeoAI discussions. Each persona represents a unique ideological and technical stance on topics at the heart of the FOSS4G community.
- Multiple AI Personas: Engage with diverse expert perspectives including Maya Ríos (Indigenous data sovereignty advocate), Prof. Otto Reinhardt (spatial ontologist), and others
- Multiple LLM Providers: Support for Ollama, LM Studio, OpenAI, and any OpenAI-compatible API
- Easy Provider Switching: Quick presets for common setups with one-click testing
- Per-Persona Models: Assign different models to different personas for varied perspectives (see the sketch after this feature list)
- 🎤 Voice Input (NEW!): Local speech-to-text using Whisper.cpp
- Speak your questions instead of typing
- Completely private - runs locally on your machine
- Fast and accurate transcription
- See VOICE_INPUT_QUICK_START.md
- High-Quality Text-to-Speech: Multiple TTS providers including:
- Piper (local, high-quality neural voices) - Recommended!
- Web Speech API (browser built-in)
- Azure Neural TTS
- ElevenLabs
- Distinct Voices Per Persona: Each character has a unique, carefully selected voice
- Cross-Platform: Runs on Windows, macOS, and Linux
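
Per-persona model overrides (noted in the feature list above) reduce to a mapping with a default fallback. This sketch uses hypothetical type and field names, not the app's actual data model:

```typescript
// Hypothetical sketch of per-persona model overrides; names are illustrative.
interface Persona {
  id: string;
  name: string;
  stance: string;  // the persona's ideological/technical position
  model?: string;  // optional per-persona model override
}

interface PanelConfig {
  defaultModel: string;
  personas: Persona[];
}

const panel: PanelConfig = {
  defaultModel: "llama3.1",
  personas: [
    { id: "maya", name: "Maya Ríos", stance: "Indigenous data sovereignty" },
    { id: "otto", name: "Prof. Otto Reinhardt", stance: "spatial ontology", model: "mistral" },
  ],
};

// Resolve a persona's model, falling back to the panel default.
const modelFor = (p: Persona): string => p.model ?? panel.defaultModel;
```

Keeping the override optional means the app works out of the box with a single default model, while assigning varied models per persona sharpens the contrast between perspectives.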
```
geoai_metapanel/
├── electron/          # Electron main process and preload scripts
│   ├── main.ts        # Main process entry point
│   └── preload.ts     # Preload script for secure IPC
├── src/               # React renderer process
│   ├── App.tsx        # Main application component
│   ├── components/    # React components
│   ├── data/          # Persona definitions
│   ├── services/      # API services (Ollama, TTS)
│   ├── main.tsx       # Renderer entry point
│   └── styles.css     # Application styles
├── images/            # Persona avatars and assets
├── build/             # Build resources (icons, entitlements)
├── dist/              # Vite build output (renderer)
├── dist-electron/     # Compiled Electron main process
├── release/           # electron-builder output
├── index.html         # HTML entry point
├── package.json       # Project configuration
├── tsconfig.json      # TypeScript configuration
└── vite.config.ts     # Vite bundler configuration
```
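
The `electron/` vs. `src/` split follows Electron's two-process model: the renderer gets no direct Node access, and `preload.ts` exposes only a narrow IPC surface. A minimal sketch of that pattern (the `api` key and `llm:chat` channel are illustrative, not the project's actual code):

```typescript
// electron/preload.ts -- illustrative sketch of a secure IPC bridge.
import { contextBridge, ipcRenderer } from "electron";

contextBridge.exposeInMainWorld("api", {
  // Forward a chat prompt to the main process; "llm:chat" is a hypothetical channel.
  chat: (prompt: string): Promise<string> => ipcRenderer.invoke("llm:chat", prompt),
});
```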
- Node.js: v18 or higher
- npm: v9 or higher
- LLM Provider (choose one or more):
- Ollama: Install from ollama.ai for local LLM inference (recommended for getting started)
- LM Studio: Download from lmstudio.ai for GUI-based local model management
- OpenAI API: Sign up at platform.openai.com for cloud-based inference
- Other: Any OpenAI-compatible API endpoint (a request sketch follows these prerequisites)
- TTS (Optional):
  - Piper (recommended): `brew install piper-tts` for high-quality local voices
  - Or use the built-in browser TTS (no installation needed)
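
"OpenAI-compatible" above means the server accepts the standard `/chat/completions` request shape. A minimal sketch in TypeScript, with an example base URL (Ollama and LM Studio both expose this route locally) and an assumed model name:

```typescript
// Minimal sketch of an "OpenAI-compatible" chat request.
// The base URL and model are examples, not requirements of the app.
async function askPanelist(question: string): Promise<string> {
  const baseUrl = "http://localhost:11434/v1"; // e.g. Ollama's OpenAI-compatible route
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",
      messages: [{ role: "user", content: question }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Any provider that answers this request can back the panel, which is why the fourth option is open-ended.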
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd geoai_metapanel
   ```

2. Install dependencies:

   ```bash
   npm install
   ```
3. Set up your preferred LLM provider:

   Option A - Ollama (recommended for beginners):

   ```bash
   # Install Ollama from https://ollama.ai
   ollama pull llama3.1
   ollama serve
   ```

   Option B - LM Studio:
   - Download and install LM Studio
   - Download models through the GUI
   - Start the local server (Local Server tab)

   Option C - OpenAI:
   - Sign up and get an API key
   - Configure in the app's Settings panel

See LLM_SETUP_GUIDE.md for detailed setup instructions.
Run the application in development mode with hot-reload:

```bash
npm run dev
```

This will:
- Start the Vite development server for the React renderer
- Compile the Electron main process
- Launch the Electron application with DevTools open
```bash
npm run build
```

This creates a distributable package for your current platform in the `release/` directory.
To build for macOS:

```bash
npm run build:mac
```

Creates:
- `.dmg` installer (universal binary for Intel and Apple Silicon)
- `.zip` archive

Requirements:
- macOS 10.13 or higher
- Xcode Command Line Tools

Code Signing (Optional): To sign the macOS app, set these environment variables:

```bash
export CSC_LINK=/path/to/certificate.p12
export CSC_KEY_PASSWORD=your_password
export [email protected]
export APPLE_ID_PASSWORD=app-specific-password
```

To build for Windows:

```bash
npm run build:win
```

Creates:
- `.exe` NSIS installer (64-bit and 32-bit)
- Portable `.exe` (64-bit)

Requirements:
- Windows 7 or higher
- Can be built from macOS/Linux using Wine

Code Signing (Optional): To sign the Windows app, set these environment variables:

```bash
export CSC_LINK=/path/to/certificate.pfx
export CSC_KEY_PASSWORD=your_password
```

To build for Linux:

```bash
npm run build:linux
```

Creates:
- `.AppImage` (portable)
- `.deb` package (Debian/Ubuntu)

Requirements:
- Ubuntu 18.04 or equivalent
For testing the packaged app without creating installers:

```bash
npm run build:dir
```

Configure your LLM provider in the app's Settings panel:
- Choose Provider: Select from Ollama, LM Studio, OpenAI, or Custom
- Test Connection: Click the test button to verify your setup
- Refresh Models: Load available models from your provider
- Set Default Model: Choose which model to use by default
- Per-Persona Overrides: Optionally assign different models to different personas
For detailed setup instructions for each provider, see LLM_SETUP_GUIDE.md.
Quick Setup Examples:
- Ollama: Base URL `http://localhost:11434`, Model `llama3.1`
- LM Studio: Base URL `http://localhost:1234`, Model `local-model`
- OpenAI: Base URL `https://api.openai.com/v1`, Model `gpt-4`, API Key required
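
Each preset reduces to a base URL, a model name, and an optional API key. The field names in this sketch are assumptions, not the app's actual settings schema:

```typescript
// Illustrative presets mirroring the quick-setup values above.
type ProviderPreset = { baseUrl: string; model: string; apiKey?: string };

const presets: Record<string, ProviderPreset> = {
  ollama:   { baseUrl: "http://localhost:11434",    model: "llama3.1" },
  lmstudio: { baseUrl: "http://localhost:1234",     model: "local-model" },
  openai:   { baseUrl: "https://api.openai.com/v1", model: "gpt-4", apiKey: "YOUR_API_KEY" },
};
```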
NEW! Use your voice to ask questions instead of typing.
Quick Setup:

```bash
bash scripts/setup-whisper.sh
```

Then click the 🎤 button in the app to start recording. See VOICE_INPUT_QUICK_START.md for details; an invocation sketch follows the feature list below.
Features:
- 🔒 100% Private: All processing happens locally
- 🚀 Fast: 1-3 second transcription time
- 🎯 Accurate: Uses OpenAI's Whisper model
- 💰 Free: No API costs
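
Under the hood, a whisper.cpp transcription is a local binary invocation from the Electron main process. The binary and model paths below are assumptions; check what `scripts/setup-whisper.sh` actually installs:

```typescript
// Sketch: shell out to a local whisper.cpp build for transcription.
// Paths are assumed, not guaranteed by the setup script.
import { execFile } from "node:child_process";

function transcribe(wavPath: string): Promise<string> {
  return new Promise((resolve, reject) => {
    execFile(
      "./whisper.cpp/main", // assumed binary location
      ["-m", "./whisper.cpp/models/ggml-base.en.bin", "-f", wavPath, "--no-timestamps"],
      (err, stdout) => (err ? reject(err) : resolve(stdout.trim())),
    );
  });
}
```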
Choose from multiple TTS providers:
- Piper (Recommended): Local, high-quality neural voices - see PIPER_TTS_SETUP.md
- Web Speech API: Built-in browser TTS (free, no setup)
- Azure Neural: High-quality neural voices (requires Azure subscription)
- ElevenLabs: Premium AI voices (requires ElevenLabs API key)
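
Of these, the Web Speech API is the only zero-install option. A minimal sketch using the standard browser API:

```typescript
// Browser-built-in TTS via the Web Speech API; no setup or keys required.
function speak(text: string, voiceName?: string): void {
  const utterance = new SpeechSynthesisUtterance(text);
  // Voice availability varies by OS/browser; fall back to the default voice.
  const voice = window.speechSynthesis.getVoices().find((v) => v.name === voiceName);
  if (voice) utterance.voice = voice;
  window.speechSynthesis.speak(utterance);
}

speak("Welcome to the GeoAI roundtable.");
```

Pinning a fixed `voiceName` per persona is one way to keep each character's voice stable across sessions.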
To customize the application icon:
1. Create icons in the following formats:
   - macOS: `build/icon.icns` (1024x1024 PNG converted to ICNS)
   - Windows: `build/icon.ico` (256x256 PNG converted to ICO)
   - Linux: `build/icon.png` (512x512 PNG)

2. Use online tools or command-line utilities:

   ```bash
   # macOS: Convert PNG to ICNS
   iconutil -c icns icon.iconset
   # Windows: Use ImageMagick
   convert icon.png -define icon:auto-resize=256,128,64,48,32,16 icon.ico
   ```
General Steps:
- Click Test Connection in Settings to diagnose the issue
- Verify the base URL is correct (include `http://` or `https://`)
- Check that your LLM server is running
- Try the Refresh button to reload available models
Provider-Specific:
Ollama:
- Ensure Ollama is running: `ollama serve`
- Check the base URL in Settings: `http://localhost:11434`
- Verify models are installed: `ollama list`
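
The same checks can be done programmatically: Ollama's REST API lists installed models at `GET /api/tags`, the same data `ollama list` prints:

```typescript
// Quick health check against Ollama's REST API.
async function checkOllama(baseUrl = "http://localhost:11434"): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);
  const { models } = (await res.json()) as { models: { name: string }[] };
  return models.map((m) => m.name); // empty array => run `ollama pull <model>`
}
```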
LM Studio:
- Open LM Studio and go to the Local Server tab
- Click "Start Server"
- Verify the port matches your base URL (usually `1234`)
- Make sure a model is loaded
OpenAI:
- Verify your API key is correct
- Check your account has credits/billing enabled
- Ensure you have internet connectivity
For more troubleshooting help, see LLM_SETUP_GUIDE.md.
- macOS: Install Xcode Command Line Tools: `xcode-select --install`
- Windows: Ensure you have Visual Studio Build Tools or equivalent
- Linux: Install required dependencies:

  ```bash
  sudo apt-get install -y libgtk-3-0 libnotify4 libnss3 libxss1 libxtst6 xdg-utils libatspi2.0-0 libdrm2 libgbm1 libxcb-dri3-0
  ```

If you encounter SSL certificate errors during `npm install`, you may need to configure npm:

```bash
npm config set strict-ssl false
```

Or use a corporate proxy if behind a firewall.
- Electron: Cross-platform desktop framework
- React: UI framework
- TypeScript: Type-safe JavaScript
- Vite: Fast build tool and dev server
- electron-builder: Application packaging and distribution
- Ollama: Local LLM inference
MIT
Contributions are welcome! Please feel free to submit a Pull Request.
For issues and questions, please open an issue on the GitHub repository.