2 min read | Saved February 14, 2026
Do you care about this?
This repo lets you query multiple large language models (LLMs) and see their individual responses side by side. It then has them review and rank each other's outputs, with a designated Chairman LLM providing the final answer. The project is a simple, local web app meant for exploration and comparison of LLMs.
If you do, here's more
The GitHub repository contains a local web app called the "LLM Council," which lets users query multiple language models (LLMs), such as OpenAI's GPT 5.1 and Google's Gemini 3.0 Pro, at once. Instead of relying on a single provider for answers, the app sends each query to a group of LLMs and runs a structured review process: it collects initial responses, has each model evaluate and rank the others' answers, and finally has a designated "Chairman" LLM compile everything into a single answer for the user.
The process unfolds in three stages. First, the user's query is sent to all LLMs, and their individual responses are displayed in a tabbed interface for easy comparison. Next, each LLM reviews the others' outputs and ranks them by accuracy and insight; the responses are anonymized so no model knows which provider produced which answer, reducing bias. In the final stage, the Chairman LLM synthesizes the ranked responses into a single coherent answer.
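The three stages can be sketched roughly as follows. This is a minimal illustration of the flow described above, not code from the repo: the function names (`run_council`, `ask`), the prompt wording, and the anonymization scheme are all assumptions.

```python
import random

def run_council(query, models, ask, chairman):
    """Hypothetical sketch of the council flow: answer, peer-rank, synthesize.

    `ask(model, prompt)` is a pluggable function that returns that model's
    text reply; in the real app it would be an API call per model.
    """
    # Stage 1: every council member answers the query independently.
    responses = {m: ask(m, query) for m in models}

    # Stage 2: shuffle and relabel the answers ("Response A", "Response B", ...)
    # so reviewers can't tell which provider wrote which, then have each
    # model rank them by accuracy and insight.
    order = list(models)
    random.shuffle(order)
    labels = {m: f"Response {chr(65 + i)}" for i, m in enumerate(order)}
    anonymized = "\n\n".join(f"{labels[m]}:\n{responses[m]}" for m in order)
    rankings = {
        m: ask(m, f"Rank these answers to '{query}' by accuracy and insight:\n\n{anonymized}")
        for m in models
    }

    # Stage 3: the chairman sees the answers and rankings and writes the
    # single final reply shown to the user.
    final = ask(
        chairman,
        f"Question: {query}\n\nAnswers:\n{anonymized}\n\nRankings:\n"
        + "\n".join(rankings.values()),
    )
    return responses, rankings, final
```

With a stub `ask` function the flow can be exercised offline; in the real app each `ask` would be a chat-completion request routed through OpenRouter.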
The project, described as a weekend hack, was built primarily for the developer's own exploration of LLMs while reading, and is explicitly a one-off effort with no promise of ongoing support or updates. To run the app, users need to create a .env file with their OpenRouter API key and edit the configuration file to choose which models sit on the council. The backend runs on FastAPI and Python, while the frontend is built with React and Vite, a deliberately simple stack for anyone who wants to run it locally.
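As a concrete illustration of that setup step, the configuration might look like the fragment below. The variable names (`OPENROUTER_API_KEY`, `COUNCIL_MODELS`, `CHAIRMAN_MODEL`) and the exact model identifiers are assumptions for illustration, not copied from the repo.

```python
# .env at the repo root -- hypothetical variable name:
#   OPENROUTER_API_KEY=sk-or-...

# Illustrative council configuration. Model IDs use OpenRouter's
# "provider/model" form; swap in whichever models you want on the council.
COUNCIL_MODELS = [
    "openai/gpt-5.1",
    "google/gemini-3.0-pro",
]
CHAIRMAN_MODEL = "google/gemini-3.0-pro"  # writes the final synthesized answer
```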