Fred is a production-ready platform for building and operating multi-agent AI applications. It is designed around two complementary goals:
- A complete runtime platform — auth, session management, document ingestion, team access control, observability, and Kubernetes-ready deployment, all integrated and ready to use.
- A structured agent authoring SDK — a constrained, typed authoring model (v2 SDK) that lets domain engineers write reliable agents without having to design a distributed runtime from scratch.
Fred is composed of four components:
- a Python agentic backend (`agentic-backend`) — multi-agent runtime, session orchestration, streaming, MCP tool integration
- a Python knowledge flow backend (`knowledge-flow-backend`) — document ingestion, vectorization, and retrieval
- a Python control plane backend (`control-plane-backend`) — team and user management, access policy, agent registry
- a React frontend (`frontend`) — chat interface and agent management UI
The repository also includes an academy with sample MCP servers and agents to get started quickly.
See the project site: https://fredk8.dev
Contents:
- Getting started
- k3d Local Deployment
- Production mode
- Agent authoring (v2 SDK)
- Agent coding academy
- Advanced configuration
- Core Architecture and Licensing Clarity
- Documentation
- Contributing
- Community
- Contacts
To ensure a smooth first-time experience, Fred's maintainers designed the Dev Container and native startup modes to require no additional external components (apart, of course, from access to LLM APIs).
By default, using either Dev Container or native startup:
- Fred stores all data locally using SQLite for SQL/metadata and ChromaDB for vectors/embeddings. (DuckDB has been deprecated.) Data includes metrics, chat conversations, document uploads, and embeddings.
- Authentication and authorization are mocked.
Note:
Across all setup modes, a common requirement is access to Large Language Model (LLM) APIs via a model provider. Supported options include:
- Public OpenAI APIs: Connect using your OpenAI API key.
- Private Ollama Server: Host open-source models such as Mistral, Qwen, Gemma, and Phi locally or on a shared server.
- Private Azure AI Endpoints: Connect using your Azure OpenAI key.
Detailed instructions for configuring your chosen model provider are provided below.
Choose how you want to prepare Fred's development environment:
Details
Prefer an isolated environment with everything pre-installed?
The Dev Container setup takes care of all dependencies related to agentic backend, knowledge-flow backend, and frontend components.
| Tool | Purpose |
|---|---|
| Docker / Docker Desktop | Runs the container |
| VS Code | Primary IDE |
| Dev Containers extension (ms-vscode-remote.remote-containers) | Opens the repo inside the container |
- Clone (or open) the repository in VS Code.
- Press F1 → Dev Containers: Reopen in Container.
When the terminal prompt appears, the workspace is ready, but you still need to run the different services with `make run` as specified in the next section. Ports 8000 (agentic backend), 8111 (Knowledge Flow backend), and 5173 (frontend, Vite) are automatically forwarded to the host.
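As a quick sanity check from the host, you can probe the forwarded ports once the services are running. This is an illustrative snippet, not part of Fred's tooling:

```python
import socket

def port_open(port: int, host: str = "localhost") -> bool:
    """Return True if something is listening on host:port."""
    with socket.socket() as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# Forwarded ports: agentic backend, Knowledge Flow backend, frontend (Vite)
for port in (8000, 8111, 5173):
    status = "listening" if port_open(port) else "not listening yet"
    print(f"port {port}: {status}")
```

If a port reports "not listening yet", the corresponding service has not started (or its `make` target failed).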
- Rebuild the container: F1 → Dev Containers: Rebuild Container
- Dependencies feel stale? Delete the relevant `.venv` or `frontend/node_modules` inside the container, then rerun the associated `make` target.
- Need to change API keys or models? Update the backend `.env` files inside the container and restart the relevant service. See Model configuration for more details.
Details
Note: this native mode applies only to Unix-like operating systems (e.g., macOS or Linux).
First, make sure you have all the requirements installed:
| Tool | Type | Version | Install hint |
|---|---|---|---|
| Pyenv | Python installer | latest | Pyenv installation instructions |
| Python | Programming language | 3.12.8 | Use pyenv install 3.12.8 |
| python3-venv | Python venv module/package | matching | Bundled with Python 3 on most systems; otherwise apt install python3-venv (Debian/Ubuntu) |
| nvm | Node installer | latest | nvm installation instructions |
| Node.js | Programming language | 22.13.0 | Use nvm install 22.13.0 |
| Make | Utility | system | Install via system package manager (e.g., apt install make, brew install make) |
| yq | Utility | system | Install via system package manager |
| SQLite | Local RDBMS engine | ≥ 3.35.0 | Install via system package manager |
| Pandoc | Document converter (DOCX ingestion) | 2.9.2.1 | Pandoc installation instructions |
| LibreOffice | Headless doc converter (PPTX → PDF via the soffice command) | system | LibreOffice installation instructions |
| libmagic | File-type detection (by content) | system | Install via system package manager (e.g., apt install libmagic1, brew install libmagic) |
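The SQLite version floor from the table above can be checked from Python's bundled bindings. A small sketch, not Fred tooling:

```python
import sqlite3

# Fred's native setup expects SQLite >= 3.35.0 (see the requirements table)
required = (3, 35, 0)
found = tuple(int(part) for part in sqlite3.sqlite_version.split("."))

if found >= required:
    print(f"SQLite {sqlite3.sqlite_version}: OK")
else:
    print(f"SQLite {sqlite3.sqlite_version}: too old, upgrade to >= 3.35.0")
```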
Dependency details
```mermaid
graph TD
    subgraph FredComponents["Fred Components"]
        style FredComponents fill:#b0e57c,stroke:#333,stroke-width:2px %% Green Color
        Agentic["agentic-backend"]
        Knowledge["knowledge-flow-backend"]
        Frontend["frontend"]
    end
    subgraph ExternalDependencies["External Dependencies"]
        style ExternalDependencies fill:#74a3d9,stroke:#333,stroke-width:2px %% Blue Color
        Venv["python3-venv"]
        Python["Python 3.12.8"]
        SQLite["SQLite"]
        Pandoc["Pandoc"]
        libmagic["libmagic"]
        Pyenv["Pyenv (Python installer)"]
        Node["Node 22.13.0"]
        NVM["nvm (Node installer)"]
    end
    subgraph Utilities["Utilities"]
        style Utilities fill:#f9d5e5,stroke:#333,stroke-width:2px %% Pink Color
        Make["Make utility"]
        Yq["yq (YAML processor)"]
    end
    Agentic -->|depends on| Python
    Agentic -->|depends on| Knowledge
    Agentic -->|depends on| Venv
    Knowledge -->|depends on| Python
    Knowledge -->|depends on| Venv
    Knowledge -->|depends on| Pandoc
    Knowledge -->|depends on| SQLite
    Knowledge -->|depends on| libmagic
    Frontend -->|depends on| Node
    Python -->|depends on| Pyenv
    Node -->|depends on| NVM
```
```shell
git clone https://github.com/ThalesGroup/fred.git
cd fred
```

Note: the PPTX vision enrichment path in `knowledge-flow-backend` requires LibreOffice to be installed locally and the `soffice` command to be available in `PATH`. On Debian/Ubuntu, this can be installed with `apt install libreoffice`.
Prerequisites:
- Visual Studio Code
- VS Code extensions:
- Python (ms-python.python)
- Pylance (ms-python.vscode-pylance)
To get full VS Code Python support (linting, IntelliSense, debugging, etc.) across our repo, we provide:
1. A VS Code workspace file `fred.code-workspace` that loads all sub‑projects.
After cloning the repo, you can open Fred's VS Code workspace with `code .vscode/fred.code-workspace`.
When you open Fred's VS Code workspace, VS Code will load five folders:

- `fred` – for any repo-wide files, scripts, etc.
- `agentic-backend` – first Python backend
- `knowledge-flow-backend` – second Python backend
- `fred-core` – a common Python library shared by both Python backends
- `frontend` – UI
2. Per‑folder `.vscode/settings.json` files in each Python backend to pin the interpreter.
Each backend ships its own virtual environment under `.venv`. We've added a per-folder VS Code setting (see for instance `agentic-backend/.vscode/settings.json`) to pick it up automatically:
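A minimal sketch of what such a pinned-interpreter setting can look like; the repository's actual file may contain more, so treat this as illustrative:

```json
{
  "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python"
}
```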
This ensures that as soon as you open a Python file under `agentic-backend/` (or `knowledge-flow-backend/`), VS Code will:
- Activate that folder’s virtual environment
- Provide linting, IntelliSense, formatting, and debugging using the correct Python
Model configuration for the agentic backend lives in agentic-backend/config/models_catalog.yaml. This file is separate from configuration.yaml and owns the full model setup: named profiles, provider settings, shared HTTP client limits, and routing rules.
Profiles are named model configurations. Each profile declares a provider, a model name, and optional settings (temperature, timeouts, retries). Profiles are referenced by profile_id.
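For illustration, a profile entry might look like the sketch below. The exact schema lives in `models_catalog.yaml`, so treat the field layout as an assumption:

```yaml
profiles:
  - profile_id: chat.ollama.mistral   # referenced by routing rules and defaults
    provider: ollama
    name: mistral
    settings:                         # optional: temperature, timeouts, retries
      temperature: 0.2
```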
Defaults declare which profile to use per capability when no rule matches:
```yaml
default_profile_by_capability:
  chat: default.chat.openai.prod
  language: default.language.openai.prod
```

Routing rules allow policy-based model selection based on team, agent, or operation context. Rules are evaluated in order; the first match wins:
```yaml
rules:
  - rule_id: team-a-uses-ollama
    capability: chat
    team_id: team-a
    operation: routing
    target_profile_id: chat.ollama.mistral
  - rule_id: graph-g1-json-validation
    capability: chat
    agent_id: internal.graph.g1
    operation: json_validation_fc
    target_profile_id: chat.azure_apim.gpt4o
```

This makes it possible to route different teams, agents, or operation types to different models — including mixing providers — without changing any agent code.
For details on all supported match criteria (team_id, agent_id, user_id, operation, purpose) see docs/platform/LLM_ROUTING_FRED.md.
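The evaluation order described above (first matching rule wins, else the per-capability default) can be sketched in plain Python. This is an illustration of the semantics only, not Fred's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    rule_id: str
    capability: str
    target_profile_id: str
    team_id: Optional[str] = None     # None means "matches any team"
    agent_id: Optional[str] = None
    operation: Optional[str] = None

def select_profile(rules, defaults, *, capability,
                   team_id=None, agent_id=None, operation=None):
    """Rules are evaluated in order; the first match wins.
    Fall back to the per-capability default profile otherwise."""
    for r in rules:
        if r.capability != capability:
            continue
        if r.team_id is not None and r.team_id != team_id:
            continue
        if r.agent_id is not None and r.agent_id != agent_id:
            continue
        if r.operation is not None and r.operation != operation:
            continue
        return r.target_profile_id
    return defaults[capability]

rules = [
    Rule("team-a-uses-ollama", "chat", "chat.ollama.mistral",
         team_id="team-a", operation="routing"),
    Rule("graph-g1-json-validation", "chat", "chat.azure_apim.gpt4o",
         agent_id="internal.graph.g1", operation="json_validation_fc"),
]
defaults = {"chat": "default.chat.openai.prod"}

print(select_profile(rules, defaults, capability="chat",
                     team_id="team-a", operation="routing"))  # chat.ollama.mistral
print(select_profile(rules, defaults, capability="chat"))     # default.chat.openai.prod
```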
No matter which development environment you choose, both backends rely on .env files for secrets and configuration.yaml / models_catalog.yaml for settings:
- Agentic backend: `agentic-backend/config/.env`, `configuration.yaml`, and `models_catalog.yaml`
- Knowledge Flow backend: `knowledge-flow-backend/config/.env` and `configuration.yaml`
1. Copy the templates (skip if they already exist).

   ```shell
   cp agentic-backend/config/.env.template agentic-backend/config/.env
   cp knowledge-flow-backend/config/.env.template knowledge-flow-backend/config/.env
   ```

2. Edit the `.env` files to set the API keys, base URLs, and deployment names that match your model provider.

3. Update each backend's `configuration.yaml` so the `provider`, `name`, and optional settings align with the same provider. Use the recipes below as a starting point.
OpenAI
Note: Out of the box, Fred is configured to use OpenAI public APIs with the following models:

- agentic backend: chat model `gpt-4o`
- knowledge flow backend: chat model `gpt-4o-mini` and embedding model `text-embedding-3-large`

If you plan to use Fred with these OpenAI models, you don't have to perform the `yq` commands below; just make sure the `.env` files contain your key.
- agentic backend configuration

  - Chat model

    ```shell
    yq eval '.ai.default_chat_model.provider = "openai"' -i agentic-backend/config/configuration.yaml
    yq eval '.ai.default_chat_model.name = "<your-openai-model-name>"' -i agentic-backend/config/configuration.yaml
    yq eval 'del(.ai.default_chat_model.settings)' -i agentic-backend/config/configuration.yaml
    ```

- knowledge flow backend configuration

  - Chat model

    ```shell
    yq eval '.chat_model.provider = "openai"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.chat_model.name = "<your-openai-model-name>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval 'del(.chat_model.settings)' -i knowledge-flow-backend/config/configuration.yaml
    ```

  - Embedding model

    ```shell
    yq eval '.embedding_model.provider = "openai"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.embedding_model.name = "<your-openai-model-name>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval 'del(.embedding_model.settings)' -i knowledge-flow-backend/config/configuration.yaml
    ```

- Copy-paste your `OPENAI_API_KEY` value into both `.env` files.

  ⚠️ An `OPENAI_API_KEY` from a free OpenAI account unfortunately does not work.
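For orientation, after the knowledge flow edits above the relevant `configuration.yaml` sections would look roughly like this. This is a sketch using the default OpenAI model names; your actual file may carry additional fields:

```yaml
chat_model:
  provider: openai
  name: gpt-4o-mini
embedding_model:
  provider: openai
  name: text-embedding-3-large
```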
Azure OpenAI
- agentic backend configuration

  - Chat model

    ```shell
    yq eval '.ai.default_chat_model.provider = "azure-openai"' -i agentic-backend/config/configuration.yaml
    yq eval '.ai.default_chat_model.name = "<your-azure-openai-deployment-name>"' -i agentic-backend/config/configuration.yaml
    yq eval 'del(.ai.default_chat_model.settings)' -i agentic-backend/config/configuration.yaml
    yq eval '.ai.default_chat_model.settings.azure_endpoint = "<your-azure-openai-endpoint>"' -i agentic-backend/config/configuration.yaml
    yq eval '.ai.default_chat_model.settings.azure_openai_api_version = "<your-azure-openai-api-version>"' -i agentic-backend/config/configuration.yaml
    ```

- knowledge flow backend configuration

  - Chat model

    ```shell
    yq eval '.chat_model.provider = "azure-openai"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.chat_model.name = "<your-azure-openai-deployment-name>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval 'del(.chat_model.settings)' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.chat_model.settings.azure_endpoint = "<your-azure-openai-endpoint>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.chat_model.settings.azure_openai_api_version = "<your-azure-openai-api-version>"' -i knowledge-flow-backend/config/configuration.yaml
    ```

  - Embedding model

    ```shell
    yq eval '.embedding_model.provider = "azure-openai"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.embedding_model.name = "<your-azure-openai-deployment-name>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval 'del(.embedding_model.settings)' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.embedding_model.settings.azure_endpoint = "<your-azure-openai-endpoint>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.embedding_model.settings.azure_openai_api_version = "<your-azure-openai-api-version>"' -i knowledge-flow-backend/config/configuration.yaml
    ```

  - Vision model

    ```shell
    yq eval '.vision_model.provider = "azure-openai"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.vision_model.name = "<your-azure-openai-deployment-name>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval 'del(.vision_model.settings)' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.vision_model.settings.azure_endpoint = "<your-azure-openai-endpoint>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.vision_model.settings.azure_openai_api_version = "<your-azure-openai-api-version>"' -i knowledge-flow-backend/config/configuration.yaml
    ```

- Copy-paste your `AZURE_OPENAI_API_KEY` value into both `.env` files.
Ollama
- agentic backend configuration

  - Chat model

    ```shell
    yq eval '.ai.default_chat_model.provider = "ollama"' -i agentic-backend/config/configuration.yaml
    yq eval '.ai.default_chat_model.name = "<your-ollama-model-name>"' -i agentic-backend/config/configuration.yaml
    yq eval 'del(.ai.default_chat_model.settings)' -i agentic-backend/config/configuration.yaml
    yq eval '.ai.default_chat_model.settings.base_url = "<your-ollama-endpoint>"' -i agentic-backend/config/configuration.yaml
    ```

- knowledge flow backend configuration

  - Chat model

    ```shell
    yq eval '.chat_model.provider = "ollama"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.chat_model.name = "<your-ollama-model-name>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval 'del(.chat_model.settings)' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.chat_model.settings.base_url = "<your-ollama-endpoint>"' -i knowledge-flow-backend/config/configuration.yaml
    ```

  - Embedding model

    ```shell
    yq eval '.embedding_model.provider = "ollama"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.embedding_model.name = "<your-ollama-model-name>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval 'del(.embedding_model.settings)' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.embedding_model.settings.base_url = "<your-ollama-endpoint>"' -i knowledge-flow-backend/config/configuration.yaml
    ```
Azure OpenAI via Azure APIM
- agentic backend configuration

  - Chat model

    ```shell
    yq eval '.ai.default_chat_model.provider = "azure-apim"' -i agentic-backend/config/configuration.yaml
    yq eval '.ai.default_chat_model.name = "<your-azure-openai-deployment-name>"' -i agentic-backend/config/configuration.yaml
    yq eval 'del(.ai.default_chat_model.settings)' -i agentic-backend/config/configuration.yaml
    yq eval '.ai.default_chat_model.settings.azure_ad_client_id = "<your-azure-apim-client-id>"' -i agentic-backend/config/configuration.yaml
    yq eval '.ai.default_chat_model.settings.azure_ad_client_scope = "<your-azure-apim-client-scope>"' -i agentic-backend/config/configuration.yaml
    yq eval '.ai.default_chat_model.settings.azure_apim_base_url = "<your-azure-apim-endpoint>"' -i agentic-backend/config/configuration.yaml
    yq eval '.ai.default_chat_model.settings.azure_apim_resource_path = "<your-azure-apim-resource-path>"' -i agentic-backend/config/configuration.yaml
    yq eval '.ai.default_chat_model.settings.azure_openai_api_version = "<your-azure-openai-api-version>"' -i agentic-backend/config/configuration.yaml
    yq eval '.ai.default_chat_model.settings.azure_tenant_id = "<your-azure-tenant-id>"' -i agentic-backend/config/configuration.yaml
    ```

- knowledge flow backend configuration

  - Chat model

    ```shell
    yq eval '.chat_model.provider = "azure-apim"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.chat_model.name = "<your-azure-openai-deployment-name>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval 'del(.chat_model.settings)' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.chat_model.settings.azure_ad_client_id = "<your-azure-apim-client-id>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.chat_model.settings.azure_ad_client_scope = "<your-azure-apim-client-scope>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.chat_model.settings.azure_apim_base_url = "<your-azure-apim-endpoint>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.chat_model.settings.azure_apim_resource_path = "<your-azure-apim-resource-path>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.chat_model.settings.azure_openai_api_version = "<your-azure-openai-api-version>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.chat_model.settings.azure_tenant_id = "<your-azure-tenant-id>"' -i knowledge-flow-backend/config/configuration.yaml
    ```

  - Embedding model

    ```shell
    yq eval '.embedding_model.provider = "azure-apim"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.embedding_model.name = "<your-azure-openai-deployment-name>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval 'del(.embedding_model.settings)' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.embedding_model.settings.azure_ad_client_id = "<your-azure-apim-client-id>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.embedding_model.settings.azure_ad_client_scope = "<your-azure-apim-client-scope>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.embedding_model.settings.azure_apim_base_url = "<your-azure-apim-endpoint>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.embedding_model.settings.azure_apim_resource_path = "<your-azure-apim-resource-path>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.embedding_model.settings.azure_openai_api_version = "<your-azure-openai-api-version>"' -i knowledge-flow-backend/config/configuration.yaml
    yq eval '.embedding_model.settings.azure_tenant_id = "<your-azure-tenant-id>"' -i knowledge-flow-backend/config/configuration.yaml
    ```

- Copy-paste your `AZURE_AD_CLIENT_SECRET` and `AZURE_APIM_SUBSCRIPTION_KEY` values into both `.env` files.
```shell
# standalone mode (single-process backend: control-plane + agentic + knowledge-flow)
make run-app

# split APIs mode (agentic:8000, knowledge-flow:8111, control-plane:8222)
make run-multi

# default command (alias of `run-app`)
make run

# backward-compatible alias
make run-app-multi

# split APIs mode + all Temporal workers (requires Temporal running)
make run-multi-workers
```

Run a single backend API from the repository root:

```shell
make run-control-plane
make run-agentic
make run-knowledge-flow
```

Or run each component from its own folder:

```shell
# knowledge-flow backend
cd knowledge-flow-backend && make run

# agentic backend
cd agentic-backend && make run

# control-plane backend
cd control-plane-backend && make run

# frontend
cd frontend && make run
```

Open http://localhost:5173 in your browser.
Fred can be deployed locally into a k3d Kubernetes cluster using Helm. This mode mirrors a production-like setup while keeping everything on your machine.
| Tool | Purpose | Install |
|---|---|---|
| Docker | Container runtime | docs |
| k3d | Local Kubernetes clusters | curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash |
| Helm | Kubernetes package manager | docs |
| kubectl | Kubernetes CLI | docs |
You also need the infrastructure stack deployed via the fred-deployment-factory repository. Follow its README to run make k3d-up.
Important
You must add keycloak to your /etc/hosts file so your browser can reach the Keycloak server running inside k3d:
127.0.0.1 localhost keycloak
Without this entry, authentication will not work because the browser cannot resolve the keycloak hostname.
```shell
# 1. Set your OpenAI API key in the values file
#    Edit deploy/local/k3d/values-local.yaml and fill OPENAI_API_KEY

# 2. Build, import images into k3d, and deploy via Helm (all-in-one)
make k3d-deploy
```

| Target | Description |
|---|---|
| `make k3d-build` | Build Docker images for all services (agentic-backend, knowledge-flow-backend, frontend) |
| `make k3d-import` | Import built images into the k3d cluster |
| `make k3d-deploy` | All-in-one: build + import + deploy |
| `make k3d-deploy-only` | Deploy/upgrade the Helm chart only (images must already be imported) |
| `make k3d-undeploy` | Uninstall the Helm release |
| `make k3d-status` | Show pod and service status in the fred namespace |
| `make k3d-logs-agentic` | Tail logs for the agentic-backend |
| `make k3d-logs-kf` | Tail logs for the knowledge-flow-backend |
| `make k3d-logs-frontend` | Tail logs for the frontend |
Once deployed, open http://localhost:8088 in your browser. The Traefik Ingress routes all traffic through a single port:
| Path | Service |
|---|---|
| `/` | Frontend |
| `/agentic/*` | Agentic backend |
| `/knowledge-flow/*` | Knowledge Flow backend |
| `/realms/*` | Keycloak (authentication) |
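To illustrate how prefix routing like this behaves, here is a small self-contained sketch. It is not Traefik's actual matcher, and the trailing-slash handling is an assumption:

```python
# Longest matching prefix wins; "/" is the catch-all for the frontend.
ROUTES = {
    "/agentic/": "agentic-backend",
    "/knowledge-flow/": "knowledge-flow-backend",
    "/realms/": "keycloak",
    "/": "frontend",
}

def route(path: str) -> str:
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]
    raise ValueError(f"unroutable path: {path!r}")

print(route("/agentic/chatbot/query"))  # agentic-backend
print(route("/realms/fred"))            # keycloak
print(route("/"))                       # frontend
```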
Other infrastructure services remain accessible on their usual ports:
| Service | URL |
|---|---|
| Keycloak | http://keycloak:8080 |
| Temporal UI | http://localhost:8233 |
| MinIO Console | http://localhost:9001 |
| OpenSearch Dashboards | http://localhost:5601 |
Important
Access-control reminder (shared environments):
Keycloak app roles and team ReBAC rights are different controls.
For the Fred access model and deployment bootstrap rules, see docs/platform/REBAC.md.
For production deployments (Kubernetes, VMs, on-prem or cloud), refer to:
- `docs/platform/DEPLOYMENT_GUIDE.md` – high-level deployment guide (components, configuration, external dependencies).
- `docs/platform/DEPLOYMENT_GUIDE_OPENSEARCH.md` – OpenSearch-specific requirements. Use this only if you choose OpenSearch over the new PostgreSQL/pgvector option.
- `docs/platform/REBAC.md` – high-level access model (RBAC/ReBAC/organization/bootstrap).
The rest of this README.md focuses on local developer setup and model configuration.
Fred includes a structured agent authoring SDK designed for domain engineers and platform teams who need to write reliable, testable agents without re-implementing execution infrastructure.
The v2 SDK provides two authoring styles:
- ReAct / profile agents — for focused, tool-driven agents with a small state surface. Declare a role, a tool set, and a few instructions. The SDK owns the execution loop.
- Graph agents — for multi-step business workflows with explicit state, conditional routing, and human-in-the-loop confirmation gates. The business flow is expressed as a typed graph; the SDK handles streaming, checkpointing, and HITL interrupts.
Both styles support MCP tool integration and run on the same runtime.
Start with the agent authoring guide (v2). For the design philosophy behind the SDK, see SDK V2 positioning.
The academy contains sample MCP servers and standalone applications to experiment with agent development outside the main platform. The academy agents provide ready-to-run agent examples inside the agentic backend.
| Component | Location | Role |
|---|---|---|
| Frontend UI | `./frontend` | React chat interface and agent management UI |
| Agentic backend | `./agentic-backend` | Multi-agent runtime, session orchestration, streaming, MCP tools |
| Knowledge Flow backend | `./knowledge-flow-backend` | Document ingestion, vectorization, and retrieval |
| Control Plane backend | `./control-plane-backend` | Team and user management, access policy, agent registry |
| File | Purpose | Tip |
|---|---|---|
| `agentic-backend/config/.env` | Secrets (API keys, passwords). Not committed to Git. | Copy `.env.template` to `.env` and fill in any missing values. |
| `knowledge-flow-backend/config/.env` | Same as above | Same as above |
| `control-plane-backend/config/.env` | Same as above | Same as above |
| `agentic-backend/config/configuration.yaml` | Functional settings (providers, agents, feature flags). | - |
| `knowledge-flow-backend/config/configuration.yaml` | Same as above | - |
| `control-plane-backend/config/configuration.yaml` | Team/user policy settings. | - |
| Provider | How to enable |
|---|---|
| OpenAI (default) | Add OPENAI_API_KEY to config/.env; Adjust configuration.yaml |
| Azure OpenAI | Add AZURE_OPENAI_API_KEY to config/.env; Adjust configuration.yaml |
| Azure OpenAI via Azure APIM | Add AZURE_APIM_SUBSCRIPTION_KEY and AZURE_AD_CLIENT_SECRET to config/.env; Adjust configuration.yaml |
| Ollama (local models) | Adjust configuration.yaml |
See agentic-backend/config/configuration.yaml (section ai:) and knowledge-flow-backend/config/configuration.yaml (sections chat_model: and embedding_model:) for concrete examples.
- Enable Keycloak or another OIDC provider for authentication
- Persistence options:
  - Laptop / dev (default): SQLite for metadata + ChromaDB for vectors (embedded, no external services)
  - Production: PostgreSQL + pgvector for metadata/vectors, and optionally MinIO/S3 + OpenSearch if you prefer that stack
The four components described above form the entirety of the Fred platform. By default they run self-contained on a laptop using SQLite + ChromaDB (no external services).
Fred is modular: you can optionally add Keycloak/OpenFGA, MinIO/S3, OpenSearch, and PostgreSQL/pgvector for production-grade persistence.
Persistence options:
- Dev/laptop (default): SQLite for all SQL stores, ChromaDB for vectors, local filesystem for blobs.
- Production (recommended): PostgreSQL + pgvector for SQL + vectors; optionally pair with MinIO/S3 + OpenSearch if you prefer that stack.
- Generic information
- Agentic backend
- Agent authoring (v2 SDK)
- Architecture RFCs
- Knowledge Flow backend
- Frontend
- Security-related topics
- Developer and contributors guides
Fred is released under the Apache License 2.0. It does *not* embed or depend on any LGPLv3 or copyleft-licensed components. Optional integrations (like OpenSearch or Weaviate) are configured externally and do not contaminate Fred's licensing. This ensures maximum freedom and clarity for commercial and internal use.
In short: Fred is 100% Apache 2.0, and you stay in full control of any additional components.
See the LICENSE for more details.
We welcome pull requests and issues. Start with the Contributing guide.
Join the discussion on our Discord server!