Complete AI & Development Glossary 2026
This is the master index of every AI development term, tool comparison, and engineering guide published by 100x Engineering. Bookmark it, share it, and come back — it's updated weekly.
Use it as a starting point to go deep on any concept, or to compare the tools your team is evaluating.
Glossary Terms
Foundational AI and sustainability concepts — defined plainly, for engineers and founders.
AI & LLM Concepts
- What is an AI Agent? — An AI agent perceives its environment, reasons over inputs, and acts autonomously to reach goals.
- What is Agentic AI? — Agentic AI plans and executes multi-step tasks autonomously; how it differs from chatbots and where it's used in 2026.
- What is AI Alignment? — How we ensure AI systems pursue goals that match human intentions, covering RLHF, constitutional AI, and key safety techniques.
- AI Governance Framework — The rules, roles, and controls organisations need to deploy AI responsibly — what's included and how to build one.
- What is AI Evaluation? — The discipline of measuring LLM application quality, safety, and reliability in production; key metrics and frameworks.
- What Are AI Guardrails? — Rules, filters, and controls that keep LLM outputs safe and on-topic; why every production AI app needs them.
- What is AI Observability? — Tracking inputs, outputs, latency, and costs of LLM apps in production; how to implement it and why it matters.
- What Are Embeddings in AI? — How AI converts text, images, and data into vectors that capture meaning, and when to use them in your stack.
- What is Fine-Tuning? — Training an LLM on your own data to change its behaviour; when fine-tuning beats prompting and when RAG wins.
- What is Function Calling in LLMs? — How LLMs invoke structured tools and APIs instead of just generating text; how to use it in production in 2026.
- What is Grounding in AI? — How anchoring AI outputs to verified facts, documents, or real-time data reduces hallucinations.
- What is Inference Optimization? — Techniques — quantization, caching, batching — that make AI models faster and cheaper to run in production.
- What is a Knowledge Graph? — How knowledge graphs store entities and relationships as connected nodes, and how they power AI reasoning and RAG.
- What is LLM Orchestration? — Connecting language models, tools, and memory into chains and pipelines; when your app needs it and what to use.
- What is MCP Protocol? — Anthropic's open standard connecting AI agents to external tools and data sources; what it is and why it matters.
- Model Context Protocol: Complete Guide — Deep dive: MCP architecture, server types, transport layers, security, and building production MCP integrations.
- What is Multimodal AI? — AI that processes text, images, audio, and video in a single model; which models lead and when to use it in production.
- What is Prompt Engineering? — The art and science of crafting inputs to AI models to produce accurate, reliable outputs; techniques used by top engineers.
- What is Prompt Injection? — The top LLM security vulnerability explained: how it works, real attack examples, and how to prevent it in production.
- What is RAG? — Retrieval-Augmented Generation combines a retrieval system with an LLM to answer questions from your own data.
- What is Tokenization in LLMs? — How tokenizers convert text to numerical tokens before an LLM processes it; why it matters for cost and performance.
- What is a Vector Database? — High-dimensional embedding storage for semantic search; how it works and which databases to choose in 2026.
- How Much Does an AI MVP Cost in 2026? — Real numbers from $5K sprints to $150K full builds: what drives cost and how to scope your project right.
- What is an AI MVP? — Definition for AI startups: what separates an AI MVP from a prototype, what it must include, and what to ship first.
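Several of the terms above (embeddings, vector databases, grounding, RAG) fit together in a single loop: embed documents, retrieve the most similar ones for a query, then ground the LLM's answer in that context. A minimal sketch of that loop, using a toy bag-of-words "embedding" in place of a real embedding model and a hypothetical prompt format (not any specific library's API):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words token counts stand in for a
    # real embedding model's dense vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # The "vector database" step: rank documents by similarity
    # to the query embedding and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # The "grounding" step: the retrieved passages become the
    # only context the LLM is asked to answer from.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Vector databases store embeddings for semantic search.",
    "CSRD is an EU sustainability reporting law.",
    "RAG retrieves relevant documents before generating an answer.",
]
print(build_prompt("How does RAG answer questions?", docs))
```

In production, `embed` is an embeddings API, `retrieve` is a vector database query, and the prompt goes to an LLM; the shape of the loop stays the same.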
ESG & Sustainability Concepts
- What is CSRD? — The EU's mandatory sustainability reporting law affecting 50,000+ companies; who's in scope, what to report, and how AI helps.
- What are ESRS Standards? — European Sustainability Reporting Standards explained: which companies must comply and what data to collect.
- What is Double Materiality? — The cornerstone of CSRD compliance: how to run a double materiality assessment, avoid mistakes, and cut a 6-month process to 6 weeks.
- Double Materiality Assessment Guide — A plain guide to the ESRS materiality assessment process: what it evaluates, how it works, and why CSRD requires it.
- Scope 3 Emissions Explained — The 15 GHG Protocol Scope 3 categories, where data comes from, and how to build an audit-ready reporting workflow.
Tool Comparisons
Side-by-side breakdowns of the most important AI infrastructure, model, and platform choices in 2026.
AI Models & APIs
- GPT-4 vs Claude 3.5 — Detailed comparison across coding, reasoning, context length, and pricing to pick the right model for your use case.
- Claude vs GPT-4 for Coding — Head-to-head benchmarks, agentic tasks, and real-world coding performance for software engineers.
- OpenAI vs Anthropic API Pricing 2026 — GPT-4o, o1, Claude 3.5 Sonnet, and Haiku compared on cost-performance ratio for production apps.
- Anthropic vs OpenAI for Enterprise — Security, compliance, model performance, pricing, and API reliability for enterprise AI deployments.
AI Orchestration & Frameworks
- LangChain vs LlamaIndex — Architecture, use cases, developer experience, and which orchestration framework to pick for your project in 2026.
- Dify vs Langflow — UI, model support, RAG pipelines, deployment options, and which no-code AI builder fits your team's use case.
- n8n vs Make for AI Workflows — Pricing, AI nodes, self-hosting, and complexity handling; which platform wins for LLM-powered automation.
- Temporal vs Inngest — Durability, developer experience, pricing, and scalability for AI pipelines and LLM workload orchestration.
Databases & Storage
- Pinecone vs Weaviate — Performance, pricing, hybrid search, and hosting compared; pick the right vector database for your AI product.
- Supabase vs PlanetScale — Vector search, scaling model, branching, pricing, and which database fits AI-native products.
- Firebase vs Supabase for AI Apps — Database, auth, vector search, pricing, and vendor lock-in compared for teams shipping AI products.
- Convex vs Supabase — Data model, reactivity, pricing, and developer experience for real-time AI app backends.
- PostgreSQL vs MongoDB for AI — Vector support, JSON handling, query flexibility, and scaling patterns for AI application databases.
Infrastructure & Deployment
- Vercel vs AWS for Startups — Honest breakdown of cost, scaling, developer experience, and when each platform makes sense for early-stage teams.
- Fly.io vs Railway — Pricing, GPU support, cold starts, and developer experience for deploying AI apps in 2026.
- Railway vs Render for AI Apps — The top PaaS choices for AI deployments: pricing, GPU support, cold starts, and developer experience compared.
- AWS Bedrock vs Azure OpenAI — Model access, pricing, compliance, latency, and which cloud AI platform fits your infrastructure.
- Modal vs Replicate for ML Inference — Cold start, pricing, GPU access, custom model support, and which inference platform fits your AI product.
- Together AI vs Anyscale — Pricing, training infrastructure, fine-tuning support, and developer experience for ML teams.
Developer Tools
- Cursor vs GitHub Copilot — Features, pricing, model quality, and which AI code editor actually ships better software in real workflows.
- Next.js vs Remix for AI Apps — Streaming, edge functions, data loading, and which framework ships better AI-powered products.
- Clerk vs Auth0 for AI Apps — Developer experience, pricing, token limits, and feature set for teams building LLM products.
- Hasura vs PostGraphile for AI API Layers — Performance, flexibility, real-time support, and which GraphQL layer fits LLM-powered products.
- Retool vs Custom AI Dashboard — When low-code hits its ceiling; how to decide based on use case, team, and budget.
- Stripe vs Lemon Squeezy for SaaS — MoR model, pricing, tax handling, and developer experience for SaaS billing in 2026.
Build vs Buy Decisions
- Build vs Buy Your AI MVP — Honest breakdown of cost, timeline, lock-in risk, and which path kills more startups.
- In-House vs Agency AI Development — Real costs, hiring timelines, speed comparisons, and when each model makes sense for your team.
ESG Platform Comparisons
- Manual vs Automated ESG Reporting — Direct comparison of hours spent, total cost, error rates, and audit readiness under CSRD and GRI standards.
- Persefoni vs Watershed — Pricing, use cases, integration depth, and which ESG platform fits your 2026 sustainability reporting needs.
Engineering Guides
Practical guides from 100x Engineering on building, shipping, and scaling AI products.
Building AI Products
- How We Ship AI MVPs in 3 Weeks — The sprint methodology 100x Engineering uses to ship production-ready AI MVPs in 21 days without cutting corners.
- The $4,999 MVP Development Sprint: How It Works — Week-by-week breakdown of what's included, what you ship, and why the fixed-price model works.
- Build an AI Product Without an ML Team — The modern stack and process for building LLM-powered products fast — no ML engineers or data scientists needed.
- From Vibe Coding to Production — Why AI prototypes need real engineering; what the gap looks like and how to bridge it without scrapping your work.
- The AI Tech Stack Every Startup Needs in 2026 — The right LLM, vector DB, orchestration, infra, and agent tools to ship fast and scale without regret.
AI Agent Engineering
- 5 AI Agent Architecture Patterns That Work — ReAct, Plan-and-Execute, multi-agent, RAG agents, and event-driven pipelines — the patterns that actually ship to production.
- Parallel Agent Psychosis: Managing Multi-Agent AI — What breaks when you run 11 simultaneous AI agents, and practical patterns for keeping coordination from becoming chaos.
- What Are Claws? Karpathy's Name for AI Agent Systems — Karpathy's term for orchestrated AI systems on personal hardware and the LLM-to-Agent-to-Claws arc.
- AI Sprint vs Traditional Development — How a 3-week AI sprint compares to traditional development on speed, cost, and outcomes.
Hiring & Working with AI Teams
- Why Startups Choose an AI Agency Over Hiring — A direct cost, speed, and risk comparison of hiring AI developers vs. using an AI agency.
- How to Evaluate AI Development Agencies — 8 questions that separate genuine AI builders from consultancies repackaging ChatGPT wrappers.
- How to Hire AI Developers in 2026 — Evaluating LLM expertise, agent architecture skills, and production experience — the complete hiring playbook.
- Fixed Price vs Hourly for MVP Development — Why fixed-price contracts protect founders, align incentives, and ship faster than open-ended hourly billing.
Finance, Strategy & Due Diligence
- AI Development Cost Breakdown: What to Expect — Real numbers behind team, infrastructure, APIs, and maintenance costs from MVP to production in 2026.
- 7 AI MVP Mistakes Founders Make — The most common mistakes: bad scope, wrong stack, missing validation — and how to fix each one.
- Technical Due Diligence for AI Startups — What investors and acquirers check, what red flags to fix, and how to prepare your codebase for scrutiny.
ESG & Sustainability Reporting
- CSRD Compliance Checklist: 25 Items for 2026 — 25-item checklist covering scope confirmation, ESRS disclosure requirements, data collection, assurance, and stakeholder communication.
- CSRD Omnibus 2026: What Changed and What It Means — The Omnibus package cut EU reporters from 50,000 to 5,000 — exactly what changed, who's in scope, and what to do now.
About This Index
This page is maintained by 100x Engineering — the AI development agency that ships production-ready AI products in weeks, not quarters. Every linked page is written by engineers who've built the systems they're describing.
We update this index weekly as new content is published. If you're looking for a topic we haven't covered yet, tell us →
Ready to Build Something?
If you've found what you were looking for here and you're ready to move from research to production, we'd love to help.
100x Engineering ships AI MVPs in fixed-price, fixed-scope sprints — typically 2–3 weeks. No hourly billing. No scope creep. Just a working product.
Whether you need a full AI product built, a technical co-founder for a sprint, or just want to pressure-test your architecture — start with a 15-minute call.
Related Resources
Our solution: AI Workflow Automation
Free Tool: Ready to build? Our free playbook covers architecture, stack decisions, and launch checklists. → AI MVP Playbook