{ "info": { "author": "Mahmoud Mabrouk", "author_email": "mahmoud@agenta.ai", "bugtrack_url": null, "classifiers": [ "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.9", "Topic :: Software Development :: Libraries" ], "description": "
[Logo: Agenta]

Home Page | Slack | Documentation
**Collaborate on prompts, evaluate, and deploy LLM applications with confidence**

The open-source LLM developer platform for prompt engineering, evaluation, human feedback, and deployment of complex LLM apps.
\n \"MIT\n \n \"Doc\"\n \n\n \n \"PRs\n \n \"Contributors\"\n \"Last\n \"Commits\n\n \n \"PyPI\n \n
\n

\n\n\n

\n \n \n \n \n \n \n \n \n \n

\n\n\n
\n\n\n \n \n \n\n\n\n
\n
\n
\n
[Screenshot: Agenta web interface mockup]
---
Quick Start • Features • Documentation • Enterprise • Roadmap • Join Our Slack • Contributing
---

# ⭐️ Why Agenta?

Agenta is an end-to-end LLM developer platform. It provides the tools for **prompt engineering and management**, ⚖️ **evaluation**, **human annotation**, and 🚀 **deployment**, all without imposing any restrictions on your choice of framework, library, or model.

Agenta allows developers and product teams to collaborate in building production-grade LLM-powered applications in less time.

### With Agenta, you can:

- [🧪 **Experiment** and **compare** prompts](https://docs.agenta.ai/basic_guides/prompt_engineering) on [any LLM workflow](https://docs.agenta.ai/advanced_guides/custom_applications) (chain-of-prompts, Retrieval Augmented Generation (RAG), LLM agents...)
- ✍️ Collect and [**annotate golden test sets**](https://docs.agenta.ai/basic_guides/test_sets) for evaluation
- 📈 [**Evaluate** your application](https://docs.agenta.ai/basic_guides/automatic_evaluation) with pre-existing or [**custom evaluators**](https://docs.agenta.ai/advanced_guides/using_custom_evaluators)
- [🔍 **Annotate** and **A/B test**](https://docs.agenta.ai/basic_guides/human_evaluation) your applications with **human feedback**
- [🤝 **Collaborate with product teams**](https://docs.agenta.ai/basic_guides/team_management) on prompt engineering and evaluation
- [🚀 **Deploy your application**](https://docs.agenta.ai/basic_guides/deployment) with one click from the UI, through the CLI, or through GitHub workflows

### Works with any LLM app workflow

Agenta enables prompt engineering and evaluation on any LLM app architecture:
- Chain of prompts
- RAG
- Agents
- ...

It works with any framework, such as [LangChain](https://langchain.com) or [LlamaIndex](https://www.llamaindex.ai/), and with any LLM provider (OpenAI, Cohere, Mistral).

[Jump here to see how to use your own custom application with Agenta](/advanced_guides/custom_applications)

# Quick Start

### [Get started for free](https://cloud.agenta.ai?utm_source=github&utm_medium=readme&utm_campaign=github)
### [Explore the Docs](https://docs.agenta.ai)
### [Create your first application in one minute](https://docs.agenta.ai/quickstart/getting-started-ui)
### [Create an application using LangChain](https://docs.agenta.ai/tutorials/first-app-with-langchain)
### [Self-host Agenta](https://docs.agenta.ai/self-host/host-locally)
### [Check the Cookbook](https://docs.agenta.ai/cookbook)

# Features

| Playground | Evaluation |
| ------- | ------- |
| Compare and version prompts for any LLM app, from single prompt to agents.