Here, you’ll find all the information you need to get started with Dream. Our site offers detailed guides, reference materials, release notes and more to help both new and experienced users make the most of our offerings. Our ultimate aim is to ensure that your assistant-building experience with us is seamless and satisfying.
Take a look around, explore our resources, and don’t hesitate to reach out if you have any questions or feedback. We’re thrilled to have you here, and we hope you find everything you need to succeed!
We provide an open-source, Apache 2.0-licensed, multi-skill platform that enables the development of complex open-domain dialog systems. DeepPavlov Dream allows you to combine different response generation methods, add pre- and post-filters, utilize Wikidata and custom knowledge graphs, design custom dialog management algorithms, and integrate large language models (LLMs) into production-ready dialog systems. DeepPavlov Dream also provides simple integration with load-balancing tools, which is crucial for LLM-based dialog systems in production. We are also working towards moving beyond text-based interaction to multimodal experiences such as robotics.
Make sure you have a Debian-based distribution (e.g., Ubuntu running natively or inside WSL2 on Windows 10/11) or macOS (Big Sur), with Docker and Docker Compose installed.
More about software requirements can be found here. If you get a permission denied error when running docker-compose, make sure to configure your Docker user correctly.
DeepPavlov Dream is open source, so you may either clone our GitHub repository or fork it to track both your own changes and our updates. To clone the Dream repository, run the following command:
git clone https://github.com/deeppavlov/dream.git
If you would like to create a fork, follow these instructions for a public fork or these instructions for a private fork.
The DeepPavlov Dream platform follows a distribution-based approach to dialog system development. A distribution is a set of YAML files specifying the parameters of the Docker containers, plus a JSON configuration file defining the processing pipeline for DeepPavlov Agent (our own multi-skill orchestrator designed for dialog systems).
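As a quick way to see what a distribution wires together, you can inspect its pipeline configuration file. The sketch below simply loads the dream distribution's pipeline_conf.json and prints its top-level sections; the exact keys and nesting vary between distributions and DeepPavlov Agent versions, so treat it as an exploratory helper rather than a schema reference:

# Exploratory sketch: peek into a distribution's pipeline configuration.
# Assumes it is run from the root of the cloned dream repository.
import json

with open("assistant_dists/dream/pipeline_conf.json") as f:
    pipeline = json.load(f)

# Top-level sections of the pipeline config (their exact names depend on the
# distribution and the DeepPavlov Agent version).
print("sections:", list(pipeline.keys()))

# If a "services" section is present, list the entries it declares
# (annotators, skills, response selectors, and so on).
for name in pipeline.get("services", {}):
    print("service:", name)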
The platform offers different distributions, including script-based English distributions; generative English, Russian, and multilingual distributions; a multimodal distribution; a robot controller distribution; and many multi-skill distributions utilizing prompt-based generation with LLMs.
The list of ready-to-use distributions is given here.
Consider the main distribution, dream. To raise it locally, use the following command (it requires 20 GB of CPU RAM and 20 GB of GPU memory in total):
docker-compose -f docker-compose.yml -f assistant_dists/dream/docker-compose.override.yml -f assistant_dists/dream/dev.yml up --build
We also provide proxy services for the most popular components (e.g., NN-based NLU components). More about proxy usage in DeepPavlov Dream can be found here. So, if you do not have that many resources, you may use the proxy containers:
docker-compose -f docker-compose.yml -f assistant_dists/dream/docker-compose.override.yml -f assistant_dists/dream/dev.yml -f assistant_dists/dream/proxy.yml up --build
If you need to raise a particular component without building the whole distribution, add its container name to the command:
docker-compose -f docker-compose.yml -f assistant_dists/dream/docker-compose.override.yml -f assistant_dists/dream/dev.yml up --build container-name
If you need to restart a particular Docker container without re-building it:
docker-compose -f docker-compose.yml -f assistant_dists/dream/docker-compose.override.yml -f assistant_dists/dream/dev.yml restart container-name
We provide several options for interaction: a command line interface, an HTTP API, and a Telegram bot.
In a separate terminal tab run:
docker-compose exec agent python -m deeppavlov_agent.run agent.debug=false agent.channel=cmd agent.pipeline_config=assistant_dists/dream/pipeline_conf.json
Enter a string username and have a chat with your bot!
Once you’ve started the bot, DeepPavlov’s Dream API will run on http://localhost:4242. You can learn about the API from the orchestrator’s DeepPavlov Agent docs.
You may send requests to the raised assistant in the following way:
import requests

result = requests.post(
    "http://0.0.0.0:4242",
    json={
        "user_id": "test-user",
        "payload": "What can you do?",
    },
).json()
print(result)
>> {'dialog_id': '000000', 'utt_id': '111111', 'user_id': 'test-user', 'response': 'I am an AI Assistant and I can help you with your requests.', 'active_skill': 'dff_dream_persona_prompted_skill', 'debug_output': [], 'attributes': {}}
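DeepPavlov Agent keys dialogs by user_id, so sending another request with the same user_id continues the same conversation. A minimal follow-up turn might look like this (the exact response will, of course, depend on the active skills):

# Continue the same dialog by reusing the same user_id.
follow_up = requests.post(
    "http://0.0.0.0:4242",
    json={
        "user_id": "test-user",
        "payload": "Tell me more about that.",
    },
).json()
print(follow_up["response"], "| active skill:", follow_up["active_skill"])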
A basic chat web interface is available at http://localhost:4242/chat.
You may raise the bot to support both ways of communication, the HTTP API and a Telegram bot (just another orchestrator container), using the following command:
export TG_TOKEN=<YOUR_TELEGRAM_TOKEN_HERE>; docker-compose -f docker-compose.yml -f assistant_dists/dream/docker-compose.override.yml -f assistant_dists/dream/dev.yml -f assistant_dists/dream/telegram.yml up --build
Now your assistant is available as a Telegram bot with the given token.
NOTE: treat your Telegram token as a secret and do not commit it to public repositories!
Different components run independently in separate Docker containers. Although some components may depend on others, in general the elements of this modular system can be safely removed or replaced with their analogues.
These are the main ways to create a custom dialog system:
Create a new distribution by combining the existing components as they are. You may find the tutorial on this task on Medium. Learn how to create a simple and lightweight bot that specializes in discussing movies and answering factoid questions, utilizing the existing Dream components.
Create a new distribution by editing the customizable components, for example, changing the prompt or the LLM of a prompt-based skill. You may find the tutorial on this task on Medium. Discover how to create a generative bot with a predefined persona, using the existing Dream Persona distribution that utilizes OpenAI ChatGPT.
Create a distribution with reasoning and API calls, utilizing OpenAI ChatGPT to think of the actions required to handle a user request and to choose the suitable API to use. You may find the tutorial on this task on Medium.
Create a distribution for question answering over large documents, using TF-IDF vectorization to select the most relevant parts and ChatGPT to generate responses based on them. You may find the tutorial on this task on Medium.
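To illustrate the retrieval idea behind the last option, here is a minimal sketch of TF-IDF-based selection of the most relevant document chunks. It uses scikit-learn, a hypothetical input file, and naive paragraph chunking; it is not Dream's actual implementation, only an outline of the technique the tutorial builds on:

# Minimal sketch of TF-IDF retrieval over a large document (not Dream's code).
# Assumes scikit-learn is installed: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

document = open("my_large_document.txt").read()  # hypothetical input file

# Naive chunking by paragraphs; a real pipeline would use smarter splitting.
chunks = [p.strip() for p in document.split("\n\n") if p.strip()]

vectorizer = TfidfVectorizer()
chunk_vectors = vectorizer.fit_transform(chunks)

query = "What does the document say about pricing?"
query_vector = vectorizer.transform([query])

# Pick the top-3 most similar chunks; these would then be passed to the LLM
# as context for generating the final answer.
scores = cosine_similarity(query_vector, chunk_vectors)[0]
top_chunks = [chunks[i] for i in scores.argsort()[::-1][:3]]
print(top_chunks)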