Flowise AI on Ubuntu with NVIDIA GPU acceleration, containerized using Docker and pre-configured by Miri Infotech Inc. This AMI delivers a ready-to-use AI orchestration platform designed for developers, data scientists, and enterprises. With GPU optimization and Docker integration, you can quickly build, deploy, and scale AI workflows, large language models (LLMs), and retrieval-augmented generation (RAG) pipelines without complex setup.
Flowise AI is a powerful open-source tool designed to make building LLM-based applications accessible and intuitive. Built on top of LangChain, Flowise provides a visual interface where users can create, customize, and deploy AI workflows using a drag-and-drop node system. This no-code/low-code platform allows developers, data scientists, and non-technical users to connect components like language models, vector databases, APIs, and more—visually and efficiently.
Flowise enables rapid prototyping and deployment of complex AI applications, such as chatbots, retrieval-augmented generation (RAG) systems, data agents, and more. It supports multiple LLM providers (OpenAI, Cohere, Hugging Face, etc.), vector stores (Pinecone, Chroma, Weaviate, etc.), and integrates easily into existing infrastructure with REST APIs and embedding capabilities.
100% open-source under the MIT License; customizable and extensible.
Drag-and-drop interface to design LLM workflows easily.
Built on LangChain for robust, modular, and scalable LLM pipelines.
Native support for Chroma, Pinecone, Weaviate, FAISS, etc.
Works with OpenAI, Cohere, Hugging Face, Azure, and more.
Integrate with other tools and systems via API endpoints.
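As a sketch of the REST integration above, the snippet below builds a request to a Flowise chatflow prediction endpoint. The `/api/v1/prediction/<chatflow-id>` route and the `{"question": ...}` payload follow the commonly documented Flowise API shape, but verify them against your deployed version; the chatflow ID and instance address are placeholders.

```python
import json
from urllib import request

def build_prediction_request(base_url, chatflow_id, question):
    """Build a POST request for a Flowise chatflow prediction.

    Route and payload shape are assumptions based on the Flowise
    API docs; confirm against your Flowise version.
    """
    url = f"{base_url}/api/v1/prediction/{chatflow_id}"
    body = json.dumps({"question": question}).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder values -- replace with your instance IP and chatflow ID.
req = build_prediction_request(
    "http://<instance-ip-address>:3000",
    "your-chatflow-id",
    "What is retrieval-augmented generation?",
)
# To actually send it (requires the instance to be running):
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```

The same request can be issued from any HTTP client (curl, Postman, or an SDK), which is how Flowise workflows are typically embedded into existing applications.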
It will take a few minutes for your VM to be deployed. When the deployment is finished, move on to the next section.
Connect to virtual machine
Create an SSH connection to the VM:
ssh azureuser@<ip>
Usage / Deployment Instructions
Step 1: Connect to the VM over SSH (port 22).
Getting Started with Flowise AI on Ubuntu
# Run the following commands in a terminal to manage the Flowise setup:
cd ~/flowise-docker
docker compose up -d # start the containers in the background
docker compose down # stop and remove the containers
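If you want to confirm the deployment came up cleanly before opening the browser, the standard Docker Compose CLI offers status and log commands (shown here as an optional check, assuming the same `~/flowise-docker` project directory):

```shell
cd ~/flowise-docker
docker compose ps                 # list containers and their state
docker compose logs -f --tail=50  # follow recent logs to confirm startup
```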
Step 2: Use your web browser to access the application at:
http://<instance-ip-address>:3000
