15 Chat Platforms, One System
NeuroGen supports 15 chat integrations. Each one has its own blueprint, service layer, and database model. They are not thin wrappers. They handle authentication, message formatting, media attachments, conversation threading, and platform-specific quirks.
The most popular integrations are Telegram, Discord, and Slack because they are easy to set up. WhatsApp and Microsoft Teams are also widely used but require more configuration on the platform side (Meta Business verification for WhatsApp, Azure app registration for Teams).
The Deployment Flow
We will walk through the actual process using Telegram as an example. The flow is similar for all platforms.
Create an AI Assistant
Go to the Assistant Builder. Give it a name and a system prompt. Something like: "You are a customer support assistant for [company]. Answer questions about our products, pricing, and shipping policies. Be helpful and concise."
Upload Your Documents
Go to the Knowledge Base tab. Upload your FAQ, product catalog, pricing sheet, shipping policy, return policy, or whatever documents your assistant needs. The system accepts 50+ file formats including PDF, DOCX, XLSX, CSV, HTML, Markdown, and plain text. NeuroGen Cortex chunks, indexes, and embeds everything automatically.
Pick Your LLM
Choose which model powers the assistant. The default is glm-4.7-flashx (fast and affordable). You can also use GPT-4o, Claude Sonnet, Gemini, or any of the other models we support. Each model has a different cost per token, so check the pricing if budget matters.
Enable the Telegram Integration
In the Chat Integrations section, click Telegram. Open Telegram, find @BotFather, create a new bot, and copy the token. Paste it into NeuroGen. Click Save. The webhook is registered automatically.
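NeuroGen registers the webhook for you when you click Save, but it may help to see what that step amounts to: a single call to the Telegram Bot API's setWebhook method. A minimal sketch of building that call (the endpoint path is illustrative, not NeuroGen's actual URL):

```python
import urllib.parse

TELEGRAM_API = "https://api.telegram.org"

def build_set_webhook_url(bot_token: str, webhook_url: str) -> str:
    # Telegram's setWebhook method registers the HTTPS endpoint that
    # will receive updates for this bot token from then on.
    query = urllib.parse.urlencode({"url": webhook_url})
    return f"{TELEGRAM_API}/bot{bot_token}/setWebhook?{query}"

# Token and endpoint below are placeholders:
print(build_set_webhook_url(
    "123456:ABC-DEF",
    "https://app.example.com/integrations/telegram/webhook",
))
```

Making a GET or POST request to that URL is all it takes; Telegram confirms with a JSON response, and updates start flowing to the endpoint.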
Test It
Open your Telegram bot and send a message. The assistant responds using your knowledge base and the LLM you selected. If the answer is wrong, update your documents or tweak the system prompt. Iterate until it works.
That whole process takes about 10 minutes if your documents are already prepared. The longest part is usually writing a good system prompt.
The Knowledge Base: 50+ Formats and Adaptive Retrieval
The Knowledge Base is what makes your chatbot actually useful. Without it, the assistant only knows what the LLM was trained on. With it, the assistant can answer questions about your business, your products, your internal docs.
NeuroGen Cortex
NeuroGen Cortex is our knowledge layer. It uses adaptive, multi-step retrieval to find the right answer — not just the closest vector match. It combines multiple retrieval strategies so that even when a question doesn't use the exact words in your documents, Cortex still finds the relevant content. For complex questions that need information from multiple sections, Cortex automatically breaks the question into parts and searches each one. Your assistant answers using your data, not hallucinations.
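Cortex's internals are not shown here, but the decompose-then-search idea can be illustrated with a toy sketch. Keyword overlap stands in for the real hybrid scoring, and all function names are hypothetical:

```python
def decompose(question: str) -> list[str]:
    # Naive stand-in for query decomposition: split a compound
    # question into independently searchable parts.
    parts = [p.strip(" ?") for p in question.split(" and ")]
    return [p for p in parts if p]

def score(chunk: str, query: str) -> int:
    # Keyword overlap as a stand-in for vector + lexical scoring.
    return len(set(chunk.lower().split()) & set(query.lower().split()))

def retrieve(chunks: list[str], question: str, k: int = 1) -> list[str]:
    # Search each sub-question separately, then merge the top hits.
    hits: list[str] = []
    for sub in decompose(question):
        ranked = sorted(chunks, key=lambda c: score(c, sub), reverse=True)
        for chunk in ranked[:k]:
            if chunk not in hits:
                hits.append(chunk)
    return hits

docs = [
    "Shipping takes 3-5 business days within the US.",
    "The return window is 30 days from purchase.",
]
print(retrieve(docs, "How long does shipping take and what is the return window?"))
```

The compound question pulls in both chunks, even though a single nearest-match lookup would only surface one of them.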
Supported formats include PDF, DOCX, DOC, XLSX, XLS, CSV, TXT, MD, HTML, JSON, JSONL, XML, RTF, ODT, ODS, PPTX, PPT, EPUB, and many more. The chunking service has 10 specialized extractors that handle each format correctly. PDFs with tables get table-aware extraction. Spreadsheets get row-by-row parsing. Code files get syntax-aware chunking.
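One common way to organize per-format extractors is a suffix-keyed registry; the sketch below uses that pattern with two trivial extractors (the registry, decorator, and function names are hypothetical, not NeuroGen's actual code):

```python
from pathlib import Path
from typing import Callable

# Hypothetical registry mapping file suffixes to extractor functions.
EXTRACTORS: dict[str, Callable[[bytes], str]] = {}

def extractor(*suffixes: str):
    def register(fn: Callable[[bytes], str]):
        for suffix in suffixes:
            EXTRACTORS[suffix] = fn
        return fn
    return register

@extractor(".txt", ".md")
def extract_plaintext(data: bytes) -> str:
    return data.decode("utf-8")

@extractor(".csv")
def extract_rows(data: bytes) -> str:
    # Spreadsheet-style formats are parsed row by row so each row
    # becomes its own retrievable line.
    return "\n".join(line for line in data.decode("utf-8").splitlines() if line)

def extract(path: str, data: bytes) -> str:
    suffix = Path(path).suffix.lower()
    if suffix not in EXTRACTORS:
        raise ValueError(f"unsupported format: {suffix}")
    return EXTRACTORS[suffix](data)

print(extract("faq.md", b"# FAQ\nWe ship worldwide."))
```

The real extractors for PDFs or spreadsheets would pull in parsing libraries, but the dispatch shape is the same: pick the extractor by format, never by guesswork.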
You can organize documents into sections within a knowledge base. Cortex tracks which section each chunk belongs to, so you can do targeted searches. Your assistant can navigate the knowledge base intelligently — searching by keyword, browsing by section, or accessing specific document locations — instead of just dumping everything into context.
Guardrails: Three Layers of Protection
Deploying a chatbot on a public platform means people will try to break it. They will try to make it say things it should not, ignore its instructions, or leak your system prompt. We built a three-layer guardrails system to handle this.
You configure the guardrails per assistant. A customer support bot might block all profanity and competitor mentions. An internal knowledge bot might be more permissive. A public-facing marketing bot might block pricing discussions and redirect to the sales team. You set the rules.
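To make "you set the rules" concrete, here is a toy per-assistant guardrail config with an input check. The field names, messages, and matching logic are illustrative assumptions, not NeuroGen's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class GuardrailConfig:
    # Terms that should hard-block a reply.
    blocked_terms: list[str] = field(default_factory=list)
    # Topics that get a canned redirect instead of an LLM answer.
    redirect_topics: dict[str, str] = field(default_factory=dict)

def check_input(cfg: GuardrailConfig, message: str):
    lower = message.lower()
    for term in cfg.blocked_terms:
        if term in lower:
            return False, "I can't help with that."
    for topic, reply in cfg.redirect_topics.items():
        if topic in lower:
            return False, reply
    return True, None

# A public marketing bot that blocks competitor talk and routes
# pricing questions to sales:
marketing_bot = GuardrailConfig(
    blocked_terms=["competitorco"],
    redirect_topics={"pricing": "For pricing, please contact our sales team."},
)
print(check_input(marketing_bot, "What is your pricing?"))
```

A support bot would simply carry a different config object; the filter code stays the same.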
What Happens Behind the Scenes
When a message arrives from Telegram (or any platform), the sequence goes like this:
- The platform webhook hits the integration's blueprint endpoint
- The message is authenticated and validated (webhook signatures, rate limits)
- The guardrails input filter runs
- NeuroGen Cortex searches the knowledge base for relevant context
- The message plus context is sent to the LLM
- The response goes through output validation
- Credits are deducted based on exact token usage
- The response is formatted for the target platform and sent back
- The conversation is logged for history and analytics
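The steps above can be sketched as one handler function. Every service below is a toy in-memory stand-in with a hypothetical name; the real implementations live behind each integration's blueprint:

```python
credits = {"balance": 1000}

def passes_input_filter(message: str) -> bool:
    # Layer of input guardrails: reject obvious injection attempts.
    return "ignore your instructions" not in message.lower()

def search_knowledge_base(message: str) -> list[str]:
    # Cortex retrieval, reduced to a canned chunk for this sketch.
    return ["Shipping takes 3-5 business days."]

def call_llm(message: str, context: list[str]) -> tuple[str, int]:
    # Returns the answer plus the exact token count used for billing.
    return f"Based on our docs: {context[0]}", 42

def validate_output(answer: str) -> str:
    return answer  # output guardrails are a pass-through here

def deduct_credits(tokens_used: int) -> None:
    credits["balance"] -= tokens_used

def handle_webhook(update: dict) -> dict:
    # Steps 1-2 (signature checks, rate limits) are elided here.
    message = update["message"]["text"]
    if not passes_input_filter(message):              # step 3: input filter
        return {"text": "Sorry, I can't help with that."}
    context = search_knowledge_base(message)          # step 4: retrieval
    answer, tokens_used = call_llm(message, context)  # step 5: LLM call
    answer = validate_output(answer)                  # step 6: output check
    deduct_credits(tokens_used)                       # step 7: billing
    return {"text": answer}                           # steps 8-9: format + send

print(handle_webhook({"message": {"text": "How long does shipping take?"}}))
```

Note that a blocked message returns early: no retrieval, no LLM call, and no credits deducted.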
All of this happens in about 1-3 seconds, depending on the model and the size of the knowledge base. The user on Telegram just sees a brief typing indicator followed by an answer.
Each chat integration has its own service file, database model, and blueprint. Telegram alone has dedicated handling for inline keyboards, photo messages, document uploads, group chats, and channel posts. We did not cut corners on platform-specific features.
Pick the Right Model for Your Budget
Model choice matters for chatbots because you are paying per message. A busy Telegram bot that handles 1,000 messages a day can burn through credits fast on an expensive model.
Our recommendation: start with glm-4.7-flashx (the default). It is fast, cheap, and good enough for most Q&A tasks. If you need better reasoning or more nuanced responses, step up to GPT-4o or Claude Sonnet. If you are on a tight budget and the questions are simple, the smaller models work fine.
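The back-of-envelope arithmetic is worth doing before you launch. The model names and per-token prices below are placeholders, not NeuroGen's actual rates; check the live pricing page for real numbers:

```python
# Hypothetical per-1K-token prices -- substitute real rates from the
# pricing page before relying on these numbers.
PRICE_PER_1K_TOKENS = {"budget-model": 0.0002, "premium-model": 0.0100}

def monthly_cost(model: str, messages_per_day: int,
                 tokens_per_message: int = 500, days: int = 30) -> float:
    tokens = messages_per_day * tokens_per_message * days
    return tokens / 1000 * PRICE_PER_1K_TOKENS[model]

# A busy bot handling 1,000 messages/day at ~500 tokens per exchange:
for model in PRICE_PER_1K_TOKENS:
    print(f"{model}: ${monthly_cost(model, 1000):.2f}/month")
```

At these illustrative rates the gap between the cheap and expensive tiers is a factor of 50, which is why the default model is the budget one.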
You can change the model at any time without redeploying. Just update it in the Assistant Builder and the next message will use the new model. No downtime, no token changes, no webhook re-registration.
Deploy Your First Chatbot
Sign up, build an assistant, upload a few docs, and connect Telegram. You will have a live chatbot answering real questions in about 10 minutes.
Start Building