
Scaling a Telegram AI Assistant from Solo to Team

You've built a project-specific Telegram AI assistant for yourself — a Claude, GPT, or similar LLM wrapped behind a Telegram bot, running on your own server. It works well solo. Now you want to share it with your team: developers, advisors, or specialists who'd benefit from on-demand access to the same assistant. The path from one user to a small team has more moving parts than the documentation usually mentions.

This guide covers every step: collecting team member Telegram IDs, extending the allow-list in a way that survives Docker's restart-vs-recreate distinction, opening the right kind of group, configuring trigger detection so the bot doesn't spam, and resolving the privacy trade-off that catches most teams off-guard.

What You'll Need

  • A working Telegram bot that already handles messages for at least one user (you)
  • SSH or shell access to the server running the bot container
  • The Telegram usernames of the team members you want to add
  • BotFather access (the same Telegram account that originally created the bot)
  • About twenty minutes for the technical work, plus async coordination time with team members

What This Guide Fixes

  • Team members can't trigger your bot — "works for me, doesn't work for them"
  • Allow-list changes appear to apply but the bot still ignores new users after a restart
  • Bot spams in group chats — responds to every message instead of relevant ones
  • Confused trigger behavior — when should the bot answer? Mention only? Keyword? Reply?
  • Privacy concerns about cross-user memory leakage you didn't anticipate

Step 1: Collect Each Team Member's Numeric Telegram User ID

Your Telegram AI assistant's allow-list is keyed by numeric user IDs, not usernames or display names. Usernames can change; the numeric ID is stable for the lifetime of the account. You need this ID for every team member you want to add.

The easiest way: ask each team member to start a chat with @userinfobot (a public utility bot). The first message it sends back contains their numeric ID — something like 100000001. Have them copy the ID and send it to you in DM.

Alternatives if @userinfobot is blocked or unavailable in your region:

  • Use your own bot's logs. Temporarily add a "log all attempts" line to your bot's allow-list middleware, ask the team member to send a message to your bot, and read the user ID from your container logs. Remove the logging line after you've collected the ID.
  • Read from a group message. If a user has already messaged in a group your bot is in, the message's from.id field contains their numeric ID — readable via the bot's getUpdates endpoint.
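The getUpdates approach can be sketched without any framework — the field names below (`result`, `message`, `from`, `id`, `username`) are the real Telegram Bot API fields, but the payload and helper function are illustrative:

```python
import json

def extract_user_ids(get_updates_payload: str) -> dict[str, int]:
    """Map usernames (or first names) to numeric IDs from a raw
    getUpdates JSON response, following the Bot API message shape."""
    data = json.loads(get_updates_payload)
    ids = {}
    for update in data.get("result", []):
        msg = update.get("message")
        if not msg:
            continue
        sender = msg.get("from", {})
        if "id" in sender:
            key = sender.get("username") or sender.get("first_name", "unknown")
            ids[key] = sender["id"]
    return ids

# Example payload trimmed to the relevant fields
payload = '''{"ok": true, "result": [
  {"update_id": 1, "message": {"message_id": 10,
    "from": {"id": 100000002, "is_bot": false,
             "first_name": "Ada", "username": "ada_dev"},
    "chat": {"id": -100123, "type": "group"}, "text": "hi"}}]}'''

print(extract_user_ids(payload))  # {'ada_dev': 100000002}
```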

Keep the collected IDs in a safe place — a note, password manager, or directly in your .env file. Treat them like email addresses: they identify a specific human and can be cross-referenced with public Telegram profiles.

Step 2: Extend Your Bot's Allow-List

Most Telegram bot frameworks — aiogram, python-telegram-bot, Telegraf, grammY — implement allow-list checks as middleware. Every incoming update is filtered by sender ID before any handler runs. Non-allowed senders are silently discarded: your bot will receive their messages internally but never respond.
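The filtering step itself is framework-agnostic. A minimal sketch of the check that such middleware performs (function names are illustrative, not from any particular framework):

```python
import os

def load_allowed_users(raw: str) -> set[int]:
    """Parse a CSV allow-list like '111,222,333' into a set of ints."""
    return {int(x) for x in raw.split(",") if x.strip()}

def is_allowed(sender_id: int, allowed: set[int]) -> bool:
    """The middleware check: runs before any handler; non-allowed
    senders are silently dropped, so the bot never responds to them."""
    return sender_id in allowed

allowed = load_allowed_users(os.environ.get("BOT_ALLOWED_USERS", "100000001,100000002"))
print(is_allowed(100000001, allowed))  # True
print(is_allowed(999999999, allowed))  # False
```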

The allow-list itself usually lives in an environment variable, loaded into the container via env_file: in your docker-compose.yml:

BOT_ALLOWED_USERS=100000001,100000002,100000003
BOT_OPERATOR_ID=100000001
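For reference, the env_file: wiring in Compose typically looks like this — a minimal sketch where the service name and image are placeholders:

```yaml
# docker-compose.yml (fragment)
services:
  your-bot-service-name:
    image: your-bot-image:latest
    env_file:
      - .env          # BOT_ALLOWED_USERS and BOT_OPERATOR_ID live here
    restart: unless-stopped
```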

Two formats are commonly used:

  • CSV list: BOT_ALLOWED_USERS=111,222,333. Parsed by the bot's settings code as list[int].
  • JSON array: BOT_ALLOWED_USERS=[111,222,333]. Used when the settings parser expects JSON for complex types.

If your bot uses Python and pydantic-settings for configuration, the JSON-array form is the safer choice — even with a single user, prefer BOT_ALLOWED_USERS=[100000001] over BOT_ALLOWED_USERS=100000001. The reason is detailed in the pydantic sub-section further down, but the short version is that the JSON form bypasses a parser ambiguity that crashes your container on startup.

Step 3: Restart Your Container the Right Way

This is where most allow-list updates go silently wrong. You edit .env, run docker compose restart your-bot, watch the container come back up — and the new users still can't trigger the bot. The change "didn't take."

The reason: docker compose restart only stops and starts the existing container. It does not recreate it. Environment variables — including everything from env_file: — are injected into a container at create-time, not at start-time. A restart preserves the original env-var snapshot. Your edited .env file is irrelevant to a restarted container.

The correct command:

docker compose up -d --force-recreate --no-deps your-bot-service-name

What each flag does:

  • --force-recreate stops the old container, removes it, and creates a fresh one with the current Compose spec — including newly-edited env_file: contents.
  • --no-deps prevents Compose from also recreating any services your bot depends on (databases, message queues). If your bot has no depends_on, this flag is a no-op but harmless.
  • -d runs the recreated container detached, so your terminal returns immediately.

Verify the recreate succeeded by checking the container's uptime:

docker ps --filter name=your-bot --format "{{.Status}}"
# Expected: "Up 10 seconds" (not "Up 4 hours")

If the status shows the same long uptime as before, the recreate didn't happen — check for typos in the service name or whether you ran the command from the correct directory.

This pattern applies to any Docker Compose service whose configuration lives in an env-file: API gateways, workers, scrapers, monitoring agents. The same trap catches you each time. We've documented broader patterns for managing many such containers in our routine health-check guide for Dockerized infrastructure.

Pydantic-Settings: A Parser Trap to Know About

If your bot's Python settings layer uses pydantic-settings — the standard settings library for Pydantic v2 — and you declare your allow-list as list[int], you'll hit a parser issue worth understanding before it bites.

Pydantic-settings treats complex types (list, dict, tuple) as JSON-encoded by default. When it reads BOT_ALLOWED_USERS=111,222 from your env-file, it first attempts json.loads("111,222"). That fails with JSONDecodeError: Extra data because plain CSV isn't valid JSON. Your container crashes on startup with a SettingsError: error parsing value for field.
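The failure mode is easy to reproduce in isolation — this is the decode step pydantic-settings effectively attempts for a list[int] field:

```python
import json

try:
    json.loads("111,222")          # plain CSV from the env-file
except json.JSONDecodeError as e:
    print(f"CSV form fails: {e}")  # "Extra data" — parsing stops after 111

print(json.loads("[111,222]"))     # JSON-array form decodes cleanly
```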

If you have a custom BeforeValidator attached to the field that knows how to parse CSV, you might assume it runs first and intercepts the raw string before the JSON-decode attempt. It doesn't. Pydantic-settings applies the JSON-decode step before any field-level validators for complex types.

You have two workarounds:

Quick fix — JSON-array syntax in the env-file:

BOT_ALLOWED_USERS=[100000001,100000002,100000003]

This is valid JSON. Pydantic-settings decodes it into a list of ints directly. No validator needed. The trade-off is purely cosmetic: brackets around a list.

Permanent fix — annotate the field with NoDecode:

from typing import Annotated

from pydantic import BeforeValidator
from pydantic_settings import BaseSettings, NoDecode

def parse_csv(v):
    # The raw string reaches this validator because NoDecode
    # suppresses the JSON-decode step
    if isinstance(v, str):
        return [int(x.strip()) for x in v.split(",")]
    return v

class Settings(BaseSettings):
    bot_allowed_users: Annotated[list[int], NoDecode, BeforeValidator(parse_csv)]

NoDecode suppresses the JSON-decode step entirely. Your BeforeValidator receives the raw string and parses it as CSV. This is the cleaner fix if you control the settings code.

The underlying issue is tracked in pydantic-settings issue #157, with related discussion in #184 and #570. The behavior is consistent across all currently-released versions of pydantic-settings (v2.x). If you don't control the settings code — using a third-party bot framework — use the JSON-array syntax workaround.

Step 4: Open a Telegram Group with Your Bot and Team

Telegram has two kinds of groups for this use case:

  • Regular group: up to 200 members, simple admin model, no advanced features. Good for small teams.
  • Supergroup: up to 200,000 members, fine-grained admin permissions, threaded discussions, message history persistence. Convert a regular group later if you grow into one.

For team workflows up to a dozen members, a regular group is enough. Steps:

  • In your Telegram app, tap "New Group" and select your team members from your contacts
  • Name the group something descriptive — "Project X — AI Assistant", "Engineering Bot Workspace", etc.
  • Once created, open the group's settings, tap "Add Member," search for your bot's username (@your_bot_name), and add it
  • Promote the bot to admin only if it needs admin actions (deleting messages, pinning messages). For pure question-and-answer use, regular member status is enough

If you're managing multiple project-specific bots across multiple teams (we maintain a handful of these on shared infrastructure), the multi-tenant patterns we use are documented in our multi-tenant Docker development stack guide.

Step 5: Configure Trigger Detection

By default, a Telegram bot in a group only receives messages that explicitly mention it (@your_bot_name), reply to its messages, or use a slash-command. Telegram calls this "Privacy Mode," and it's enabled by default — a sensible default that prevents accidental bot-spam.

But for a Telegram AI assistant that should respond to natural questions ("Hey bot, what's the deployment status?" without an explicit mention), privacy mode is too restrictive. You have two paths:

Path A: Keep Privacy Mode on, train your team to @mention. Simple, no config change needed. The bot only sees what it should respond to. Downside: friction. Team members forget the @ and the bot stays silent.

Path B: Disable Privacy Mode and implement your own trigger logic. Open BotFather, send /setprivacy, choose your bot, set to Disable. The bot now receives every group message. You implement the "should I respond?" check yourself.

A practical trigger set we use in production:

  • Direct message: always respond — you're talking to the bot one-on-one
  • @mention in group: always respond — explicit invocation
  • Reply to one of the bot's messages: always respond — continuation of a thread the bot started
  • Group message containing the bot's trigger word: respond. The trigger word is typically the bot's nickname or project name, matched with word-boundary regex so "advisor" doesn't accidentally trigger on "advisory"
  • Everything else: silently log to a conversation file, no response

The silent-log part matters. Even when the bot doesn't respond, it still sees the group's conversation flow. Logging every message into a per-chat file gives the bot future context — when someone finally @mentions it with a question like "what did we decide?", the bot has the recent conversation available as context to reason over.

Implementation depends on your framework. In aiogram, a single message handler runs all five checks before deciding whether to call your LLM and reply. In Telegraf or grammY, the pattern is identical — a bot.on('message') handler that filters explicitly before reacting.
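Stripped of framework specifics, the five checks reduce to a single predicate. In this sketch the message is a plain dict mirroring the Bot API shape; BOT_USERNAME and TRIGGER_WORD are hypothetical placeholders you'd set for your own bot:

```python
import re

BOT_USERNAME = "your_bot_name"   # placeholder: your bot's actual username
TRIGGER_WORD = "advisor"         # placeholder: your bot's trigger word

def should_respond(msg: dict, bot_id: int) -> bool:
    """Decide whether the bot should answer, mirroring the five-rule
    trigger set above. `msg` follows the Telegram Bot API message shape."""
    # 1. Direct message: always respond
    if msg["chat"]["type"] == "private":
        return True
    text = msg.get("text", "")
    # 2. Explicit @mention
    if f"@{BOT_USERNAME}" in text:
        return True
    # 3. Reply to one of the bot's own messages
    reply = msg.get("reply_to_message")
    if reply and reply.get("from", {}).get("id") == bot_id:
        return True
    # 4. Trigger word with word boundaries ("advisor", not "advisory")
    if re.search(rf"\b{re.escape(TRIGGER_WORD)}\b", text, re.IGNORECASE):
        return True
    # 5. Everything else: the caller logs silently and stays quiet
    return False

group = {"chat": {"type": "group"}, "text": "ask the advisor about deploys"}
print(should_respond(group, bot_id=42))   # True (trigger word)
noise = {"chat": {"type": "group"}, "text": "advisory board meets Friday"}
print(should_respond(noise, bot_id=42))   # False (word boundary holds)
```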

Step 6: Resolve the Privacy Trade-Off

Here's the question most teams don't think about until it becomes a problem: does your bot maintain separate memory per user, per group, or globally across all conversations?

Three patterns are common:

  • Per-chat memory: the bot starts a fresh session for every chat. DM with user A is independent from DM with user B, and both are independent from group X. Maximum privacy. Downside: the bot doesn't remember context across sessions, which limits its usefulness as an "assistant that knows our project"
  • Per-user memory: the bot maintains separate memory threads per user, but shares them across DMs and group mentions from that same user. Reasonable middle ground
  • Global memory: the bot has one session that all conversations contribute to. Maximum context-sharing — DMs and group conversations all build the same long-term memory. Downside: privacy leaks. A confidential thing one team member tells the bot in DM can surface in a group answer to another team member's question

Each pattern is defensible. Each has trade-offs your team needs to agree on before going multi-user.
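The three patterns differ only in how the session key is derived. A minimal sketch — the key formats and function name here are illustrative, not from any framework:

```python
def memory_key(chat_id: int, user_id: int, mode: str) -> str:
    """Derive the session/memory key under each of the three models."""
    if mode == "per-chat":
        return f"chat:{chat_id}"   # every DM and group fully isolated
    if mode == "per-user":
        return f"user:{user_id}"   # follows the human across chats
    if mode == "global":
        return "global"            # one shared long-term memory
    raise ValueError(f"unknown memory mode: {mode}")

print(memory_key(chat_id=-100123, user_id=100000001, mode="per-chat"))  # chat:-100123
print(memory_key(chat_id=-100123, user_id=100000001, mode="global"))    # global
```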

If you pick global memory — we do, for tight-knit teams where context-sharing is part of the value proposition — be explicit with your team before they start using the bot: "Anything you tell this bot may surface in answers visible to the whole group. Treat it as a shared workspace, not a private confidant."

If you pick per-chat memory, you give up cross-context reasoning ("what did we decide last week about X?") but you avoid the leak risk entirely.

This is a design choice with real social consequences, not a technical knob you can change later without team-wide alignment. We discuss similar trade-offs in our broader writeup on AI agent skills for domain-specific workflows, where shared-context configurations show up in every customer engagement we run.

Scaling Your Telegram AI Assistant Beyond a Small Team

The env-file allow-list pattern works cleanly for teams up to roughly twenty to thirty users. Beyond that, hardcoded entries become painful — every onboarding requires a git commit (if your env is checked in via SOPS or a similar secrets-management layer), a deploy, and a container recreate.

Patterns that scale further:

  • Database-backed allow-list: users live in a SQL table, the bot reads and caches the list at startup, then refreshes periodically (or via a webhook on user changes). Onboarding a user becomes an INSERT statement — no deploy needed
  • Group-membership-based access: instead of allowing individual users, allow any user who's a member of a specific Telegram group (or a small set of groups). Group membership becomes the access boundary. Telegram's getChatMember API confirms membership before each invocation
  • Channel-based: for read-only assistants (daily summaries, alerts, monitoring digests), use a Telegram channel rather than a group. Channels have a different permission model — only admins post, others read. Useful when there's a one-to-many fan-out rather than back-and-forth conversation
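The database-backed pattern can be sketched with sqlite — table and column names are illustrative; the point is that onboarding becomes an INSERT plus a cache refresh, with no container recreate:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in production: a file or SQL server
conn.execute("CREATE TABLE allowed_users (user_id INTEGER PRIMARY KEY, note TEXT)")
conn.execute("INSERT INTO allowed_users VALUES (100000001, 'operator')")
conn.commit()

def load_allow_list(db: sqlite3.Connection) -> set[int]:
    """Read at startup, cache in memory, refresh periodically."""
    return {row[0] for row in db.execute("SELECT user_id FROM allowed_users")}

cache = load_allow_list(conn)
print(100000001 in cache)  # True

# Onboarding a new team member: one INSERT, then a cache refresh
conn.execute("INSERT INTO allowed_users VALUES (100000002, 'new advisor')")
conn.commit()
cache = load_allow_list(conn)
print(100000002 in cache)  # True
```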

For small-team workflows — developers, advisors, occasional specialists — the env-file allow-list pattern is enough. We use it across most of our internal infrastructure and treat the database-backed variant as a refactor we do once a project demonstrably outgrows the simpler form.

Closing Checklist

Before your team starts using the bot, verify:

  • All team members' numeric Telegram IDs collected and added to the allow-list
  • Allow-list format matches what your settings parser expects — JSON-array if pydantic-settings is in the stack
  • Container recreated (not just restarted) so the new env-vars are loaded into a fresh container
  • Telegram group created with bot added as member (and promoted to admin if it needs admin actions)
  • BotFather privacy mode configured to match your trigger strategy — disabled only if you've implemented your own filter logic
  • Trigger logic in your bot code aligned with the privacy mode (don't disable privacy without filtering, or the bot will spam every group message)
  • Memory model (per-chat / per-user / global) chosen and communicated to the team
  • Privacy expectations explicitly set with team members before they start using the bot

If you're building a project-specific assistant from scratch and want to understand how the underlying bot infrastructure fits together, our companion writeup on building a project-specific AI assistant via Telegram covers the foundation. For email infrastructure that often accompanies these setups — notifications, escalation paths, audit trails — see our writeup on project-specific mailbox setup with DKIM and DMARC.

If you're running this kind of multi-user Telegram AI assistant infrastructure for clients or your own team and want help with the rollout, get in touch. We build and operate this kind of setup as part of our project work.

