docs: add D1 and Cron guides, update module contract across docs

- docs/using-d1.md and docs/using-cron.md for module authors
- architecture, codebase-summary, adding-a-module, code-standards, deployment-guide refreshed
- CLAUDE.md module contract shows optional crons[] and sql in init
- docs/todo.md tracks manual follow-ups (D1 UUID, first deploy, smoke tests)
2026-04-15 13:29:31 +07:00
parent 97ee30590a
commit f5e03cfff2
10 changed files with 985 additions and 81 deletions


@@ -6,12 +6,15 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
```bash
npm run dev                       # local dev server (wrangler dev) at http://localhost:8787
npm run lint                      # biome check src tests scripts + eslint src
npm run format                    # biome format --write
npm test                          # vitest run (all tests)
npx vitest run tests/modules/trading/format.test.js   # single test file
npx vitest run -t "formats with dot"                  # single test by name
npm run db:migrate                # apply migrations to D1 (prod)
npm run db:migrate -- --local     # apply to local dev D1
npm run db:migrate -- --dry-run   # preview without applying
npm run deploy                    # wrangler deploy + db:migrate + register webhook/commands
npm run register:dry              # preview setWebhook + setMyCommands payloads without calling Telegram
```
@@ -40,17 +43,25 @@ grammY Telegram bot on Cloudflare Workers. Modules are plug-n-play: each module
```js
{
  name: "mymod",                              // must match folder + import map key
  init: async ({ db, sql, env }) => { ... },  // optional — db: KVStore, sql: SqlStore|null
  commands: [{
    name: "mycmd",                  // ^[a-z0-9_]{1,32}$, no leading slash
    visibility: "public",           // "public" | "protected" | "private"
    description: "Does a thing",    // required for all visibilities
    handler: async (ctx) => { ... },          // grammY context
  }],
  crons: [{                         // optional scheduled jobs
    schedule: "0 17 * * *",         // cron expression
    name: "daily-cleanup",          // unique within module
    handler: async (event, ctx) => { ... },   // receives { db, sql, env }
  }],
}
```

- Command names must be globally unique across ALL modules and visibilities. Conflicts throw at load time.
- Cron schedules declared here MUST also be registered in `wrangler.toml` `[triggers] crons`.
- For D1 setup (migrations, table naming), see [`docs/using-d1.md`](docs/using-d1.md).
- For cron syntax and testing, see [`docs/using-cron.md`](docs/using-cron.md).
## Testing


@@ -8,9 +8,10 @@ Modules are added or removed via a single `MODULES` env var. Each module registe
- **Drop-in modules.** Write a single file, list the folder name in `MODULES`, redeploy. No registration boilerplate, no manual command wiring.
- **Three visibility levels out of the box.** Public commands show in Telegram's `/` menu and `/help`; protected show only in `/help`; private are hidden slash-command easter eggs. One namespace, loud conflict detection.
- **Dual storage backends.** Modules talk to a small `KVStore` interface (Cloudflare KV for simple state) or `SqlStore` interface (D1 for relational data, scans, leaderboards). Swappable with one-file changes.
- **Scheduled jobs.** Modules declare cron-based cleanup, stats refresh, or maintenance tasks — registered via `wrangler.toml` and dispatched automatically.
- **Zero admin surface.** No in-Worker `/admin/*` routes, no admin secret. `setWebhook` + `setMyCommands` run at deploy time from a local node script.
- **Tested.** 105+ vitest unit tests cover registry, storage, dispatcher, cron validation, help renderer, validators, HTML escaping, and the trading module.

## How a request flows
@@ -42,27 +43,44 @@ ctx.reply(...) → response back to Telegram
```
src/
├── index.js                    # fetch + scheduled handlers: POST /webhook + cron triggers
├── bot.js                      # memoized grammY Bot, lazy dispatcher + registry install
├── types.js                    # JSDoc typedefs (central: Env, Module, Command, Cron, etc.)
├── db/
│   ├── kv-store-interface.js   # KVStore contract (JSDoc)
│   ├── cf-kv-store.js          # Cloudflare KV implementation
│   ├── create-store.js         # KV per-module prefixing factory
│   ├── sql-store-interface.js  # SqlStore contract (JSDoc)
│   ├── cf-sql-store.js         # Cloudflare D1 implementation
│   └── create-sql-store.js     # D1 per-module prefixing factory
├── modules/
│   ├── index.js                # static import map — register new modules here
│   ├── registry.js             # load, validate, build command + cron tables
│   ├── dispatcher.js           # wires every command via bot.command()
│   ├── cron-dispatcher.js      # dispatches cron handlers by schedule match
│   ├── validate-command.js     # command contract validator
│   ├── validate-cron.js        # cron contract validator
│   ├── util/                   # /info, /help (fully implemented)
│   ├── trading/                # paper trading — VN stocks (D1 storage, daily cron)
│   │   └── migrations/
│   │       └── 0001_trades.sql
│   ├── wordle/                 # stub — proves plugin system
│   ├── loldle/                 # stub
│   └── misc/                   # stub (KV storage)
└── util/
    └── escape-html.js
scripts/
├── register.js                 # post-deploy: setWebhook + setMyCommands
├── migrate.js                  # discover + apply D1 migrations
└── stub-kv.js                  # no-op KV binding for deploy-time registry build
tests/
└── fakes/
    ├── fake-kv-namespace.js
    ├── fake-d1.js              # in-memory SQL for testing
    ├── fake-bot.js
    └── fake-modules.js
```

## Command visibility
@@ -122,11 +140,12 @@ Command names must match `^[a-z0-9_]{1,32}$` (Telegram's slash-command limit). C
```bash
npm run dev          # wrangler dev — runs the Worker at http://localhost:8787
npm run lint         # biome check + eslint
npm test             # vitest
npm run db:migrate   # apply D1 migrations (--local for local dev, --dry-run to preview)
```

The local `wrangler dev` server exposes `GET /` (health), `POST /webhook` (Telegram), and `/__scheduled?cron=...` (cron simulation). For end-to-end testing you'd ngrok/cloudflared the local port and point a test bot's `setWebhook` at it — but pure unit tests (`npm test`) cover the logic seams without Telegram.

## Deploy
@@ -136,17 +155,19 @@ Single command, idempotent:
npm run deploy
```

That runs `wrangler deploy`, applies D1 migrations, then `scripts/register.js`, which calls Telegram's `setWebhook` + `setMyCommands` using values from `.env.deploy`.

First-time deploy flow:

1. Create D1 database: `npx wrangler d1 create miti99bot-db` and paste ID into `wrangler.toml`.
2. Run `wrangler deploy` once to learn the `*.workers.dev` URL printed at the end.
3. Paste it into `.env.deploy` as `WORKER_URL`.
4. Apply migrations: `npm run db:migrate`.
5. Preview the register payloads without calling Telegram:

```bash
npm run register:dry
```

6. Run the real deploy:

```bash
npm run deploy
```
@@ -177,6 +198,10 @@ TL;DR:
## Further reading

- [`docs/architecture.md`](docs/architecture.md) — deeper dive: cold-start, module lifecycle, KV + D1 storage, cron dispatch, deploy flow, design tradeoffs.
- [`docs/adding-a-module.md`](docs/adding-a-module.md) — step-by-step guide to authoring a new module (commands, KV storage, D1 + migrations, crons).
- [`docs/using-d1.md`](docs/using-d1.md) — when to use D1, writing migrations, SQL API reference, worked examples.
- [`docs/using-cron.md`](docs/using-cron.md) — scheduling syntax, handler signature, wrangler.toml registration, local testing, worked examples.
- [`docs/deployment-guide.md`](docs/deployment-guide.md) — D1 + KV setup, migration, secret rotation, rollback.
- `plans/260415-1010-d1-cron-infra/` — phased implementation plan for D1 + cron support (6 phases + reports).
- `plans/260411-0853-telegram-bot-plugin-framework/` — original plugin framework implementation plan (9 phases + reports).


@@ -143,9 +143,77 @@ Each command is:
Private commands are still slash commands — users type `/mycmd`. They're simply absent from Telegram's `/` popup and from `/help` output.
## Optional: D1 Storage
If your module needs a SQL database for relational queries, scans, or append-only history, add an `init` hook that receives `sql`:
```js
/** @type {import("../../db/kv-store-interface.js").KVStore | null} */
let db = null;
/** @type {import("../../db/sql-store-interface.js").SqlStore | null} */
let sql = null;

const myModule = {
  name: "mymod",
  init: async ({ db: store, sql: sqlStore, env }) => {
    db = store;
    sql = sqlStore; // null when env.DB is not configured
  },
  commands: [ /* ... */ ],
};
```
Create migration files in `src/modules/<name>/migrations/`:
```
src/modules/mymod/migrations/
├── 0001_initial.sql
├── 0002_add_index.sql
└── ...
```
Run migrations at deploy time:
```bash
npm run db:migrate # production
npm run db:migrate -- --local # local dev
npm run db:migrate -- --dry-run # preview
```
For full details on D1 usage, table naming, and the SQL API, see [`docs/using-d1.md`](./using-d1.md).
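Because `sql` can be `null`, handlers should guard before touching it. Here is a minimal sketch of that pattern — the module, table, and command names are hypothetical, and the `all()` call assumes the `SqlStore` shape described in this repo's docs:

```js
// Hypothetical /myhistory command handler factory. `getSql` returns the
// closure variable set in init (may be null if env.DB is not bound).
function makeHistoryHandler(getSql) {
  return async (ctx) => {
    const sql = getSql();
    if (!sql) {
      // env.DB not bound — degrade gracefully instead of throwing
      await ctx.reply("History unavailable: no SQL backend configured.");
      return;
    }
    const rows = await sql.all(
      "SELECT symbol, qty FROM mymod_history WHERE user_id = ? ORDER BY ts DESC LIMIT 5",
      ctx.from.id
    );
    await ctx.reply(rows.map((r) => `${r.symbol} x${r.qty}`).join("\n") || "No history yet.");
  };
}
```

The factory shape keeps the handler unit-testable with a fake `SqlStore` and a fake grammY context.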
## Optional: Scheduled Jobs
If your module needs to run maintenance tasks (cleanup, stats refresh) on a schedule, add a `crons` array:
```js
const myModule = {
name: "mymod",
init: async ({ db, sql, env }) => { /* ... */ },
commands: [ /* ... */ ],
crons: [
{
schedule: "0 2 * * *", // 2 AM UTC daily
name: "daily-cleanup",
      handler: async (event, { db, sql, env }) => {
        // handler receives the same context as init
        if (!sql) return; // D1 not bound
        const cutoff = Date.now() - 30 * 24 * 60 * 60 * 1000; // e.g. keep 30 days
        await sql.run("DELETE FROM mymod_old WHERE created < ?", cutoff);
      },
},
},
],
};
```
**Important:** Every cron schedule declared in a module MUST also be registered in `wrangler.toml`:
```toml
[triggers]
crons = ["0 2 * * *"] # matches module declaration
```
For full details on cron syntax, local testing, and worked examples, see [`docs/using-cron.md`](./using-cron.md).
## Testing your module

Add a test in `tests/modules/<name>.test.js` or extend an existing suite. The `tests/fakes/` directory provides `fake-kv-namespace.js`, `fake-bot.js`, `fake-d1.js`, and `fake-modules.js` for hermetic unit tests that don't touch Cloudflare or Telegram.
Run: Run:
@@ -163,4 +231,4 @@ This prints the `setMyCommands` payload your module will push to Telegram — a
## Full example

See `src/modules/misc/index.js` — it's a minimal module that uses the DB (`putJSON` / `getJSON` via `/ping` + `/mstats`) and registers one command at each visibility level. Copy it as a starting point for your own module. See `src/modules/trading/` for a full example with D1 storage and scheduled crons.


@@ -69,7 +69,7 @@ Every module is a single default export with this shape:
```js
export default {
  name: "wordle",                             // must match folder + import map key
  init: async ({ db, sql, env }) => { ... },  // optional, called once at build time
  commands: [
    {
      name: "wordle",             // ^[a-z0-9_]{1,32}$, no leading slash
@@ -79,12 +79,20 @@ export default {
    },
    // ...
  ],
crons: [ // optional scheduled jobs
{
schedule: "0 2 * * *", // cron expression
name: "cleanup", // unique within module
handler: async (event, ctx) => { ... }, // receives { db, sql, env }
},
],
};
```

- The command name regex is **uniform** across all visibility levels. A private command is still a slash command (`/konami`) — it is simply absent from Telegram's `/` menu and from `/help` output. It is NOT a hidden text-match easter egg.
- `description` is required for **all** visibilities. Private descriptions never reach Telegram; they exist so the registry remains self-documenting for debugging.
- `init({ db, sql, env })` is the one place where a module should do setup work. The `db` parameter is a `KVStore` whose keys are automatically prefixed with `<moduleName>:`. The `sql` parameter is a `SqlStore` (or `null` if `env.DB` is not bound) — for relational data. `env` is the raw worker env (read-only by convention).
- `crons` is optional. Each entry declares a scheduled job; the schedule MUST also be registered in `wrangler.toml` `[triggers] crons`.
Validation runs per-command at registry load, and cross-module conflict detection runs at the same step. Any violation throws — deployment fails loudly before any request is served.
@@ -149,9 +157,13 @@ Every command — public, protected, **and private** — is registered via `bot.
There is no custom text-match middleware, no `bot.on("message:text", ...)` handler, no private-command-specific path. One routing path for all three visibilities. This is what reduced the original two-path design (slash + text-match) to one during the revision pass.

## 8. Storage: KVStore and SqlStore

Modules NEVER touch `env.KV` or `env.DB` directly. They receive prefixed stores from the module context.
### KVStore (key-value, fast reads/writes)
For simple state and blobs, use `db` (a `KVStore`):
```js
// In a module's init:
@@ -175,7 +187,7 @@ getJSON(key) // → any | null (swallows corrupt JSON)
putJSON(key, value, { expirationTtl? })
```

#### Prefix mechanics

`createStore("wordle", env)` returns a wrapped store where every key is rewritten:
@@ -189,15 +201,40 @@ list({prefix:"games:"})──► list({prefix:"wordle:games:"}) (then strips "
Two stores for different modules cannot read each other's data unless they reconstruct prefixes by hand — a code-review boundary, not a cryptographic one.
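The prefix rewriting can be sketched as a thin wrapper — a minimal illustration assuming the `get`/`put`/`delete`/`list` shape above; the real `create-store.js` may differ in detail:

```js
// Sketch of a per-module prefixing factory: every key is rewritten to
// `<moduleName>:<key>` on the way in, and stripped again on list() results.
function prefixStore(moduleName, inner) {
  const p = `${moduleName}:`;
  return {
    get: (key) => inner.get(p + key),
    put: (key, value, opts) => inner.put(p + key, value, opts),
    delete: (key) => inner.delete(p + key),
    list: async ({ prefix = "" } = {}) => {
      const res = await inner.list({ prefix: p + prefix });
      return { ...res, keys: res.keys.map((k) => ({ ...k, name: k.name.slice(p.length) })) };
    },
  };
}
```

Because the wrapper closes over `moduleName`, two modules wrapping the same underlying namespace still see disjoint key spaces.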
### SqlStore (relational, scans, append-only history)

For complex queries, aggregates, or audit logs, use `sql` (a `SqlStore`):

```js
// In a module's init:
init: async ({ sql }) => {
sqlStore = sql; // null if env.DB not bound
},
// In a handler or cron:
const trades = await sqlStore.all(
"SELECT * FROM trading_trades WHERE user_id = ? ORDER BY ts DESC LIMIT 10",
userId
);
```
The interface (full JSDoc in `src/db/sql-store-interface.js`):
```js
run(query, ...binds) // INSERT/UPDATE/DELETE — returns { changes, last_row_id }
all(query, ...binds) // SELECT all rows → array of objects
first(query, ...binds) // SELECT first row → object | null
prepare(query, ...binds) // Prepared statement for batch operations
batch(statements) // Execute multiple statements in one round-trip
```
All tables must follow the naming convention `{moduleName}_{table}` (e.g., `trading_trades`).
Tables are created via migrations in `src/modules/<name>/migrations/*.sql`. The migration runner (`scripts/migrate.js`) applies them on deploy and tracks them in `_migrations` table.
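The `{moduleName}_{table}` convention is simple enough to check mechanically — for example in a hypothetical migration linter (this helper is illustrative, not part of the codebase):

```js
// Does `tableName` belong to `moduleName` under the naming convention?
// e.g. isModuleTable("trading", "trading_trades") → true
function isModuleTable(moduleName, tableName) {
  return new RegExp(`^${moduleName}_[a-z0-9_]+$`).test(tableName);
}
```

A check like this keeps one module's migrations from silently creating (or dropping) another module's tables.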
### Swapping the backends
To replace Cloudflare KV with a different store (e.g. Upstash Redis, Postgres):
1. Create a new `src/db/<name>-store.js` that implements the `KVStore` interface.
2. Change the one `new CFKVStore(env.KV)` line in `src/db/create-store.js` to construct your new adapter.
@@ -205,7 +242,15 @@ To replace Cloudflare KV with a different store (e.g. Upstash Redis, D1, Postgre
That's the full change. No module code moves.

To replace D1 with a different SQL backend:
1. Create a new `src/db/<name>-sql-store.js` that implements the `SqlStore` interface.
2. Change the one `new CFSqlStore(env.DB)` line in `src/db/create-sql-store.js` to construct your new adapter.
3. Update `wrangler.toml` bindings.
## 9. HTTP and Scheduled Entry Points
### Webhook (HTTP)
```js
// src/index.js — simplified
@@ -221,12 +266,48 @@ export default {
    }
    return new Response("not found", { status: 404 });
  },
async scheduled(event, env, ctx) {
// Cloudflare cron trigger
const registry = await getRegistry(env);
dispatchScheduled(event, env, ctx, registry);
},
};
```

`getWebhookHandler` is memoized and constructs `webhookCallback(bot, "cloudflare-mod", { secretToken: env.TELEGRAM_WEBHOOK_SECRET })` once. grammY's `webhookCallback` validates the `X-Telegram-Bot-Api-Secret-Token` header on every request, so a missing or mismatched secret returns `401` before the update reaches any handler.

### Scheduled (Cron)
Cloudflare fires cron triggers specified in `wrangler.toml` `[triggers] crons`. The `scheduled(event, env, ctx)` handler receives:
- `event.cron` — the schedule string (e.g., "0 17 * * *")
- `event.scheduledTime` — Unix timestamp (ms) when the trigger fired
- `ctx.waitUntil(promise)` — keeps the handler alive until promise resolves
Flow:
```
Cloudflare cron trigger
scheduled(event, env, ctx)
├── getRegistry(env) — build registry (same as HTTP)
│ └── load + init all modules
└── dispatchScheduled(event, env, ctx, registry)
├── filter registry.crons by event.cron match
└── for each matching cron:
├── createStore(moduleName, env) — KV store
├── createSqlStore(moduleName, env) — D1 store
└── ctx.waitUntil(handler(event, { db, sql, env }))
└── wrapped in try/catch for isolation
```
Each handler fires independently. If one fails, others still run.
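The dispatch-and-isolate step can be sketched as follows — the registry and cron shapes here are assumptions based on this section, and the real `cron-dispatcher.js` may differ:

```js
// Dispatch every cron whose declared schedule exactly matches event.cron.
// Each handler runs under its own waitUntil with its own catch, so one
// failing handler cannot stop the others.
function dispatchScheduled(event, env, ctx, registry, makeStores) {
  for (const cron of registry.crons) {
    if (cron.schedule !== event.cron) continue; // exact string match
    const { db, sql } = makeStores(cron.moduleName, env);
    ctx.waitUntil(
      Promise.resolve()
        .then(() => cron.handler(event, { db, sql, env }))
        .catch((err) => console.error(`cron ${cron.moduleName}/${cron.name} failed`, err))
    );
  }
}
```

Wrapping the handler call in `Promise.resolve().then(...)` also catches synchronous throws, not just rejected promises.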
## 10. Deploy flow and the register script


@@ -20,6 +20,25 @@ Enforced by `npm run lint` / `npm run format`:
Run `npm run format` before committing.
## JSDoc & Type Definitions
- **Central typedefs location:** `src/types.js` — all module-level typedefs live here (Env, Module, Command, Cron, ModuleContext, SqlStore, KVStore, Trade, Portfolio, etc.).
- **When to add JSDoc:** Required on exported functions, types, and public module interfaces. Optional on internal helpers (< 5 lines, obviously self-documenting).
- **Validation:** ESLint (`eslint src`) enforces valid JSDoc syntax. Run `npm run lint` to check.
- **No TypeScript:** JSDoc + `.js` files only. Full type info available to editor tooling without a build step.
- **Example:**
```js
/**
* Validate a trade before insertion.
*
* @param {Trade} trade
* @returns {boolean}
*/
function isValidTrade(trade) {
return trade.qty > 0 && trade.priceVnd > 0;
}
```
## File Organization

- **Max 200 lines per code file.** Split into focused submodules when approaching the limit.
@@ -44,14 +63,17 @@ Every module default export must have:
export default {
  name: "modname",              // === folder name === import map key
  commands: [...],              // validated at load time
  init: async ({ db, sql, env }) => { ... },  // optional
crons: [...], // optional scheduled jobs
};
```

- Store module-level `db` and `sql` references in closure variables, set during `init`
- Never access `env.KV` or `env.DB` directly — always use the prefixed `db` (KV) or `sql` (D1) from `init`
- `sql` is `null` when `env.DB` is not bound — always guard with `if (!sql) return`
- Command handlers receive grammY `ctx` — use `ctx.match` for command arguments, `ctx.from.id` for user identity
- Reply with `ctx.reply(text)` — plain text or Telegram HTML
- Cron handlers receive `(event, { db, sql, env })` — same context as `init`
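The bullets above can be sketched as a minimal module skeleton — names are hypothetical, and in a real module the object would be the file's default export:

```js
// Closure pattern: module-level refs set once in init, used by handlers.
let db = null;   // prefixed KVStore, set in init
let sql = null;  // SqlStore, or null when env.DB is unbound

const myModule = {
  name: "mymod",
  init: async (context) => {
    db = context.db;
    sql = context.sql;
  },
  commands: [{
    name: "mycount",
    visibility: "public",
    description: "Counts invocations",
    handler: async (ctx) => {
      const n = ((await db.getJSON("count")) ?? 0) + 1;
      await db.putJSON("count", n);
      await ctx.reply(`Called ${n} times`);
    },
  }],
};
```

Because `db`/`sql` live in the module closure rather than on `ctx`, every command and cron handler in the file shares the same prefixed stores.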
## Error Handling


@@ -17,13 +17,13 @@ Telegram bot on Cloudflare Workers with a plug-n-play module system. grammY hand
## Active Modules

| Module | Status | Commands | Storage | Crons | Description |
|--------|--------|----------|---------|-------|-------------|
| `util` | Complete | `/info`, `/help` | — | — | Bot info and command help renderer |
| `trading` | Complete | `/trade_topup`, `/trade_buy`, `/trade_sell`, `/trade_convert`, `/trade_stats`, `/history` | D1 (trades) | Daily 5PM trim | Paper trading — VN stocks with dynamic symbol resolution. Crypto/gold/forex coming soon. |
| `misc` | Stub | `/ping`, `/mstats`, `/fortytwo` | KV | — | Health check + DB demo |
| `wordle` | Stub | `/wordle`, `/wstats`, `/konami` | — | — | Placeholder for word game |
| `loldle` | Stub | `/loldle`, `/ggwp` | — | — | Placeholder for LoL game |

## Key Data Flows
@@ -31,13 +31,26 @@ Telegram bot on Cloudflare Workers with a plug-n-play module system. grammY hand
```
Telegram update → POST /webhook → grammY secret validation
  → getBot(env) → dispatcher routes /cmd → module handler
  → handler reads/writes KV via db.getJSON/putJSON (or D1 via sql.all/run)
  → ctx.reply() → response to Telegram
```
### Scheduled Job (Cron)
```
Cloudflare timer fires (e.g., "0 17 * * *")
→ scheduled(event, env, ctx) handler
→ getRegistry(env) → load + init modules
→ dispatchScheduled(event, env, ctx, registry)
→ filter matching crons by event.cron
→ for each: handler reads/writes D1 via sql.all/run (or KV via db)
→ ctx.waitUntil(promise) keeps handler alive
```
### Deploy Pipeline

```
npm run deploy
  → wrangler deploy (upload to CF, set env vars and bindings)
  → npm run db:migrate (apply any new migrations to D1)
  → scripts/register.js → buildRegistry with stub KV
  → POST setWebhook + POST setMyCommands to Telegram API
```
@@ -57,11 +70,12 @@ Each module maintains its own `README.md` with commands, data model, and impleme
## Test Coverage

105+ tests across 11+ test files:

| Area | Tests | What's Covered |
|------|-------|----------------|
| DB layer (KV) | 19 | KV store, prefixing, JSON helpers, pagination |
| DB layer (D1) | — | Fake D1 in-memory implementation (fake-d1.js) |
| Module framework | 33 | Registry, dispatcher, validators, help renderer, cron validation |
| Utilities | 4 | HTML escaping |
| Trading module | 49 | Dynamic symbol resolution, formatters, flat portfolio CRUD, command handlers, history/retention |


@@ -3,13 +3,38 @@
## Prerequisites

- Node.js ≥ 20.6
- Cloudflare account with Workers + KV + D1 enabled
- Telegram bot token from [@BotFather](https://t.me/BotFather)
- `wrangler` CLI authenticated: `npx wrangler login`

## Environment Setup

### 1. Cloudflare D1 Database (Optional but Recommended)
If your modules need relational data or append-only history, set up a D1 database:
```bash
npx wrangler d1 create miti99bot-db
```
Copy the database ID from the output, then add it to `wrangler.toml`:
```toml
[[d1_databases]]
binding = "DB"
database_name = "miti99bot-db"
database_id = "<paste-id-here>"
```
After this, run migrations to set up tables:
```bash
npm run db:migrate
```
The migration runner discovers all `src/modules/*/migrations/*.sql` files and applies them.
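The discover-and-apply ordering can be sketched as a small pure function — a hypothetical helper illustrating the idea, not `scripts/migrate.js` itself:

```js
// Given discovered migration paths and the set already recorded as applied,
// return what still needs applying, in lexicographic order (the zero-padded
// 0001_/0002_ prefixes make lexicographic order the execution order).
function pendingMigrations(discovered, applied) {
  return discovered.filter((f) => !applied.has(f)).sort();
}
```

Tracking applied migrations by full path keeps two modules' `0001_*.sql` files from colliding.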
### 2. Cloudflare KV Namespaces
```bash
npx wrangler kv namespace create miti99bot-kv
@@ -25,7 +50,7 @@ id = "<production-id>"
preview_id = "<preview-id>"
```

### 3. Worker Secrets

```bash
npx wrangler secret put TELEGRAM_BOT_TOKEN
@@ -34,7 +59,7 @@ npx wrangler secret put TELEGRAM_WEBHOOK_SECRET
`TELEGRAM_WEBHOOK_SECRET` — any high-entropy string (e.g. `openssl rand -hex 32`). grammY validates it on every webhook update via `X-Telegram-Bot-Api-Secret-Token`.

### 4. Local Dev Config

```bash
cp .dev.vars.example .dev.vars   # for wrangler dev
@@ -49,11 +74,23 @@ Both are gitignored. Fill in matching token + secret values.
## Deploy ## Deploy
### Cron Configuration (if using scheduled jobs)
If any of your modules declare crons, they MUST also be registered in `wrangler.toml`:
```toml
[triggers]
crons = ["0 17 * * *", "0 2 * * *"] # list all cron schedules used by modules
```
The schedule string must exactly match what modules declare. For details on cron expressions and examples, see [`docs/using-cron.md`](./using-cron.md).
### First Time
```bash
npx wrangler deploy # learn the *.workers.dev URL
# paste URL into .env.deploy as WORKER_URL
npm run db:migrate # apply any migrations to D1
npm run register:dry # preview payloads
npm run deploy # deploy + register webhook + commands
```
@@ -64,7 +101,7 @@ npm run deploy # deploy + register webhook + commands
npm run deploy
```
This runs `wrangler deploy`, `npm run db:migrate`, then `scripts/register.js` (setWebhook + setMyCommands).
### What the Register Script Does

docs/todo.md Normal file

@@ -0,0 +1,47 @@
# TODO
Manual follow-ups after the D1 + Cron infra rollout (plan: `plans/260415-1010-d1-cron-infra/`).
## Pre-deploy (required before next `npm run deploy`)
- [ ] Create the D1 database:
```bash
npx wrangler d1 create miti99bot-db
```
Copy the returned UUID.
- [ ] Replace `REPLACE_ME_D1_UUID` in `wrangler.toml` (`[[d1_databases]]` → `database_id`) with the real UUID.
- [ ] Commit `wrangler.toml` with the real UUID (the ID is not a secret).
## First deploy verification
- [ ] Run `npm run db:migrate -- --dry-run` — confirm it lists `src/modules/trading/migrations/0001_trades.sql` as pending.
- [ ] Run `npm run deploy` — chain is `wrangler deploy` → `npm run db:migrate` → `npm run register`.
- [ ] Verify in Cloudflare dashboard:
- D1 database `miti99bot-db` shows `trading_trades` + `_migrations` tables
- Worker shows a cron trigger `0 17 * * *`
## Post-deploy smoke tests
- [ ] Send `/buy VNM 10 80000` (or whatever the real buy syntax is) via Telegram, then `/history` — expect 1 row.
- [ ] Manually fire the cron to verify retention:
```bash
npx wrangler dev --test-scheduled
# in another terminal:
curl "http://localhost:8787/__scheduled?cron=0+17+*+*+*"
```
Check logs for `trim-trades` output.
## Nice-to-have (not blocking)
- [ ] End-to-end test of `wrangler dev --test-scheduled` documented with real output snippet in `docs/using-cron.md`.
- [ ] Decide on migration rollback story (currently forward-only). Either document "write a new migration to undo" explicitly, or add a `down/` convention.
- [ ] Tune `trim-trades` schedule if 17:00 UTC conflicts with anything — currently chosen as ~00:00 ICT.
- [ ] Consider per-environment D1 (staging vs prod) if a staging bot is added later.

docs/using-cron.md Normal file

@@ -0,0 +1,283 @@
# Using Cron (Scheduled Jobs)
Cron allows modules to run scheduled tasks at fixed intervals. Use crons for cleanup (purge old data), maintenance (recompute stats), or periodic notifications.
## Declaring Crons
In your module's default export, add a `crons` array:
```js
export default {
name: "mymod",
init: async ({ db, sql, env }) => { /* ... */ },
commands: [ /* ... */ ],
crons: [
{
schedule: "0 17 * * *", // 5 PM UTC daily
name: "cleanup", // human-readable identifier
handler: async (event, ctx) => {
// event.cron = "0 17 * * *"
// event.scheduledTime = timestamp (ms)
// ctx = { db, sql, env } (same as module init)
},
},
],
};
```
**Handler signature:**
```js
async (event, { db, sql, env }) => {
// event.cron — the schedule string that fired
// event.scheduledTime — Unix timestamp (ms)
// db — namespaced KV store (same as init)
// sql — namespaced D1 store (same as init), null if not bound
// env — raw worker environment
}
```
## Cron Expression Syntax
Standard 5-field cron format (minute, hour, day-of-month, month, day-of-week):
```
minute hour day-of-month month day-of-week
0-59 0-23 1-31 1-12 0-6 (0=Sunday)
"0 17 * * *" — 5 PM UTC daily
"*/5 * * * *" — every 5 minutes
"0 0 1 * *" — midnight on the 1st of each month
"0 9 * * 1" — 9 AM UTC every Monday
"30 2 * * *" — 2:30 AM UTC daily
```
See Cloudflare's [cron expression docs](https://developers.cloudflare.com/workers/configuration/cron-triggers/) for full syntax.
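For intuition, here is a deliberately simplified matcher covering only the subset shown above (`*`, plain numbers, and `*/n` steps — not ranges, lists, or the full Cloudflare syntax):

```javascript
// Sketch: match a 5-field cron expression against a Date, UTC fields only.
// Supports "*", plain numbers, and "*/n"; real cron syntax is richer.
function cronMatches(expr, date) {
  const fields = [
    date.getUTCMinutes(),
    date.getUTCHours(),
    date.getUTCDate(),
    date.getUTCMonth() + 1, // cron months are 1-12
    date.getUTCDay(),       // 0 = Sunday
  ];
  return expr.split(/\s+/).every((spec, i) => {
    if (spec === "*") return true;
    if (spec.startsWith("*/")) return fields[i] % Number(spec.slice(2)) === 0;
    return Number(spec) === fields[i];
  });
}

const fivePmUtc = new Date(Date.UTC(2026, 3, 15, 17, 0)); // 2026-04-15 17:00 UTC
console.log(cronMatches("0 17 * * *", fivePmUtc));  // true
console.log(cronMatches("*/5 * * * *", fivePmUtc)); // true (minute 0 % 5 === 0)
```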
## Registering in wrangler.toml
**Important:** Crons declared in the module MUST also be listed in `wrangler.toml`. This is because Cloudflare needs to know what schedules to fire at deploy time.
Edit `wrangler.toml`:
```toml
[triggers]
crons = ["0 17 * * *", "0 0 * * *", "*/5 * * * *"]
```
Both the module contract and the `[triggers] crons` array must list the same schedules. If a schedule is in the module but not in `wrangler.toml`, Cloudflare won't fire it. If it's in `wrangler.toml` but not in any module, the worker won't know what to do with it.
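One way to catch drift early is a small consistency check between the two lists (values hypothetical; such a check is not currently part of the repo):

```javascript
// Sketch: flag module-declared schedules absent from wrangler.toml's
// [triggers] crons array. Both arrays here are illustrative.
const tomlCrons = ["0 17 * * *"];
const moduleCrons = ["0 17 * * *", "0 12 * * *"];

const missing = moduleCrons.filter((s) => !tomlCrons.includes(s));
if (missing.length > 0) {
  console.warn(`schedules missing from wrangler.toml: ${missing.join(", ")}`);
}
console.log(missing); // ["0 12 * * *"]
```

A check like this could run in CI or inside the registry's load-time validation.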
**Multiple modules can share a schedule** — all matching handlers will fire (fan-out). Each module must declare its own `crons` entry; the registry validates them at load time.
## Handler Details
### Error Isolation
If one handler fails, other handlers still run. Each handler is wrapped in try/catch:
```js
// In cron-dispatcher.js
for (const entry of matching) {
ctx.waitUntil(
(async () => {
try {
await entry.handler(event, handlerCtx);
} catch (err) {
console.error(`[cron] handler failed:`, err);
}
})(),
);
}
```
Errors are logged to the Workers console but don't crash the dispatch loop.
### Execution Time Limits
Cloudflare cron tasks have a **15-minute wall-clock limit**. Operations exceeding this timeout are killed. For large data operations:
- Batch in chunks (e.g., delete 1000 rows at a time, looping)
- Use pagination to avoid loading entire datasets into memory
- Monitor execution time and add logging
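A chunked delete keeps each statement small. The sketch below simulates D1's `{ changes }` result with a stub; the subquery form is used because plain `DELETE ... LIMIT` is not guaranteed to be available in every SQLite build:

```javascript
// Sketch: delete stale rows in bounded chunks until fewer than a full
// chunk is affected. Real code would pass the module's sql store.
async function deleteInChunks(sql, cutoffMs, chunkSize = 1000) {
  let total = 0;
  let changes;
  do {
    ({ changes } = await sql.run(
      "DELETE FROM trading_trades WHERE id IN (SELECT id FROM trading_trades WHERE ts < ? LIMIT ?)",
      cutoffMs,
      chunkSize,
    ));
    total += changes;
  } while (changes === chunkSize);
  return total;
}

// Stub standing in for D1: pretends 2500 stale rows exist.
let remaining = 2500;
const fakeSql = {
  async run(_query, _cutoff, limit) {
    const changes = Math.min(remaining, limit);
    remaining -= changes;
    return { changes };
  },
};

deleteInChunks(fakeSql, Date.now(), 1000).then((n) => console.log(n)); // 2500
```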
### Context Availability
Cron handlers run in the same Worker runtime as HTTP handlers, so they have access to:
- `db` — the module's namespaced KV store (read/write)
- `sql` — the module's namespaced D1 store (read/write), or null if not configured
- `env` — all Worker environment bindings (secrets, etc.)
### Return Value
Handlers should return `Promise<void>`. The runtime ignores return values.
## Local Testing
Use the local `wrangler dev` server to simulate cron triggers:
```bash
npm run dev
```
In another terminal, send a simulated cron request:
```bash
# Trigger the 5 PM daily cron
curl "http://localhost:8787/__scheduled?cron=0+17+*+*+*"
# URL-encode the cron string (spaces → +)
```
The Worker responds with `200` and logs handler output to the dev server console.
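Building that URL is just a space-to-`+` substitution (helper name hypothetical; `%20` encoding works too — `+` is the form-urlencoded spelling of a space):

```javascript
// Sketch: construct the local /__scheduled URL for a given schedule string.
function scheduledUrl(cron, base = "http://localhost:8787") {
  return `${base}/__scheduled?cron=${cron.replaceAll(" ", "+")}`;
}

console.log(scheduledUrl("0 17 * * *"));
// http://localhost:8787/__scheduled?cron=0+17+*+*+*
```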
### Simulating Multiple Crons
If you have several crons with different schedules, test each by passing the exact schedule string:
```bash
curl "http://localhost:8787/__scheduled?cron=*/5+*+*+*+*" # every 5 min
curl "http://localhost:8787/__scheduled?cron=0+0+1+*+*" # monthly
```
## Worked Example: Trade Retention
The trading module uses a daily cron at `0 17 * * *` (5 PM UTC) to trim old trades:
**Module declaration (src/modules/trading/index.js):**
```js
crons: [
{
schedule: "0 17 * * *",
name: "trim-trades",
handler: (event, ctx) => trimTradesHandler(event, ctx),
},
],
```
**wrangler.toml:**
```toml
[triggers]
crons = ["0 17 * * *"]
```
**Handler (src/modules/trading/retention.js):**
```js
/**
* Delete trades older than 90 days.
*/
export async function trimTradesHandler(event, { sql }) {
if (!sql) return; // database not configured
const ninetyDaysAgoMs = Date.now() - 90 * 24 * 60 * 60 * 1000;
const result = await sql.run(
"DELETE FROM trading_trades WHERE ts < ?",
ninetyDaysAgoMs
);
console.log(`[cron] trim-trades: deleted ${result.changes} old trades`);
}
```
At 5 PM UTC every day, Cloudflare fires the `0 17 * * *` cron. The Worker loads the registry, finds the trading module's handler, executes `trimTradesHandler`, and logs the number of deleted rows.
## Worked Example: Stats Recalculation
Imagine a leaderboard module that caches top-10 stats:
```js
export default {
name: "leaderboard",
init: async ({ db, sql }) => {
// ...
},
crons: [
{
schedule: "0 12 * * *", // noon UTC daily
name: "refresh-stats",
handler: async (event, { sql, db }) => {
if (!sql) return;
// Recompute aggregate stats from raw data
const topTen = await sql.all(
`SELECT user_id, SUM(score) as total_score
FROM leaderboard_plays
GROUP BY user_id
ORDER BY total_score DESC
LIMIT 10`
);
// Cache in KV for fast /leaderboard command response
await db.putJSON("cached_top_10", topTen);
console.log(`[cron] refresh-stats: updated top 10`);
},
},
],
};
```
Every day at noon, the leaderboard updates its cached stats without waiting for a user request.
## Crons and Cold Starts
Crons execute on a fresh Worker instance (potential cold start). Module `init` hooks run before the first handler fires, so in the normal path cron handlers can assume initialization is complete.
If `init` throws, however, the cron may still fire with `sql` and `db` in a half-initialized state. Handle this gracefully:
```js
handler: async (event, { sql, db }) => {
if (!sql) {
console.warn("sql store not available, skipping");
return;
}
// proceed with confidence
}
```
## Adding a New Cron
1. **Declare in module:**
```js
crons: [
{ schedule: "0 3 * * *", name: "my-cron", handler: myHandler }
],
```
2. **Add to wrangler.toml:**
```toml
[triggers]
crons = ["0 3 * * *", "0 17 * * *"] # keep existing schedules
```
3. **Deploy:**
```bash
npm run deploy
```
4. **Test locally:**
```bash
npm run dev
# in another terminal:
curl "http://localhost:8787/__scheduled?cron=0+3+*+*+*"
```
## Monitoring Crons
Cron execution is logged to the Cloudflare Workers console. Check the tail:
```bash
npx wrangler tail
```
Look for `[cron]` prefixed log lines to see which crons ran and what they did.

docs/using-d1.md Normal file

@@ -0,0 +1,316 @@
# Using D1 (SQL Database)
D1 is Cloudflare's serverless SQL database. Use it when your module needs to query structured data, perform scans, or maintain append-only history. For simple key → JSON blobs or per-user state, KV is lighter and faster.
## When to Choose D1 vs KV
| Use Case | D1 | KV |
|----------|----|----|
| Simple key → JSON state | — | ✓ |
| Per-user blob (config, stats) | — | ✓ |
| Relational queries (JOIN, GROUP BY) | ✓ | — |
| Scans (all users' records, filtered) | ✓ | — |
| Leaderboards, sorted aggregates | ✓ | — |
| Append-only history/audit log | ✓ | — |
| Exact row counts with WHERE | ✓ | — |
The trading module uses D1 for `trading_trades` (append-only history). Each `/trade_buy` and `/trade_sell` writes a row; `/history` scans the last N rows per user.
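The "last N rows per user" scan maps to `SELECT * FROM trading_trades WHERE user_id = ? ORDER BY ts DESC LIMIT ?`. An in-memory stand-in for that query (assuming the `ts` column used in the retention example; row values hypothetical):

```javascript
// Sketch: semantics of the /history scan, over plain objects instead of D1.
const rows = [
  { user_id: 1, symbol: "VNM", ts: 300 },
  { user_id: 1, symbol: "FPT", ts: 100 },
  { user_id: 2, symbol: "VNM", ts: 200 },
];

function lastTrades(rows, userId, n) {
  return rows
    .filter((r) => r.user_id === userId) // WHERE user_id = ?
    .sort((a, b) => b.ts - a.ts)         // ORDER BY ts DESC
    .slice(0, n);                        // LIMIT ?
}

console.log(lastTrades(rows, 1, 2).map((r) => r.symbol)); // ["VNM", "FPT"]
```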
## Accessing SQL in a Module
In your module's `init`, receive `sql` (alongside `db` for KV):
```js
/** @type {import("../../db/sql-store-interface.js").SqlStore | null} */
let sql = null;
const myModule = {
name: "mymod",
init: async ({ db, sql: sqlStore, env }) => {
sql = sqlStore; // cache for handlers
},
commands: [
{
name: "myquery",
visibility: "public",
description: "Query the database",
handler: async (ctx) => {
if (!sql) {
await ctx.reply("Database not configured");
return;
}
const rows = await sql.all("SELECT * FROM mymod_items LIMIT 10");
await ctx.reply(`Found ${rows.length} rows`);
},
},
],
};
```
**Important:** `sql` is `null` when `env.DB` is not bound (e.g., in tests without a fake D1 setup). Always guard:
```js
if (!sql) {
// handle gracefully — module still works, just without persistence
}
```
## Table Naming Convention
All tables must follow the pattern `{moduleName}_{table}`:
- `trading_trades` — trading module's trades table
- `mymod_items` — mymod's items table
- `mymod_leaderboard` — mymod's leaderboard table
Enforce this by convention in code review. The `sql.tablePrefix` property is available for dynamic table names:
```js
const tableName = `${sql.tablePrefix}items`; // = "mymod_items"
await sql.all(`SELECT * FROM ${tableName}`);
```
## Writing Migrations
Migrations live in `src/modules/<name>/migrations/NNNN_descriptive.sql`. Files are sorted lexically and applied in order (one-way only; no down migrations).
**Naming:** Use a 4-digit numeric prefix, then a descriptive name:
```
src/modules/trading/migrations/
├── 0001_trades.sql # first migration
├── 0002_add_fees.sql # second migration (optional)
└── 0003_...
```
**Example migration:**
```sql
-- src/modules/mymod/migrations/0001_items.sql
CREATE TABLE mymod_items (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL,
name TEXT NOT NULL,
created_at INTEGER NOT NULL
);
CREATE INDEX idx_mymod_items_user ON mymod_items(user_id);
```
Key points:
- One-way only — migrations never roll back.
- Create indexes for columns you'll filter/sort by.
- Use `user_id` (snake_case in SQL), not `userId`.
- Reference other tables with full names: `other_module_items`.
The migration runner (`scripts/migrate.js`) tracks applied migrations in a `_migrations` table and skips any that have already run.
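The pending set is effectively "discovered minus applied, sorted". As a sketch (names illustrative, not the actual `scripts/migrate.js` implementation):

```javascript
// Sketch: compute which migrations still need to run, given the names
// already recorded in the _migrations table.
function pendingMigrations(discovered, applied) {
  const done = new Set(applied);
  return discovered.filter((name) => !done.has(name)).sort();
}

const pending = pendingMigrations(
  ["0001_trades.sql", "0002_add_fees.sql"],
  ["0001_trades.sql"],
);
console.log(pending); // ["0002_add_fees.sql"]
```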
## SQL API Reference
The `SqlStore` provides these methods. All accept parameterized bindings (? placeholders):
### `run(query, ...binds)`
Execute INSERT / UPDATE / DELETE / CREATE. Returns `{ changes, last_row_id }`.
```js
const result = await sql.run(
"INSERT INTO mymod_items (user_id, name) VALUES (?, ?)",
userId,
"Widget"
);
console.log(result.last_row_id); // newly inserted row ID
```
### `all(query, ...binds)`
Execute SELECT, return all rows as plain objects.
```js
const items = await sql.all(
"SELECT * FROM mymod_items WHERE user_id = ?",
userId
);
// items = [{ id: 1, user_id: 123, name: "Widget", created_at: 1234567 }, ...]
```
### `first(query, ...binds)`
Execute SELECT, return first row or `null` if no match.
```js
const item = await sql.first(
"SELECT * FROM mymod_items WHERE id = ?",
itemId
);
if (!item) {
// not found
}
```
### `prepare(query)`
Advanced: returns a D1 prepared statement; bind values with `.bind(...)` and execute via `batch()`.
```js
const stmt = sql.prepare("INSERT INTO mymod_items (user_id, name) VALUES (?, ?)");
const batch = [
stmt.bind(userId1, "Item1"),
stmt.bind(userId2, "Item2"),
];
await sql.batch(batch);
```
### `batch(statements)`
Execute multiple prepared statements in a single round-trip.
```js
const stmt = sql.prepare("INSERT INTO mymod_items (user_id, name) VALUES (?, ?)");
const results = await sql.batch([
stmt.bind(userId1, "Item1"),
stmt.bind(userId2, "Item2"),
stmt.bind(userId3, "Item3"),
]);
```
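When inserting hundreds of rows, it can help to split the statement list into smaller groups before batching (sketch; the chunk size is arbitrary, and the commented usage assumes the hypothetical `mymod_items` table from above):

```javascript
// Sketch: split a list into fixed-size groups for repeated batch() calls.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage against the real store might look like:
//   const stmt = sql.prepare("INSERT INTO mymod_items (user_id, name) VALUES (?, ?)");
//   for (const group of chunk(rows.map((r) => stmt.bind(r.userId, r.name)), 50)) {
//     await sql.batch(group);
//   }
console.log(chunk([1, 2, 3, 4, 5], 2)); // [[1, 2], [3, 4], [5]]
```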
## Running Migrations
### Production
```bash
npm run db:migrate
```
This walks `src/modules/*/migrations/*.sql` (sorted), checks which have already run (tracked in `_migrations` table), and applies only new ones via `wrangler d1 execute --remote`.
### Local Dev
```bash
npm run db:migrate -- --local
```
Applies migrations to the local D1 database that `wrangler dev` uses (wrangler keeps this on disk; it is not configured in `.dev.vars`).
### Preview (Dry Run)
```bash
npm run db:migrate -- --dry-run
```
Prints the migration plan without executing anything. Useful before a production deploy.
## Testing with Fake D1
For hermetic unit tests without a real D1 binding, use `tests/fakes/fake-d1.js`. It's a minimal in-memory SQL implementation that covers common patterns:
```js
import { describe, it, expect, beforeEach, vi } from "vitest";
import { FakeD1 } from "../../fakes/fake-d1.js";
describe("trading trades", () => {
let sql;
beforeEach(async () => {
const fakeDb = new FakeD1();
// setup
await fakeDb.run(
"CREATE TABLE trading_trades (id INTEGER PRIMARY KEY, user_id INTEGER, qty INTEGER)"
);
sql = fakeDb;
});
it("inserts and retrieves trades", async () => {
await sql.run(
"INSERT INTO trading_trades (user_id, qty) VALUES (?, ?)",
123,
10
);
const rows = await sql.all(
"SELECT qty FROM trading_trades WHERE user_id = ?",
123
);
expect(rows).toEqual([{ qty: 10 }]);
});
});
```
Note: `FakeD1` supports a subset of SQL features needed for current modules. Extend it in `tests/fakes/fake-d1.js` if you need additional syntax (CTEs, window functions, etc.).
## First-Time D1 Setup
If your deployment environment doesn't have a D1 database yet:
```bash
npx wrangler d1 create miti99bot-db
```
Copy the database ID from the output, then add it to `wrangler.toml`:
```toml
[[d1_databases]]
binding = "DB"
database_name = "miti99bot-db"
database_id = "<paste-id-here>"
```
Then run migrations:
```bash
npm run db:migrate
```
The `_migrations` table is created automatically. After that, new migrations apply on every deploy.
## Worked Example: Simple Counter
**Migration:**
```sql
-- src/modules/counter/migrations/0001_counters.sql
CREATE TABLE counter_state (
id INTEGER PRIMARY KEY CHECK (id = 1),
count INTEGER NOT NULL DEFAULT 0
);
INSERT INTO counter_state (id, count) VALUES (1, 0);
```
**Module:**
```js
import { createCounterHandler } from "./handler.js";
/** @type {import("../../db/sql-store-interface.js").SqlStore | null} */
let sql = null;
export default {
name: "counter",
init: async ({ sql: sqlStore }) => {
sql = sqlStore;
},
commands: [
{
name: "count",
visibility: "public",
description: "Increment global counter",
handler: (ctx) => createCounterHandler(sql)(ctx),
},
],
};
```
**Handler:**
```js
export function createCounterHandler(sql) {
return async (ctx) => {
if (!sql) {
await ctx.reply("Database not configured");
return;
}
// increment
await sql.run("UPDATE counter_state SET count = count + 1 WHERE id = 1");
// read current
const row = await sql.first("SELECT count FROM counter_state WHERE id = 1");
await ctx.reply(`Counter: ${row.count}`);
};
}
```
Run `/count` multiple times and watch the counter increment. The count persists across restarts because it's stored in D1.