mirror of
https://github.com/tiennm99/miti99bot.git
synced 2026-04-17 13:21:31 +00:00
docs: add D1 and Cron guides, update module contract across docs
- docs/using-d1.md and docs/using-cron.md for module authors
- architecture, codebase-summary, adding-a-module, code-standards, deployment-guide refreshed
- CLAUDE.md module contract shows optional crons[] and sql in init
- docs/todo.md tracks manual follow-ups (D1 UUID, first deploy, smoke tests)
@@ -143,9 +143,77 @@ Each command is:

Private commands are still slash commands — users type `/mycmd`. They're simply absent from Telegram's `/` popup and from `/help` output.

## Optional: D1 Storage

If your module needs a SQL database for relational queries, scans, or append-only history, add an `init` hook that receives `sql`:

```js
/** @type {import("../../db/sql-store-interface.js").SqlStore | null} */
let sql = null;
let db = null;

const myModule = {
  name: "mymod",
  init: async ({ db: kvStore, sql: sqlStore, env }) => {
    db = kvStore;
    sql = sqlStore; // null when env.DB is not configured
  },
  commands: [ /* ... */ ],
};
```

Create migration files in `src/modules/<name>/migrations/`:

```
src/modules/mymod/migrations/
├── 0001_initial.sql
├── 0002_add_index.sql
└── ...
```
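
As an illustration, a first migration might look like the following — the table and columns are hypothetical; only the `mymod_` table-name prefix convention is prescribed by the docs:

```sql
-- src/modules/mymod/migrations/0001_initial.sql (illustrative)
CREATE TABLE IF NOT EXISTS mymod_items (
  id      INTEGER PRIMARY KEY AUTOINCREMENT,
  user_id TEXT NOT NULL,
  created INTEGER NOT NULL -- Unix timestamp (ms)
);
CREATE INDEX IF NOT EXISTS idx_mymod_items_user ON mymod_items (user_id);
```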

Run migrations at deploy time:

```bash
npm run db:migrate              # production
npm run db:migrate -- --local   # local dev
npm run db:migrate -- --dry-run # preview
```

For full details on D1 usage, table naming, and the SQL API, see [`docs/using-d1.md`](./using-d1.md).

## Optional: Scheduled Jobs

If your module needs to run maintenance tasks (cleanup, stats refresh) on a schedule, add a `crons` array:

```js
const myModule = {
  name: "mymod",
  init: async ({ db, sql, env }) => { /* ... */ },
  commands: [ /* ... */ ],
  crons: [
    {
      schedule: "0 2 * * *", // 2 AM UTC daily
      name: "daily-cleanup",
      handler: async (event, { db, sql, env }) => {
        // handler receives the same context as init
        const cutoff = Date.now() - 30 * 24 * 60 * 60 * 1000; // e.g. keep 30 days
        await sql.run("DELETE FROM mymod_old WHERE created < ?", cutoff);
      },
    },
  ],
};
```

**Important:** Every cron schedule declared in a module MUST also be registered in `wrangler.toml`:

```toml
[triggers]
crons = ["0 2 * * *"] # matches the module declaration
```

For full details on cron syntax, local testing, and worked examples, see [`docs/using-cron.md`](./using-cron.md).
## Testing your module

Add a test in `tests/modules/<name>.test.js` or extend an existing suite. The `tests/fakes/` directory provides `fake-kv-namespace.js`, `fake-bot.js`, `fake-d1.js`, and `fake-modules.js` for hermetic unit tests that don't touch Cloudflare or Telegram.

Run:

@@ -163,4 +231,4 @@ This prints the `setMyCommands` payload your module will push to Telegram — a

## Full example

See `src/modules/misc/index.js` — it's a minimal module that uses the DB (`putJSON` / `getJSON` via `/ping` + `/mstats`) and registers one command at each visibility level. Copy it as a starting point for your own module. See `src/modules/trading/` for a full example with D1 storage and scheduled crons.

@@ -68,23 +68,31 @@ Every module is a single default export with this shape:

```js
export default {
  name: "wordle",                            // must match folder + import map key
  init: async ({ db, sql, env }) => { ... }, // optional, called once at build time
  commands: [
    {
      name: "wordle",                  // ^[a-z0-9_]{1,32}$, no leading slash
      visibility: "public",            // "public" | "protected" | "private"
      description: "Play wordle",      // required, ≤256 chars
      handler: async (ctx) => { ... }, // grammY context
    },
    // ...
  ],
  crons: [                             // optional scheduled jobs
    {
      schedule: "0 2 * * *",           // cron expression
      name: "cleanup",                 // unique within module
      handler: async (event, ctx) => { ... }, // receives { db, sql, env }
    },
  ],
};
```

- The command name regex is **uniform** across all visibility levels. A private command is still a slash command (`/konami`) — it is simply absent from Telegram's `/` menu and from `/help` output. It is NOT a hidden text-match easter egg.
- `description` is required for **all** visibilities. Private descriptions never reach Telegram; they exist so the registry remains self-documenting for debugging.
- `init({ db, sql, env })` is the one place where a module should do setup work. The `db` parameter is a `KVStore` whose keys are automatically prefixed with `<moduleName>:`. The `sql` parameter is a `SqlStore` (or `null` if `env.DB` is not bound) — for relational data. `env` is the raw worker env (read-only by convention).
- `crons` is optional. Each entry declares a scheduled job; the schedule MUST also be registered in `wrangler.toml` `[triggers] crons`.

Validation runs per-command at registry load, and cross-module conflict detection runs at the same step. Any violation throws — deployment fails loudly before any request is served.

@@ -149,9 +157,13 @@ Every command — public, protected, **and private** — is registered via `bot.

There is no custom text-match middleware, no `bot.on("message:text", ...)` handler, no private-command-specific path. One routing path for all three visibilities. This is what reduced the original two-path design (slash + text-match) to one during the revision pass.

## 8. Storage: KVStore and SqlStore

Modules NEVER touch `env.KV` or `env.DB` directly. They receive prefixed stores from the module context.

### KVStore (key-value, fast reads/writes)

For simple state and blobs, use `db` (a `KVStore`):

```js
// In a module's init:
@@ -175,7 +187,7 @@ getJSON(key) // → any | null (swallows corrupt JSON)
putJSON(key, value, { expirationTtl? })
```

#### Prefix mechanics

`createStore("wordle", env)` returns a wrapped store where every key is rewritten:

@@ -189,15 +201,40 @@ list({prefix:"games:"})──► list({prefix:"wordle:games:"}) (then strips "

Two stores for different modules cannot read each other's data unless they reconstruct prefixes by hand — a code-review boundary, not a cryptographic one.
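
A minimal sketch of what the prefixing wrapper does — simplified and hypothetical; the real implementation lives in `src/db/create-store.js`, and the in-memory `backing` map here merely stands in for `env.KV`:

```javascript
// Simplified sketch of key prefixing — not the real createStore.
function prefixedStore(moduleName, backing) {
  const p = (key) => `${moduleName}:${key}`;
  return {
    get: async (key) => (backing.has(p(key)) ? backing.get(p(key)) : null),
    put: async (key, value) => { backing.set(p(key), value); },
    list: async ({ prefix = "" } = {}) => {
      // scan with the module prefix applied, then strip it from results
      const full = `${moduleName}:${prefix}`;
      return [...backing.keys()]
        .filter((k) => k.startsWith(full))
        .map((k) => k.slice(moduleName.length + 1));
    },
  };
}

const kv = new Map();
const wordle = prefixedStore("wordle", kv);
await wordle.put("games:1", "{}");
// the physical key is "wordle:games:1"; the module only ever sees "games:1"
```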

### SqlStore (relational, scans, append-only history)

For complex queries, aggregates, or audit logs, use `sql` (a `SqlStore`):

```js
// In a module's init:
init: async ({ sql }) => {
  sqlStore = sql; // null if env.DB not bound
},

// In a handler or cron:
const trades = await sqlStore.all(
  "SELECT * FROM trading_trades WHERE user_id = ? ORDER BY ts DESC LIMIT 10",
  userId
);
```

The interface (full JSDoc in `src/db/sql-store-interface.js`):

```js
run(query, ...binds)     // INSERT/UPDATE/DELETE — returns { changes, last_row_id }
all(query, ...binds)     // SELECT all rows → array of objects
first(query, ...binds)   // SELECT first row → object | null
prepare(query, ...binds) // prepared statement for batch operations
batch(statements)        // execute multiple statements in one round-trip
```

All tables must follow the naming convention `{moduleName}_{table}` (e.g., `trading_trades`).
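
To illustrate `prepare` + `batch`, here is a hedged sketch with a stand-in store — the `FakeSqlStore` below is illustrative, not the real adapter; only the `prepare`/`batch` call shapes mirror the interface above:

```javascript
// Stand-in SqlStore showing how prepare + batch collapse N inserts
// into a single round-trip. Illustrative only.
class FakeSqlStore {
  constructor() { this.rows = []; }
  prepare(query, ...binds) { return { query, binds }; }
  async batch(statements) {
    // the real adapter sends all statements to D1 in one round-trip
    for (const s of statements) this.rows.push(s.binds);
    return statements.map(() => ({ changes: 1 }));
  }
}

const sql = new FakeSqlStore();
const trades = [
  { userId: "u1", ts: 1 },
  { userId: "u2", ts: 2 },
];
const stmts = trades.map((t) =>
  sql.prepare("INSERT INTO trading_trades (user_id, ts) VALUES (?, ?)", t.userId, t.ts)
);
const results = await sql.batch(stmts); // one round-trip instead of two
```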

Tables are created via migrations in `src/modules/<name>/migrations/*.sql`. The migration runner (`scripts/migrate.js`) applies them on deploy and tracks them in a `_migrations` table.

### Swapping the backends

To replace Cloudflare KV with a different store (e.g. Upstash Redis, Postgres):

1. Create a new `src/db/<name>-store.js` that implements the `KVStore` interface.
2. Change the one `new CFKVStore(env.KV)` line in `src/db/create-store.js` to construct your new adapter.
@@ -205,7 +242,15 @@ To replace Cloudflare KV with a different store (e.g. Upstash Redis, D1, Postgre

That's the full change. No module code moves.

To replace D1 with a different SQL backend:

1. Create a new `src/db/<name>-sql-store.js` that implements the `SqlStore` interface.
2. Change the one `new CFSqlStore(env.DB)` line in `src/db/create-sql-store.js` to construct your new adapter.
3. Update the `wrangler.toml` bindings.

## 9. HTTP and Scheduled Entry Points

### Webhook (HTTP)

```js
// src/index.js — simplified
@@ -221,12 +266,48 @@ export default {
    }
    return new Response("not found", { status: 404 });
  },

  async scheduled(event, env, ctx) {
    // Cloudflare cron trigger
    const registry = await getRegistry(env);
    dispatchScheduled(event, env, ctx, registry);
  },
};
```

`getWebhookHandler` is memoized and constructs `webhookCallback(bot, "cloudflare-mod", { secretToken: env.TELEGRAM_WEBHOOK_SECRET })` once. grammY's `webhookCallback` validates the `X-Telegram-Bot-Api-Secret-Token` header on every request, so a missing or mismatched secret returns `401` before the update reaches any handler.

### Scheduled (Cron)

Cloudflare fires the cron triggers specified in `wrangler.toml` `[triggers] crons`. The `scheduled(event, env, ctx)` handler receives:

- `event.cron` — the schedule string (e.g., `"0 17 * * *"`)
- `event.scheduledTime` — Unix timestamp (ms) when the trigger fired
- `ctx.waitUntil(promise)` — keeps the handler alive until the promise resolves

Flow:

```
Cloudflare cron trigger
    │
    ▼
scheduled(event, env, ctx)
    │
    ├── getRegistry(env) — build registry (same as HTTP)
    │     └── load + init all modules
    │
    └── dispatchScheduled(event, env, ctx, registry)
          │
          ├── filter registry.crons by event.cron match
          │
          └── for each matching cron:
                ├── createStore(moduleName, env) — KV store
                ├── createSqlStore(moduleName, env) — D1 store
                └── ctx.waitUntil(handler(event, { db, sql, env }))
                      └── wrapped in try/catch for isolation
```

Each handler fires independently. If one fails, others still run.
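
The `event.cron` match in the flow above is plain string equality — Cloudflare hands the handler the literal schedule string it fired, so no cron parsing is needed. A sketch (simplified and hypothetical; the real filter lives in the cron dispatcher):

```javascript
// Which declared crons should run for this trigger? String equality is enough,
// because event.cron is the exact schedule string from wrangler.toml.
function matchingCrons(crons, event) {
  return crons.filter((c) => c.schedule === event.cron);
}

const crons = [
  { schedule: "0 17 * * *", name: "trim-trades" },
  { schedule: "0 2 * * *", name: "daily-cleanup" },
];
const fired = matchingCrons(crons, { cron: "0 17 * * *" });
// fired contains only the trim-trades entry
```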

## 10. Deploy flow and the register script

@@ -20,6 +20,25 @@ Enforced by `npm run lint` / `npm run format`:

Run `npm run format` before committing.

## JSDoc & Type Definitions

- **Central typedefs location:** `src/types.js` — all module-level typedefs live here (Env, Module, Command, Cron, ModuleContext, SqlStore, KVStore, Trade, Portfolio, etc.).
- **When to add JSDoc:** required on exported functions, types, and public module interfaces. Optional on internal helpers (< 5 lines, obviously self-documenting).
- **Validation:** ESLint (`eslint src`) enforces valid JSDoc syntax. Run `npm run lint` to check.
- **No TypeScript:** JSDoc + `.js` files only. Full type info is available to editor tooling without a build step.
- **Example:**

```js
/**
 * Validate a trade before insertion.
 *
 * @param {Trade} trade
 * @returns {boolean}
 */
function isValidTrade(trade) {
  return trade.qty > 0 && trade.priceVnd > 0;
}
```

## File Organization

- **Max 200 lines per code file.** Split into focused submodules when approaching the limit.

@@ -44,14 +63,17 @@ Every module default export must have:

export default {
  name: "modname",                           // === folder name === import map key
  commands: [...],                           // validated at load time
  init: async ({ db, sql, env }) => { ... }, // optional
  crons: [...],                              // optional scheduled jobs
};
```

- Store module-level `db` and `sql` references in closure variables, set during `init`
- Never access `env.KV` or `env.DB` directly — always use the prefixed `db` (KV) or `sql` (D1) from `init`
- `sql` is `null` when `env.DB` is not bound — always guard with `if (!sql) return`
- Command handlers receive grammY `ctx` — use `ctx.match` for command arguments, `ctx.from.id` for user identity
- Reply with `ctx.reply(text)` — plain text or Telegram HTML
- Cron handlers receive `(event, { db, sql, env })` — same context as `init`
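
The closure pattern from these bullets, as a minimal sketch — the module name, table, and command are made up; only the contract shape comes from the docs:

```javascript
// Module-level closure variables, set once in init, read by handlers.
let db = null;
let sql = null;

const myModule = {
  name: "mymod",
  init: async ({ db: kvStore, sql: sqlStore }) => {
    db = kvStore;
    sql = sqlStore; // may be null when env.DB is not bound
  },
  commands: [
    {
      name: "rowcount",
      visibility: "private",
      description: "Debug: count stored rows",
      handler: async (ctx) => {
        if (!sql) return ctx.reply("database not configured"); // guard
        const row = await sql.first("SELECT COUNT(*) AS n FROM mymod_items");
        await ctx.reply(`rows: ${row.n}`);
      },
    },
  ],
};
// In a real module file this object is the default export.
```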

## Error Handling

@@ -17,13 +17,13 @@ Telegram bot on Cloudflare Workers with a plug-n-play module system. grammY hand

## Active Modules

| Module | Status | Commands | Storage | Crons | Description |
|--------|--------|----------|---------|-------|-------------|
| `util` | Complete | `/info`, `/help` | — | — | Bot info and command help renderer |
| `trading` | Complete | `/trade_topup`, `/trade_buy`, `/trade_sell`, `/trade_convert`, `/trade_stats`, `/history` | D1 (trades) | Daily 5 PM trim | Paper trading — VN stocks with dynamic symbol resolution. Crypto/gold/forex coming soon. |
| `misc` | Stub | `/ping`, `/mstats`, `/fortytwo` | KV | — | Health check + DB demo |
| `wordle` | Stub | `/wordle`, `/wstats`, `/konami` | — | — | Placeholder for word game |
| `loldle` | Stub | `/loldle`, `/ggwp` | — | — | Placeholder for LoL game |

## Key Data Flows

@@ -31,13 +31,26 @@ Telegram bot on Cloudflare Workers with a plug-n-play module system. grammY hand

```
Telegram update → POST /webhook → grammY secret validation
  → getBot(env) → dispatcher routes /cmd → module handler
  → handler reads/writes KV via db.getJSON/putJSON (or D1 via sql.all/run)
  → ctx.reply() → response to Telegram
```

### Scheduled Job (Cron)
```
Cloudflare timer fires (e.g., "0 17 * * *")
  → scheduled(event, env, ctx) handler
  → getRegistry(env) → load + init modules
  → dispatchScheduled(event, env, ctx, registry)
  → filter matching crons by event.cron
  → for each: handler reads/writes D1 via sql.all/run (or KV via db)
  → ctx.waitUntil(promise) keeps handler alive
```

### Deploy Pipeline
```
npm run deploy
  → wrangler deploy (upload to CF, set env vars and bindings)
  → npm run db:migrate (apply any new migrations to D1)
  → scripts/register.js → buildRegistry with stub KV
  → POST setWebhook + POST setMyCommands to Telegram API
```

@@ -57,11 +70,12 @@ Each module maintains its own `README.md` with commands, data model, and impleme

## Test Coverage

105+ tests across 11+ test files:

| Area | Tests | What's Covered |
|------|-------|---------------|
| DB layer (KV) | 19 | KV store, prefixing, JSON helpers, pagination |
| DB layer (D1) | — | Fake D1 in-memory implementation (fake-d1.js) |
| Module framework | 33 | Registry, dispatcher, validators, help renderer, cron validation |
| Utilities | 4 | HTML escaping |
| Trading module | 49 | Dynamic symbol resolution, formatters, flat portfolio CRUD, command handlers, history/retention |

|
||||
@@ -3,13 +3,38 @@

## Prerequisites

- Node.js ≥ 20.6
- Cloudflare account with Workers + KV + D1 enabled
- Telegram bot token from [@BotFather](https://t.me/BotFather)
- `wrangler` CLI authenticated: `npx wrangler login`

## Environment Setup

### 1. Cloudflare D1 Database (Optional but Recommended)

If your modules need relational data or append-only history, set up a D1 database:

```bash
npx wrangler d1 create miti99bot-db
```

Copy the database ID from the output, then add it to `wrangler.toml`:

```toml
[[d1_databases]]
binding = "DB"
database_name = "miti99bot-db"
database_id = "<paste-id-here>"
```

After this, run migrations to set up tables:

```bash
npm run db:migrate
```

The migration runner discovers all `src/modules/*/migrations/*.sql` files and applies them.
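
The zero-padded numeric prefixes in migration filenames (`0001_`, `0002_`, …) suggest lexicographic application order; a tiny illustration of why the padding matters (filenames hypothetical):

```javascript
// Zero-padded prefixes keep lexicographic order equal to numeric order,
// so a plain sort yields the intended apply sequence.
const discovered = ["0002_add_index.sql", "0010_widen_column.sql", "0001_initial.sql"];
const applyOrder = [...discovered].sort();
// applyOrder → ["0001_initial.sql", "0002_add_index.sql", "0010_widen_column.sql"]
```

Without the padding, `"10_..."` would sort before `"2_..."` and migrations would apply out of order.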

### 2. Cloudflare KV Namespaces

```bash
npx wrangler kv namespace create miti99bot-kv
@@ -25,7 +50,7 @@ id = "<production-id>"
preview_id = "<preview-id>"
```

### 3. Worker Secrets

```bash
npx wrangler secret put TELEGRAM_BOT_TOKEN
@@ -34,7 +59,7 @@ npx wrangler secret put TELEGRAM_WEBHOOK_SECRET
```

`TELEGRAM_WEBHOOK_SECRET` — any high-entropy string (e.g. `openssl rand -hex 32`). grammY validates it on every webhook update via `X-Telegram-Bot-Api-Secret-Token`.

### 4. Local Dev Config

```bash
cp .dev.vars.example .dev.vars # for wrangler dev
```
@@ -49,11 +74,23 @@ Both are gitignored. Fill in matching token + secret values.

## Deploy

### Cron Configuration (if using scheduled jobs)

If any of your modules declare crons, they MUST also be registered in `wrangler.toml`:

```toml
[triggers]
crons = ["0 17 * * *", "0 2 * * *"] # list all cron schedules used by modules
```

The schedule strings must exactly match what the modules declare. For details on cron expressions and examples, see [`docs/using-cron.md`](./using-cron.md).

### First Time

```bash
npx wrangler deploy  # learn the *.workers.dev URL
# paste URL into .env.deploy as WORKER_URL
npm run db:migrate   # apply any migrations to D1
npm run register:dry # preview payloads
npm run deploy       # deploy + register webhook + commands
```

@@ -64,7 +101,7 @@ npm run deploy # deploy + register webhook + commands

```bash
npm run deploy
```

This runs `wrangler deploy`, `npm run db:migrate`, then `scripts/register.js` (setWebhook + setMyCommands).

### What the Register Script Does

docs/todo.md (new file, 47 lines)
@@ -0,0 +1,47 @@

# TODO

Manual follow-ups after the D1 + Cron infra rollout (plan: `plans/260415-1010-d1-cron-infra/`).

## Pre-deploy (required before next `npm run deploy`)

- [ ] Create the D1 database:
  ```bash
  npx wrangler d1 create miti99bot-db
  ```
  Copy the returned UUID.

- [ ] Replace `REPLACE_ME_D1_UUID` in `wrangler.toml` (`[[d1_databases]]` → `database_id`) with the real UUID.

- [ ] Commit `wrangler.toml` with the real UUID (the ID is not a secret).

## First deploy verification

- [ ] Run `npm run db:migrate -- --dry-run` — confirm it lists `src/modules/trading/migrations/0001_trades.sql` as pending.

- [ ] Run `npm run deploy` — the chain is `wrangler deploy` → `npm run db:migrate` → `npm run register`.

- [ ] Verify in the Cloudflare dashboard:
  - D1 database `miti99bot-db` shows `trading_trades` + `_migrations` tables
  - the Worker shows a cron trigger `0 17 * * *`

## Post-deploy smoke tests

- [ ] Send `/trade_buy VNM 10 80000` (or whatever the real buy syntax is) via Telegram, then `/history` — expect 1 row.

- [ ] Manually fire the cron to verify retention:
  ```bash
  npx wrangler dev --test-scheduled
  # in another terminal:
  curl "http://localhost:8787/__scheduled?cron=0+17+*+*+*"
  ```
  Check logs for `trim-trades` output.

## Nice-to-have (not blocking)

- [ ] End-to-end test of `wrangler dev --test-scheduled` documented with a real output snippet in `docs/using-cron.md`.

- [ ] Decide on a migration rollback story (currently forward-only). Either document "write a new migration to undo" explicitly, or add a `down/` convention.

- [ ] Tune the `trim-trades` schedule if 17:00 UTC conflicts with anything — currently chosen as ~00:00 ICT.

- [ ] Consider per-environment D1 (staging vs prod) if a staging bot is added later.

docs/using-cron.md (new file, 283 lines)
@@ -0,0 +1,283 @@

# Using Cron (Scheduled Jobs)

Cron allows modules to run scheduled tasks at fixed intervals. Use crons for cleanup (purging old data), maintenance (recomputing stats), or periodic notifications.

## Declaring Crons

In your module's default export, add a `crons` array:

```js
export default {
  name: "mymod",
  init: async ({ db, sql, env }) => { /* ... */ },
  commands: [ /* ... */ ],
  crons: [
    {
      schedule: "0 17 * * *", // 5 PM UTC daily
      name: "cleanup",        // human-readable identifier
      handler: async (event, ctx) => {
        // event.cron = "0 17 * * *"
        // event.scheduledTime = timestamp (ms)
        // ctx = { db, sql, env } (same as module init)
      },
    },
  ],
};
```

**Handler signature:**

```js
async (event, { db, sql, env }) => {
  // event.cron — the schedule string that fired
  // event.scheduledTime — Unix timestamp (ms)
  // db  — namespaced KV store (same as init)
  // sql — namespaced D1 store (same as init), null if not bound
  // env — raw worker environment
}
```

## Cron Expression Syntax

Standard 5-field cron format (minute, hour, day-of-month, month, day-of-week):

```
minute  hour  day-of-month  month  day-of-week
0-59    0-23  1-31          1-12   0-6 (0 = Sunday)

"0 17 * * *"  — 5 PM UTC daily
"*/5 * * * *" — every 5 minutes
"0 0 1 * *"   — midnight on the 1st of each month
"0 9 * * 1"   — 9 AM UTC every Monday
"30 2 * * *"  — 2:30 AM UTC daily
```

See Cloudflare's [cron expression docs](https://developers.cloudflare.com/workers/configuration/cron-triggers/) for full syntax.

## Registering in wrangler.toml

**Important:** Crons declared in a module MUST also be listed in `wrangler.toml`, because Cloudflare needs to know at deploy time which schedules to fire.

Edit `wrangler.toml`:

```toml
[triggers]
crons = ["0 17 * * *", "0 0 * * *", "*/5 * * * *"]
```

Both the module contract and the `[triggers] crons` array must list the same schedules. If a schedule is in the module but not in `wrangler.toml`, Cloudflare won't fire it. If it's in `wrangler.toml` but not in any module, the worker won't know what to do with it.

**Multiple modules can share a schedule** — all matching handlers fire (fan-out). Each module must declare its own `crons` entry; the registry validates them at load time.

## Handler Details

### Error Isolation

If one handler fails, the others still run. Each handler is wrapped in try/catch:

```js
// In cron-dispatcher.js
for (const entry of matching) {
  ctx.waitUntil(
    (async () => {
      try {
        await entry.handler(event, handlerCtx);
      } catch (err) {
        console.error(`[cron] handler failed:`, err);
      }
    })(),
  );
}
```

Errors are logged to the Workers console but don't crash the dispatch loop.

### Execution Time Limits

Cloudflare cron tasks have a **15-minute wall-clock limit**. Operations exceeding this timeout are killed. For large data operations:

- Batch in chunks (e.g., delete 1000 rows at a time, looping)
- Use pagination to avoid loading entire datasets into memory
- Monitor execution time and add logging
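
One way to honor the chunking advice, sketched against the `SqlStore` API with a hypothetical `mymod_old` table — the chunk size and schema are assumptions for illustration, not project conventions:

```javascript
// Delete in bounded batches: select a page of ids, delete them, repeat.
// Each statement stays small, and the loop exits as soon as a page
// comes back short (nothing left to delete).
async function trimInChunks(sql, cutoffMs, chunkSize = 1000) {
  let total = 0;
  for (;;) {
    const page = await sql.all(
      "SELECT id FROM mymod_old WHERE created < ? LIMIT ?",
      cutoffMs,
      chunkSize
    );
    if (page.length === 0) break;
    const placeholders = page.map(() => "?").join(", ");
    const res = await sql.run(
      `DELETE FROM mymod_old WHERE id IN (${placeholders})`,
      ...page.map((r) => r.id)
    );
    total += res.changes;
    if (page.length < chunkSize) break; // last partial page — done
  }
  return total;
}
```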

### Context Availability

Cron handlers run in the same Worker runtime as HTTP handlers, so they have access to:

- `db` — the module's namespaced KV store (read/write)
- `sql` — the module's namespaced D1 store (read/write), or `null` if not configured
- `env` — all Worker environment bindings (secrets, etc.)

### Return Value

Handlers should return `Promise<void>`. The runtime ignores return values.

## Local Testing

Use the local `wrangler dev` server to simulate cron triggers:

```bash
npm run dev
```

In another terminal, send a simulated cron request:

```bash
# Trigger the 5 PM daily cron
curl "http://localhost:8787/__scheduled?cron=0+17+*+*+*"

# URL-encode the cron string (spaces → +)
```

The Worker responds with `200` and logs handler output to the dev server console.

### Simulating Multiple Crons

If you have several crons with different schedules, test each one by passing its exact schedule string:

```bash
curl "http://localhost:8787/__scheduled?cron=*/5+*+*+*+*" # every 5 min
curl "http://localhost:8787/__scheduled?cron=0+0+1+*+*"   # monthly
```

## Worked Example: Trade Retention

The trading module uses a daily cron at `0 17 * * *` (5 PM UTC) to trim old trades:

**Module declaration (src/modules/trading/index.js):**

```js
crons: [
  {
    schedule: "0 17 * * *",
    name: "trim-trades",
    handler: (event, ctx) => trimTradesHandler(event, ctx),
  },
],
```

**wrangler.toml:**

```toml
[triggers]
crons = ["0 17 * * *"]
```

**Handler (src/modules/trading/retention.js):**

```js
/**
 * Delete trades older than 90 days.
 */
export async function trimTradesHandler(event, { sql }) {
  if (!sql) return; // database not configured

  const ninetyDaysAgoMs = Date.now() - 90 * 24 * 60 * 60 * 1000;

  const result = await sql.run(
    "DELETE FROM trading_trades WHERE ts < ?",
    ninetyDaysAgoMs
  );

  console.log(`[cron] trim-trades: deleted ${result.changes} old trades`);
}
```

At 5 PM UTC every day, Cloudflare fires the `0 17 * * *` cron. The Worker loads the registry, finds the trading module's handler, executes `trimTradesHandler`, and logs the number of deleted rows.

## Worked Example: Stats Recalculation

Imagine a leaderboard module that caches top-10 stats:

```js
export default {
  name: "leaderboard",
  init: async ({ db, sql }) => {
    // ...
  },
  crons: [
    {
      schedule: "0 12 * * *", // noon UTC daily
      name: "refresh-stats",
      handler: async (event, { sql, db }) => {
        if (!sql) return;

        // Recompute aggregate stats from raw data
        const topTen = await sql.all(
          `SELECT user_id, SUM(score) AS total_score
           FROM leaderboard_plays
           GROUP BY user_id
           ORDER BY total_score DESC
           LIMIT 10`
        );

        // Cache in KV for a fast /leaderboard command response
        await db.putJSON("cached_top_10", topTen);
        console.log(`[cron] refresh-stats: updated top 10`);
      },
    },
  ],
};
```

Every day at noon, the leaderboard updates its cached stats without waiting for a user request.

## Crons and Cold Starts

Crons execute on a fresh Worker instance (potential cold start). Module `init` hooks run before the first handler, so cron handlers can safely assume initialization is complete.

If `init` throws, the cron fires anyway but has `sql` and `db` in a half-initialized state. Handle this gracefully:

```js
handler: async (event, { sql, db }) => {
  if (!sql) {
    console.warn("sql store not available, skipping");
    return;
  }
  // proceed with confidence
}
```

## Adding a New Cron

1. **Declare in module:**

   ```js
   crons: [
     { schedule: "0 3 * * *", name: "my-cron", handler: myHandler }
   ],
   ```

2. **Add to wrangler.toml:**

   ```toml
   [triggers]
   crons = ["0 3 * * *", "0 17 * * *"] # keep existing schedules
   ```

3. **Deploy:**

   ```bash
   npm run deploy
   ```

4. **Test locally:**

   ```bash
   npm run dev
   # in another terminal:
   curl "http://localhost:8787/__scheduled?cron=0+3+*+*+*"
   ```

## Monitoring Crons

Cron execution is logged to the Cloudflare Workers console. Check the tail:

```bash
npx wrangler tail
```

Look for `[cron]` prefixed log lines to see which crons ran and what they did.

# Using D1 (SQL Database)

D1 is Cloudflare's serverless SQL database. Use it when your module needs to query structured data, perform scans, or maintain append-only history. For simple key → JSON blobs or per-user state, KV is lighter and faster.

## When to Choose D1 vs KV

| Use Case | D1 | KV |
|----------|----|----|
| Simple key → JSON state | — | ✓ |
| Per-user blob (config, stats) | — | ✓ |
| Relational queries (JOIN, GROUP BY) | ✓ | — |
| Scans (all users' records, filtered) | ✓ | — |
| Leaderboards, sorted aggregates | ✓ | — |
| Append-only history/audit log | ✓ | — |
| Exact row counts with WHERE | ✓ | — |

The trading module uses D1 for `trading_trades` (append-only history). Each `/trade_buy` and `/trade_sell` writes a row; `/history` scans the last N rows per user.

## Accessing SQL in a Module

In your module's `init`, receive `sql` (alongside `db` for KV):

```js
/** @type {import("../../db/sql-store-interface.js").SqlStore | null} */
let sql = null;

const myModule = {
  name: "mymod",
  init: async ({ db, sql: sqlStore, env }) => {
    sql = sqlStore; // cache for handlers
  },
  commands: [
    {
      name: "myquery",
      visibility: "public",
      description: "Query the database",
      handler: async (ctx) => {
        if (!sql) {
          await ctx.reply("Database not configured");
          return;
        }
        const rows = await sql.all("SELECT * FROM mymod_items LIMIT 10");
        await ctx.reply(`Found ${rows.length} rows`);
      },
    },
  ],
};
```

**Important:** `sql` is `null` when `env.DB` is not bound (e.g., in tests without a fake D1 setup). Always guard:

```js
if (!sql) {
  // handle gracefully — module still works, just without persistence
}
```

## Table Naming Convention

All tables must follow the pattern `{moduleName}_{table}`:

- `trading_trades` — trading module's trades table
- `mymod_items` — mymod's items table
- `mymod_leaderboard` — mymod's leaderboard table

Enforce this by convention in code review. The `sql.tablePrefix` property is available for dynamic table names:

```js
const tableName = `${sql.tablePrefix}items`; // = "mymod_items"
await sql.all(`SELECT * FROM ${tableName}`);
```

## Writing Migrations

Migrations live in `src/modules/<name>/migrations/NNNN_descriptive.sql`. Files are sorted lexically and applied in order (one-way only; no down migrations).

**Naming:** Use a 4-digit numeric prefix, then a descriptive name:

```
src/modules/trading/migrations/
├── 0001_trades.sql    # first migration
├── 0002_add_fees.sql  # second migration (optional)
└── 0003_...
```

**Example migration:**

```sql
-- src/modules/mymod/migrations/0001_items.sql
CREATE TABLE mymod_items (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  user_id INTEGER NOT NULL,
  name TEXT NOT NULL,
  created_at INTEGER NOT NULL
);
CREATE INDEX idx_mymod_items_user ON mymod_items(user_id);
```

Key points:

- One-way only — migrations never roll back.
- Create indexes for columns you'll filter/sort by.
- Use `user_id` (snake_case in SQL), not `userId`.
- Reference other tables with full names: `other_module_items`.

The migration runner (`scripts/migrate.js`) tracks applied migrations in a `_migrations` table and skips any that have already run.

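The skip logic amounts to a set difference over lexically sorted filenames. A minimal sketch (the function and parameter names here are illustrative; the actual implementation lives in `scripts/migrate.js`):

```js
// Given all migration filenames on disk and the names already recorded in
// the _migrations table, return the ones that still need to run, in order.
function pendingMigrations(migrationFiles, appliedNames) {
  const applied = new Set(appliedNames);
  return migrationFiles
    .slice()
    .sort() // lexical order: 0001_..., 0002_..., ...
    .filter((file) => !applied.has(file));
}
```
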
## SQL API Reference

The `SqlStore` provides these methods. All accept parameterized bindings (`?` placeholders):

### `run(query, ...binds)`

Execute INSERT / UPDATE / DELETE / CREATE. Returns `{ changes, last_row_id }`.

```js
const result = await sql.run(
  "INSERT INTO mymod_items (user_id, name) VALUES (?, ?)",
  userId,
  "Widget"
);
console.log(result.last_row_id); // newly inserted row ID
```

### `all(query, ...binds)`

Execute SELECT, return all rows as plain objects.

```js
const items = await sql.all(
  "SELECT * FROM mymod_items WHERE user_id = ?",
  userId
);
// items = [{ id: 1, user_id: 123, name: "Widget", created_at: 1234567 }, ...]
```

### `first(query, ...binds)`

Execute SELECT, return the first row or `null` if no match.

```js
const item = await sql.first(
  "SELECT * FROM mymod_items WHERE id = ?",
  itemId
);
if (!item) {
  // not found
}
```

### `prepare(query, ...binds)`

Advanced: return a D1 prepared statement for use with `.batch()`.

```js
const stmt = sql.prepare("INSERT INTO mymod_items (user_id, name) VALUES (?, ?)");
const batch = [
  stmt.bind(userId1, "Item1"),
  stmt.bind(userId2, "Item2"),
];
await sql.batch(batch);
```

### `batch(statements)`

Execute multiple prepared statements in a single round-trip.

```js
const stmt = sql.prepare("INSERT INTO mymod_items (user_id, name) VALUES (?, ?)");
const results = await sql.batch([
  stmt.bind(userId1, "Item1"),
  stmt.bind(userId2, "Item2"),
  stmt.bind(userId3, "Item3"),
]);
```

## Running Migrations

### Production

```bash
npm run db:migrate
```

This walks `src/modules/*/migrations/*.sql` (sorted), checks which have already run (tracked in the `_migrations` table), and applies only new ones via `wrangler d1 execute --remote`.

### Local Dev

```bash
npm run db:migrate -- --local
```

Applies migrations to your local D1 binding in `.dev.vars`.

### Preview (Dry Run)

```bash
npm run db:migrate -- --dry-run
```

Prints the migration plan without executing anything. Useful before a production deploy.

## Testing with Fake D1

For hermetic unit tests without a real D1 binding, use `tests/fakes/fake-d1.js`. It's a minimal in-memory SQL implementation that covers common patterns:

```js
import { describe, it, expect, beforeEach } from "vitest";
import { FakeD1 } from "../../fakes/fake-d1.js";

describe("trading trades", () => {
  let sql;

  beforeEach(async () => {
    const fakeDb = new FakeD1();
    // set up the schema the module expects
    await fakeDb.run(
      "CREATE TABLE trading_trades (id INTEGER PRIMARY KEY, user_id INTEGER, qty INTEGER)"
    );
    sql = fakeDb;
  });

  it("inserts and retrieves trades", async () => {
    await sql.run(
      "INSERT INTO trading_trades (user_id, qty) VALUES (?, ?)",
      123,
      10
    );
    const rows = await sql.all(
      "SELECT qty FROM trading_trades WHERE user_id = ?",
      123
    );
    expect(rows).toEqual([{ qty: 10 }]);
  });
});
```

Note: `FakeD1` supports a subset of SQL features needed for current modules. Extend it in `tests/fakes/fake-d1.js` if you need additional syntax (CTEs, window functions, etc.).

## First-Time D1 Setup

If your deployment environment doesn't have a D1 database yet:

```bash
npx wrangler d1 create miti99bot-db
```

Copy the database ID from the output, then add it to `wrangler.toml`:

```toml
[[d1_databases]]
binding = "DB"
database_name = "miti99bot-db"
database_id = "<paste-id-here>"
```

Then run migrations:

```bash
npm run db:migrate
```

The `_migrations` table is created automatically. After that, new migrations apply on every deploy.

## Worked Example: Simple Counter

**Migration:**

```sql
-- src/modules/counter/migrations/0001_counters.sql
CREATE TABLE counter_state (
  id INTEGER PRIMARY KEY CHECK (id = 1),
  count INTEGER NOT NULL DEFAULT 0
);
INSERT INTO counter_state (count) VALUES (0);
```

**Module:**

```js
import { createCounterHandler } from "./handler.js";

/** @type {import("../../db/sql-store-interface.js").SqlStore | null} */
let sql = null;

export default {
  name: "counter",
  init: async ({ sql: sqlStore }) => {
    sql = sqlStore;
  },
  commands: [
    {
      name: "count",
      visibility: "public",
      description: "Increment global counter",
      handler: (ctx) => createCounterHandler(sql)(ctx),
    },
  ],
};
```

**Handler:**

```js
export function createCounterHandler(sql) {
  return async (ctx) => {
    if (!sql) {
      await ctx.reply("Database not configured");
      return;
    }

    // increment
    await sql.run("UPDATE counter_state SET count = count + 1 WHERE id = 1");

    // read current
    const row = await sql.first("SELECT count FROM counter_state WHERE id = 1");
    await ctx.reply(`Counter: ${row.count}`);
  };
}
```

Run `/count` multiple times and watch the counter increment. The count persists across restarts because it's stored in D1.

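The handler above issues two statements (UPDATE, then SELECT). SQLite's `RETURNING` clause can combine them into one round-trip; whether this works through `first()` depends on your `SqlStore` implementation surfacing rows from an UPDATE, so treat this as a sketch to verify, not the module's actual code:

```js
// Hypothetical single-statement variant using SQLite's RETURNING clause.
// Assumes sql.first() can execute UPDATE ... RETURNING and return the row;
// verify against the SqlStore implementation before relying on it.
// (Exported from the module's handler.js in practice.)
function createAtomicCounterHandler(sql) {
  return async (ctx) => {
    if (!sql) {
      await ctx.reply("Database not configured");
      return;
    }
    const row = await sql.first(
      "UPDATE counter_state SET count = count + 1 WHERE id = 1 RETURNING count"
    );
    await ctx.reply(`Counter: ${row.count}`);
  };
}
```
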