Ship two new loldle-family modules mirroring loldle.net's non-classic
modes. Text-only MVP (ability/splash phases stay deferred).
- loldle-emoji: 5 guesses, emoji-sequence clue. Pool derived algorithmically
from classic's champions.json metadata (species/region/resource mapping
table) since loldle.net's bundle has no static emoji pool.
- loldle-quote: 6 guesses, lore-blurb clue. Pool seeded from Data Dragon
champion title + first lore sentence; champion name redacted to ___.
- scripts/fetch-ddragon-data.js: single generator for both JSONs (sketched
  after this list).
- src/util/normalize-name.js: shared lookup helper; loldle/lookup.js
refactored to import it.
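Roughly what the generator does for the two new pools; a sketch only, where
the mapping-table entries and champions.json field names are illustrative
rather than the shipped values:

    // scripts/fetch-ddragon-data.js (sketch): derive both pools in one pass.
    // SPECIES_EMOJI et al. stand in for the real metadata mapping table.
    const SPECIES_EMOJI  = { Human: '🧑', Yordle: '🍄', Vastaya: '🦊' };
    const REGION_EMOJI   = { Ionia: '🌸', Noxus: '🗡️', Piltover: '⚙️' };
    const RESOURCE_EMOJI = { Mana: '🔵', Energy: '🟡', Fury: '🔴' };

    function emojiClue(champ) {
      // Fixed order (species, region, resource) keeps each sequence stable.
      return [
        SPECIES_EMOJI[champ.species],
        REGION_EMOJI[champ.region],
        RESOURCE_EMOJI[champ.resource],
      ].filter(Boolean).join('');
    }

    function quoteClue(champ) {
      // Title + first lore sentence, champion name redacted to ___.
      const firstSentence = (champ.lore.match(/^.*?\./) || [champ.lore])[0];
      return `${champ.title}. ${firstSentence}`
        .replace(new RegExp(champ.name, 'gi'), '___');
    }

Deriving everything at fetch time keeps both modules pure consumers of
static JSON at runtime, same as classic's champions.json.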
35 new tests (484 total passing). Lint clean.

BGE embeddings occupy a narrow cone in vector space, so raw cosine of
two unrelated words already sits at ~0.40-0.55. Displaying `raw * 100`
made every random guess read as 40-70% warm, which defeated the warmth
UX.
format.js now applies a normalized sigmoid (FLOOR 0.40, CENTER 0.60,
SCALE 8) to remap raw cosine → displayed 0-100. Unrelated pairs drop
to ≤30, loose relation lands around 40-55, clear synonyms hit 85+, and
exact match stays at 100. Emoji buckets were rebased onto the calibrated
score; formatWarmth lost its sign column (calibrated output is always
non-negative).
render.js rounds once and feeds the integer to both formatWarmth and
warmthEmoji so the display value and bucket stay in sync.
Constants are empirical — retune if swapping to a non-BGE model.
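For the record, one normalized-sigmoid shape that reproduces the anchor
points above; treat it as a sketch of the curve, not necessarily the exact
expression in format.js:

    // Logistic curve rescaled so FLOOR maps to 0 and raw cosine 1.0 to 100.
    const FLOOR = 0.40, CENTER = 0.60, SCALE = 8;
    const sigmoid = (x) => 1 / (1 + Math.exp(-SCALE * (x - CENTER)));

    function calibrate(raw) {
      const lo = sigmoid(FLOOR), hi = sigmoid(1.0);
      return Math.min(100, Math.max(0, 100 * (sigmoid(raw) - lo) / (hi - lo)));
    }

    // render.js rounds exactly once, then both consumers get the integer:
    //   const score = Math.round(calibrate(raw));
    //   formatWarmth(score); warmthEmoji(score);

Spot checks on this curve: calibrate(0.55) ≈ 29, calibrate(0.65) ≈ 54,
calibrate(0.82) ≈ 86, calibrate(1.0) = 100, matching the buckets above.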
ConceptNet (api.conceptnet.io) was returning sustained 502s, breaking
every guess with an "Upstream hiccup" reply. Replace it with env.AI.run
on @cf/baai/bge-small-en-v1.5 and score guesses by computing cosine
similarity locally against the target vector (sketch below).
The local google-10k wordlist doubles as the vocabulary set, so OOV
detection is an O(1) Set.has() with no upstream call. The
similarity() response shape is unchanged, so handlers/render/state
stay as-is.
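A sketch of the new scoring path, assuming the documented Workers AI
embedding response shape ({ data: [[...384 floats]] }); the helper names
and return shape here are illustrative:

    // similarity (sketch): embed the guess, score locally against the target.
    const MODEL = '@cf/baai/bge-small-en-v1.5';

    function cosine(a, b) {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    async function similarity(env, guess, targetVec, vocab) {
      // vocab: Set built from the google-10k wordlist; O(1) OOV gate.
      if (!vocab.has(guess)) return { ok: false, reason: 'oov' };
      const res = await env.AI.run(MODEL, { text: [guess] });
      return { ok: true, score: cosine(res.data[0], targetVec) };
    }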
Free on the Workers Free plan: 10k Neurons/day cap, ~0.0037 Neurons
per 2-word guess → ~2.7M guesses/day headroom for this bot.

Captures the feasibility check and auth-token investigation that led to
the lolschedule module: confirmed endpoint + table + field set, and
documented why caching beats token-based rate-limit mitigation on Fandom.

grammY-based bot with a module plugin system loaded from the MODULES env
var. Three command visibility levels (public/protected/private) share a
unified command namespace with conflict detection at registry build.
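A rough shape for the build-time conflict check; module and field names
here are illustrative:

    // registry build (sketch): merge module commands, fail fast on clashes.
    function buildRegistry(modules) {
      const commands = new Map();
      for (const mod of modules) {         // resolved from the MODULES env var
        for (const cmd of mod.commands) {  // { name, visibility, handler, ... }
          if (commands.has(cmd.name)) {
            throw new Error(`/${cmd.name}: declared by both ` +
              `${commands.get(cmd.name).module} and ${mod.name}`);
          }
          commands.set(cmd.name, { ...cmd, module: mod.name });
        }
      }
      return commands; // one namespace across public/protected/private
    }

Throwing at build time surfaces a bad MODULES combination immediately
instead of letting one command silently shadow another.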
- 4 initial modules (util, wordle, loldle, misc); util fully implemented,
others are stubs proving the plugin system end-to-end
- util: /info (chat/thread/sender ids) + /help (pure renderer over the
registry, HTML parse mode, escapes user-influenced strings)
- KVStore interface with CFKVStore and a per-module prefixing factory
  (sketched after this list); getJSON/putJSON convenience helpers; other
  backends drop in via one file
- Webhook at POST /webhook with secret-token validation via grammY's
  webhookCallback (also sketched below); no admin HTTP surface
- Post-deploy register script (npm run deploy = wrangler deploy && node
--env-file=.env.deploy scripts/register.js) for setWebhook and
setMyCommands; --dry-run flag for preview
- 56 vitest unit tests across 7 suites covering registry, db wrapper,
dispatcher, help renderer, validators, and HTML escaper
- biome for lint + format; phased implementation plan under plans/
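The storage bullet, sketched; the prefixing factory's exact shape is an
assumption, getJSON/putJSON are as described:

    // CFKVStore (sketch): KVStore backed by a Workers KV binding.
    class CFKVStore {
      constructor(kv) { this.kv = kv; }
      get(key)        { return this.kv.get(key); }
      put(key, value) { return this.kv.put(key, value); }
      async getJSON(key) {
        const raw = await this.get(key);
        return raw === null ? null : JSON.parse(raw);
      }
      putJSON(key, value) { return this.put(key, JSON.stringify(value)); }
    }

    // Per-module prefixing factory: every module gets its own keyspace,
    // so two modules can both store a "state" key without colliding.
    function scopedStore(store, moduleName) {
      const prefix = `${moduleName}:`;
      return {
        getJSON: (k)    => store.getJSON(prefix + k),
        putJSON: (k, v) => store.putJSON(prefix + k, v),
      };
    }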
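And the webhook wiring: grammY's webhookCallback does take a secretToken
option and ships a 'cloudflare-mod' adapter, so a sketch using those
documented pieces (the env var names are assumptions):

    // worker entrypoint (sketch): POST /webhook only, no admin surface.
    import { Bot, webhookCallback } from 'grammy';

    export default {
      async fetch(request, env) {
        const { pathname } = new URL(request.url);
        if (request.method !== 'POST' || pathname !== '/webhook') {
          return new Response('not found', { status: 404 });
        }
        const bot = new Bot(env.BOT_TOKEN); // module registration elided
        const handle = webhookCallback(bot, 'cloudflare-mod', {
          secretToken: env.WEBHOOK_SECRET, // grammY rejects mismatched tokens
        });
        return handle(request);
      },
    };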