# Architecture Overview
Lucky Funatic's backend is a Go REST API that serves the Telegram Mini App frontend and handles all game logic server-side. The system is designed around three databases, a layered service architecture, and background workers for async processing.
## High-Level Architecture

```mermaid
flowchart TD
    TG["Telegram Mini App\n(Frontend)"] -->|"HTTPS + JWT"| API["Go API\n(Fiber Framework)"]
    MG["Minigame Clients\n(Iframe)"] -->|"HTTPS + JWT"| API
    FP["Funtico Platform\n(Laravel)"] -->|"HTTPS + Bearer Token"| API
    API --> MW["Middleware Layer\n(Auth, CORS, Session)"]
    MW --> Handlers["Handlers\n(HTTP request processing)"]
    Handlers --> Services["Service Layer\n(Business logic)"]
    Services --> Repos["Repository Layer\n(Data access)"]
    Repos --> MySQL["MySQL\n(Relational data)"]
    Repos --> Redis["Redis\n(Real-time state)"]
    Repos --> Scylla["ScyllaDB\n(Time-series data)"]
    API --> WS["WebSocket\n(Balloon game)"]
    Workers["Background Workers"] --> Repos
```
## Tech Stack
| Component | Technology | Purpose |
|---|---|---|
| Language | Go 1.23 | Backend API |
| Web Framework | Fiber v2 | HTTP server, routing, middleware |
| Primary Database | MySQL | Users, game state, cards, tournaments, store |
| Cache & Real-time | Redis | Game state cache, leaderboards, boosters, distributed locks |
| Time-series DB | ScyllaDB (Cassandra-compatible) | State change audit trail, session tracking, transaction history |
| Authentication | JWT v5 | Token-based auth for all API consumers |
| Distributed Locks | Redsync | Preventing race conditions across instances |
| Monitoring | Sentry + Betterstack | Error tracking and logging |
| Metrics | Prometheus | Application metrics endpoint |
| ID Generation | Custom Snowflake | 64-bit distributed unique IDs |
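The ID generation row above refers to a custom Snowflake generator. The exact bit split in this codebase is not documented here, so the following sketch assumes the conventional layout: 41 bits of millisecond timestamp, 10 bits of node ID, and a 12-bit per-millisecond sequence.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// SnowflakeGen produces 64-bit IDs: 41 bits of milliseconds since a custom
// epoch, 10 bits of node ID, and a 12-bit per-millisecond sequence.
type SnowflakeGen struct {
	mu    sync.Mutex
	epoch int64 // custom epoch in ms
	node  int64 // 0..1023, unique per API instance
	last  int64 // last timestamp used
	seq   int64 // 0..4095
}

func (g *SnowflakeGen) Next() int64 {
	g.mu.Lock()
	defer g.mu.Unlock()
	now := time.Now().UnixMilli() - g.epoch
	if now == g.last {
		g.seq = (g.seq + 1) & 0xFFF
		if g.seq == 0 { // sequence exhausted: spin until the next millisecond
			for now <= g.last {
				now = time.Now().UnixMilli() - g.epoch
			}
		}
	} else {
		g.seq = 0
	}
	g.last = now
	return now<<22 | g.node<<12 | g.seq
}

func main() {
	g := &SnowflakeGen{epoch: 1700000000000, node: 42}
	a, b := g.Next(), g.Next()
	fmt.Println(a < b)           // IDs are strictly increasing per generator
	fmt.Println((a>>12) & 0x3FF) // node bits are recoverable from the ID
}
```

Because the timestamp occupies the high bits, IDs sort chronologically, which is why Snowflake-style IDs work well as primary keys across multiple API instances.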
## Three-Database Strategy
The system deliberately splits data across three databases based on access patterns:
MySQL handles all relational data that needs ACID transactions and complex queries. This includes user accounts, card definitions, tournament configurations, store items, quest definitions, and the authoritative game state. When a player buys a card or enters a tournament, MySQL is the source of truth.
Redis acts as the hot cache and real-time state engine. Every game route request reads the player's game state from Redis first; if more than 60 seconds have passed since the last persist, the state is written back to MySQL. Redis also powers leaderboards (sorted sets), active booster timers, distributed locking (Redsync), and runs Lua scripts for atomic operations like passive income calculation and booster activation.
ScyllaDB stores high-volume time-series data that doesn't need relational queries. This includes the state change audit trail (every currency change, booster activation, quest completion), player session analytics, booster snapshots for recovery, and item transaction history. ScyllaDB's write-optimized architecture handles the volume without impacting game performance.
```mermaid
flowchart LR
    subgraph MySQL["MySQL (Source of Truth)"]
        Users["Users & Profiles"]
        Cards["Cards & Stories"]
        Tournaments["Tournaments"]
        Store["Store & Listings"]
        Quests["Quests"]
        Raffles["Raffles"]
        GameStatePersist["Game State\n(persistent)"]
    end
    subgraph Redis["Redis (Hot State)"]
        GSCache["Game State Cache"]
        Leaderboards["Leaderboards\n(sorted sets)"]
        Boosters["Active Boosters\n& Cooldowns"]
        Locks["Distributed Locks\n(Redsync)"]
        LuaScripts["Lua Scripts\n(atomic ops)"]
        Sessions["Session Tracking"]
    end
    subgraph Scylla["ScyllaDB (Audit & Analytics)"]
        StateChanges["State Change Log"]
        ItemTx["Item Transactions"]
        SessionHistory["Session History"]
        BoosterSnaps["Booster Snapshots"]
    end
    GSCache <-->|"sync every 60s"| GameStatePersist
    Leaderboards -->|"persist every 5m"| Scylla
    Sessions -->|"flush every 60s"| SessionHistory
```
## Service Architecture
The codebase uses a layered architecture with two service tiers:
### Request Flow
A typical API request flows through these layers:
- Middleware -- validates JWT, sets up CORS, tracks session, ensures game state is loaded from Redis
- Handler -- parses HTTP request, extracts parameters, calls the appropriate service method
- Service -- contains business logic, orchestrates between repositories, enforces game rules
- Repository -- executes database queries, Lua scripts, and external API calls
### Two Service Layers
The codebase has two service packages reflecting an ongoing modernization:
**New Services** (`internal/newservices/`) -- the modern layer with explicit dependency injection. Each service has a focused responsibility:
| Service | Responsibility |
|---|---|
| GameStateService | Central hub -- manages all player state (taps, funz, energy, boosters). Referenced by almost every other service |
| InventoryService | Item management, currency conversions, platform transfers |
| UserService | Authentication, wallet management, login flow |
| CardService | Card catalog, purchases, upgrades, special card forging |
| QuestService | Friend invite quest progression and reward claiming |
| DailyBonusService | Daily wheel spin, streak tracking, reward distribution |
| RaffleService | Raffle eligibility checking and stats |
| TournamentService | Tournament history and game records |
| BalloonService | WebSocket-based balloon pop minigame (real-time game loop) |
| FrenzyModeService | Frenzy multiplier state and activation |
| SessionService | Player session tracking for analytics |
| StateService | Async audit logging of all game state changes to ScyllaDB |
| SupportService | Admin operations (account reset, fund adjustments) |
| TelegramService | Telegram Mini App auth validation (HMAC-SHA256) |
| NotificationService | Notification read status tracking |
| BoosterService | Booster data sync between Redis and ScyllaDB snapshots |
| TimeTrialService | Time trial completion tracking and rewards |
| EarnBannerService | Marketing promotional banners |
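The TelegramService row above refers to HMAC-SHA256 validation of the Mini App's `initData`. The scheme below follows Telegram's published Web App validation algorithm; the function names and sample field values are illustrative.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// checksum implements Telegram's Web App validation scheme: the secret key is
// HMAC-SHA256 keyed with "WebAppData" over the bot token; the payload hash is
// HMAC-SHA256 of the data-check-string (all fields except "hash", sorted as
// "key=value" lines joined with "\n").
func checksum(fields map[string]string, botToken string) string {
	var lines []string
	for k, v := range fields {
		if k != "hash" {
			lines = append(lines, k+"="+v)
		}
	}
	sort.Strings(lines)

	secret := hmac.New(sha256.New, []byte("WebAppData"))
	secret.Write([]byte(botToken))

	mac := hmac.New(sha256.New, secret.Sum(nil))
	mac.Write([]byte(strings.Join(lines, "\n")))
	return hex.EncodeToString(mac.Sum(nil))
}

// validateInitData recomputes the hash and compares in constant time.
func validateInitData(fields map[string]string, botToken string) bool {
	expected := checksum(fields, botToken)
	return hmac.Equal([]byte(expected), []byte(fields["hash"]))
}

func main() {
	fields := map[string]string{
		"auth_date": "1700000000",
		"query_id":  "AAE1",
		"user":      `{"id":1}`,
	}
	fields["hash"] = checksum(fields, "BOT_TOKEN") // what Telegram would attach
	fmt.Println(validateInitData(fields, "BOT_TOKEN"))   // true
	fmt.Println(validateInitData(fields, "WRONG_TOKEN")) // false
}
```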
**Legacy Services** (`internal/services/`) -- older services handling complex business logic not yet migrated:
| Service | Responsibility |
|---|---|
| BoosterSystem | Booster mechanics (activation, cooldowns, daily limits, pricing) |
| QuestService | Quest definitions, completion tracking, reward distribution |
| TournamentService | Tournament lifecycle, scoring, leaderboards, prize distribution |
| StoreService | Item catalog, purchase logic, eligibility, transaction limits |
| RedisService | Redis connection management |
| MySQLService | MySQL connection pooling |
| ScyllaService | ScyllaDB session management |
| MarketingService | Campaign tracking |
### Key Service Interactions
GameStateService sits at the center of the architecture. Most other services depend on it to read or modify player state. InventoryService is the second hub, handling all item movements and platform transfers.
```mermaid
flowchart TD
    UserService --> GameStateService
    UserService --> TelegramService
    UserService --> InventoryService
    CardService --> GameStateService
    QuestService --> GameStateService
    QuestService --> InventoryService
    DailyBonusService --> InventoryService
    RaffleService --> GameStateService
    SupportService --> GameStateService
    TimeTrialService --> GameStateService
    InventoryService --> GameStateService
    GameStateService --> StateService
    InventoryService --> StateService
    StateService -->|"async queue\n(100K buffer)"| ScyllaDB["ScyllaDB"]
```
StateService deserves special mention: it runs an async goroutine with a buffered channel (100,000 capacity) that writes state changes to ScyllaDB without blocking game requests. Every currency change, booster activation, quest completion, and card purchase is logged as an audit event.
## Redis Lua Scripts
Several critical operations use Redis Lua scripts for atomicity. These run entirely within Redis, preventing race conditions from concurrent requests:
| Script | Purpose |
|---|---|
| `get_game_state.lua` | Retrieves game state with active booster effects calculated inline (energy max, regen rate, funz per tap) |
| `apply_passive_income.lua` | Calculates offline income since last update, applies 1.5-hour cap, updates all affected fields atomically |
| `activate_booster.lua` | Validates cooldown/daily limits, deducts currency, applies booster effect, sets timers -- all in one atomic operation |
| `deduct_currency.lua` | Checks balance and deducts atomically (prevents negative balances from race conditions) |
| `update_quest_rewards.lua` | Awards jokers and funz from quest completion atomically |
| `update_tournament_score.lua` | Updates leaderboard sorted set with new score based on tournament's scoring method |
## Concurrency & Distributed Locking
The API is designed to run as multiple instances behind a load balancer. To prevent race conditions:
- Redsync (Redis-based distributed mutex) is used for game state updates, session cleanup, tournament prize distribution, and booster data sync
- Lua scripts handle atomic read-modify-write operations within Redis
- Transaction manager (Avito TRM) wraps MySQL operations that need ACID guarantees
## Entry Point & Bootstrap
The application starts in `main.go`, which:
- Loads configuration from environment variables
- Initializes logging (Sentry, Betterstack, Zerolog)
- Sets up JWT signing
- Creates the App struct (connects to all three databases, initializes all services and repositories)
- Sets up the Fiber server with route groups and middleware
- Optionally starts background workers
- Listens for shutdown signals (SIGTERM, SIGINT) for graceful cleanup
The `-worker` flag allows running as a dedicated worker process without the HTTP server, useful for separating concerns in deployment.
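The flag parsing and signal handling from the bootstrap list can be sketched with `signal.NotifyContext`, a standard Go pattern for graceful shutdown (the actual `main.go` structure is an assumption; the sketch signals itself so it terminates on its own).

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

// mode maps the -worker flag onto the two supported process roles.
func mode(worker bool) string {
	if worker {
		return "worker"
	}
	return "api"
}

func main() {
	worker := flag.Bool("worker", false, "run background workers without the HTTP server")
	flag.Parse()

	// NotifyContext cancels ctx on SIGTERM/SIGINT, driving graceful cleanup.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	fmt.Println("starting in", mode(*worker), "mode")

	// Simulate a shutdown signal so the sketch exits; a real deployment
	// receives this from the orchestrator during rollout or scale-down.
	p, _ := os.FindProcess(os.Getpid())
	p.Signal(syscall.SIGTERM)

	<-ctx.Done() // block until a shutdown signal arrives
	fmt.Println("shutting down gracefully")
}
```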