sahil_mehta.

02 / project

Custom Reports & Dashboards

Self-Serve Analytics Platform · Enidus

React · Vite · Node.js · Express · SQL Server · Stored Procedures · JWT · CSRF · Cron

The problem

Enterprise customers in Enidus's portal needed real answers from their account data — line counts on a given billing account, devices due for renewal, anomalous usage by cost center, scheduled monthly summaries delivered to finance — and the only way to get those answers was to file a ticket with engineering. Engineering would write a one-off SQL query, paste the results into an email, and the cycle would repeat. Turnaround was measured in days. Customers were unhappy, engineering time was bleeding into repetitive queries, and the queue never went down.

I built Custom Reports & Dashboards end-to-end — backend, frontend, database, scheduler, security layer, every line — to make that loop disappear. Self-serve: customers compose their own reports, dashboards, and charts over their own data without ever opening a ticket.

What it actually contains

This is not a "reports tool." It's a small product:

  • Report Builder — a visual filter UI (over twenty filter operators, multi-level sorting, column selection with custom aliases, top/bottom row limiting, real-time preview) that compiles to validated SQL against the customer's tenant-scoped data source.
  • Custom Dashboards — a drag-tile composer where each tile is bound to a saved report or a chart definition. Dashboards have their own permission model layered on top of the report permissions.
  • Chart Wizard — a generic chart widget that takes a chart definition + a dataset + a filter context and renders interactively. Lets customers promote a tabular report into a visualization without re-querying.
  • Scheduling — timezone-aware cron-driven exports. Daily / weekly / monthly / one-time. Output formats: CSV, password-protected XLSX, inline-HTML for email. Per-schedule custom subject + header + footer + CC/BCC with disclaimer notifications.
  • Permissions — per-report sharing (public, admin-only, or specific users), validated server-side; the frontend mirrors but never trusts the client.
  • Execution History — every scheduled and ad-hoc run lands in an audit table with status, duration, recipients, error message, and a retry handle. Admins see everything; users see their own.
  • Auth & onboarding — email-OTP user verification with short expiry, JWT session with refresh, admin-bootstrap flow.

The stack: React + Vite + a commercial-grade data grid + a query-builder library on the frontend; Node.js + Express on the backend; SQL Server with stored procedures for every CRUD path; SendGrid for email delivery.

The security architecture

A self-serve query tool over a multi-tenant production database is a vulnerability surface waiting to happen. The system has to make several attack classes structurally impossible, not merely mitigated. Four layers:

1. Stored procedures as the contract. No raw SQL from controllers. Every CRUD path goes through a stored procedure; the application layer can't compose ad-hoc queries against the operational tables. The DBA gets a single auditable surface; we get parameterization for free; changes to data shape happen in one place.
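
The shape of that contract can be sketched as a small descriptor layer (procedure and action names here are hypothetical, not Enidus's actual identifiers): controllers ask for a named action with typed parameters, and a thin adapter hands the descriptor to the SQL Server driver's parameterized `execute` call — nowhere can a controller splice raw SQL.

```javascript
// Sketch, with hypothetical names: every CRUD action maps to a stored
// procedure plus named parameters. The adapter that consumes this
// descriptor binds each parameter via the driver (e.g. request.input()
// in node-mssql) before calling execute(proc).
const PROC_MAP = {
  createReport: "usp_Report_Create",
  updateReport: "usp_Report_Update",
  deleteReport: "usp_Report_Delete",
  getReport: "usp_Report_Get",
};

function buildProcCall(action, params) {
  const proc = PROC_MAP[action];
  if (!proc) {
    // Unknown actions fail closed: no fallback to ad-hoc SQL.
    throw new Error(`No stored procedure registered for action: ${action}`);
  }
  // Only named parameters survive; there is no string-concatenation path.
  return { proc, params: { ...params } };
}

// A controller requests a descriptor; it never writes SQL.
const call = buildProcCall("getReport", { reportId: 42, tenantId: 7 });
```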

2. Two-layer filter validation. The query-builder produces structured filter JSON. A composition layer in the application decodes it, enforces operator-specific formatting (between requires two operands, in requires a list, contains wraps with %), quotes strings and dates, validates numeric values numerically, and rejects anything outside an explicit operator allowlist. Then a database-side validation procedure re-checks the filter before any save commits — catching any drift between what the application accepts and what the runtime SQL would actually execute. Defense in depth in the literal sense: two independent rule sets, both have to pass.
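
The application-side half of that composition layer might look like the following minimal sketch (operator names and the filter shape are assumptions for illustration): structured filter JSON in, a SQL fragment out, with operator-specific operand checks and an explicit allowlist that rejects everything else.

```javascript
// Sketch of the application-layer filter composer. Each operator enforces
// its own operand shape; anything outside the allowlist is rejected.
const OPERATORS = {
  eq:  (c, v) => `${c} = ${lit(v)}`,
  neq: (c, v) => `${c} <> ${lit(v)}`,
  gt:  (c, v) => `${c} > ${lit(v)}`,
  between: (c, v) => {
    if (!Array.isArray(v) || v.length !== 2) throw new Error("between requires two operands");
    return `${c} BETWEEN ${lit(v[0])} AND ${lit(v[1])}`;
  },
  in: (c, v) => {
    if (!Array.isArray(v) || v.length === 0) throw new Error("in requires a non-empty list");
    return `${c} IN (${v.map(lit).join(", ")})`;
  },
  contains: (c, v) => `${c} LIKE ${lit(`%${v}%`)}`,
};

// Literals: numbers validated numerically, strings quoted and escaped.
function lit(v) {
  if (typeof v === "number" && Number.isFinite(v)) return String(v);
  if (typeof v === "string") return `'${v.replace(/'/g, "''")}'`;
  throw new Error(`Unsupported literal type: ${typeof v}`);
}

function compileFilter({ column, operator, value }) {
  if (!/^[A-Za-z_][A-Za-z0-9_]*$/.test(column)) throw new Error("Invalid column name");
  const op = OPERATORS[operator];
  if (!op) throw new Error(`Operator not in allowlist: ${operator}`);
  return op(column, value);
}
```

In the real system a database-side validation procedure then re-checks the same filter independently before any save commits.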

3. Tenant scope injected at the runner. Each dataset carries a tenant clause as metadata that customers never see or set. When the runner executes a report, it splices the validated tenant clause into the dataset's base SQL, bound from the authenticated session. There is no shape of saved report — by mistake or by intent — that can address another tenant's data, because the binding doesn't happen at save time, it happens at execution time, against a clause the customer never touched.
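
A minimal sketch of that runtime binding, with hypothetical field names: the tenant clause lives on the dataset metadata, the tenant id comes from the authenticated session, and both are combined only when the runner builds the final SQL — the saved report contributes only its own filters.

```javascript
// Sketch (hypothetical shape): the saved report carries no tenant filter.
// The runner splices the dataset's server-side tenant clause into the
// final SQL at execution time and binds the id from the session.
function buildRunnableSql(dataset, savedReport, session) {
  // dataset.tenantClause is metadata stored with the dataset,
  // never with the report, and never shown to the customer.
  const where = [dataset.tenantClause, savedReport.whereSql]
    .filter(Boolean)
    .join(" AND ");
  return {
    sql: `${dataset.baseSql} WHERE ${where}`,
    params: { tenantId: session.tenantId }, // bound from the session, not the report
  };
}
```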

4. Standard hardening. JWT auth on every protected route. Per-session CSRF tokens with rotation; required on every state-changing request. Strict Content Security Policy headers. bcrypt for password hashing. AES-256-CBC for the Excel passwords customers set on scheduled XLSX exports — stored encrypted with a key from environment, separate from the database connection, so even DB compromise doesn't leak the password the recipient needs to open their export. Preview SQL is enforced SELECT-only with auto-injected row caps; the preview surface can't ever do harm.

Multi-tenant isolation, concretely

Every meaningful row carries a tenant identifier. The runner injects the tenant scope from validated dataset metadata at execution time. Cron jobs run with the same RBAC context as the user who scheduled them — a scheduled export can never see data the user couldn't see when they created the report. This eliminates the "I scheduled a report while I had broader access; now I get sensitive data forever" failure mode that quietly compromises a lot of self-serve analytics products.
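
The scheduled-run guard can be sketched as a run-context resolver (names hypothetical): access is looked up against the live RBAC tables at execution time, never from a snapshot saved with the job, so revoking a user's access also revokes their scheduled exports.

```javascript
// Sketch (hypothetical names): a scheduled run executes with the scheduling
// user's *current* access. loadUserAccess queries the live RBAC tables,
// not a snapshot saved with the job.
function resolveRunContext(schedule, loadUserAccess) {
  const access = loadUserAccess(schedule.createdByUserId);
  if (!access || !access.active) {
    // Access revoked since scheduling: the export simply does not run.
    return { allowed: false, reason: "scheduling user no longer has access" };
  }
  return {
    allowed: true,
    tenantId: access.tenantId, // same tenant scope as an interactive run
    roles: access.roles,       // same RBAC the user holds right now
  };
}
```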

What it changed

Analytics turnaround went from days (file ticket → engineering writes query → email results → customer asks for one tweak → repeat) to minutes (customer composes the report → preview → schedule). The engineering time previously spent on one-off SQL went back to product work. Customer-support escalations tied to "I can't get my data" effectively stopped.

This is also the project that established the Enterprise Portal plugin pattern — shared auth shell, shared multi-tenant primitives, shared RBAC, separately deployable feature plugins. Custom Reports shipped first, validated those primitives in production, and the AI Copilot was built on top of the same foundation a year later.

Engineering decisions worth defending

  • Curated dimension catalog over free-form SQL. Letting customers write their own SQL would have been faster to ship and would have been a security disaster. The catalog approach (datasets define the queryable surface) takes more upfront work but the system's blast radius for a misconfigured query is bounded. The right tradeoff for billing-grade data.
  • Tenant clause bound at runtime, not at save time. First draft had tenant scoping baked into each saved report. Code review caught it. Moving the binding to runtime, against validated metadata the customer doesn't control, means there is no shape of saved report that can leak across tenants.
  • Two independent rule sets, not one. The application-layer composer and the database-layer validation use overlapping but not identical rules — drift between them is itself a smell that gets caught in CI. One layer alone would be one bug from a vulnerability.
  • Excel passwords encrypted at rest with AES-256-CBC. Customers set a password on scheduled exports so the recipient needs the password to open. We have to use the password at execution time to encrypt the workbook, so it can't just be hashed — encrypted with a key from environment, kept separate from the database connection.
  • Cron through the same code path as interactive runs. No separate "scheduled" code path means no separate attack surface to maintain. The runner accepts a report identifier and treats interactive vs. scheduled as a flag, not as different code.
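
The single-code-path design in that last bullet can be sketched as follows (field names assumed for illustration): the trigger is just a field on the run request, recorded for the audit trail, while validation, tenant scoping, and execution are identical for both.

```javascript
// Sketch of the unified runner entry point (hypothetical names): interactive
// and scheduled executions build the same run request; only the trigger
// field differs, and it exists purely for the execution-history audit row.
function makeRunRequest(reportId, context, trigger) {
  if (trigger !== "interactive" && trigger !== "scheduled") {
    throw new Error(`Unknown trigger: ${trigger}`);
  }
  return {
    reportId,
    tenantId: context.tenantId,
    userId: context.userId,
    trigger, // logged to execution history; ignored by validation and scoping
  };
}
```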

Tech

React · Vite · commercial data-grid + query-builder UI · Node.js · Express · SQL Server with stored procedures · JWT + per-session CSRF rotation · strict CSP · bcrypt · AES-256-CBC for stored Excel passwords · SendGrid · cron-driven scheduler with timezone-aware next-run computation · multi-tenant via runtime tenant-clause injection.

Built end-to-end alone at Enidus. Currently in production for the Enidus Enterprise Portal customer base. Architecture details (specific function names, internal schemas) live behind the work — happy to walk through them in interviews.


next project

AI Chatbot & Agentic Copilot

T-Mobile for Business · Enidus