The BI deadlock is over.
zenital selfBI gives IT the governance they need and management the freedom they want.
[Demo chart: Revenue by month]
Every mid-sized company is stuck in the same loop.
IT can't keep up.
Power BI requests pile up. Each one takes weeks because requirements are vague and the data needs reshaping. Two engineers, fourteen tickets, no time to improve the warehouse.
Management can't wait.
So they export a CSV. They build it in Excel. They paste confidential data into ChatGPT. There are now two versions of every chart, and IT has no idea what data is leaking.
The result: data dispersion, unclear requirements, no feedback loop, zero governance. AI spend is invisible. Excel files live on personal laptops. Nobody is happy. This is the problem zenital selfBI solves.
A regional director needs revenue by region this quarter.

Without selfBI:

- Files a request to IT
- Waits for a clarification meeting
- Pushes back on the metric definition
- Gets a final dashboard — already stale
- In parallel: builds an Excel, mails it around. Two versions, two numbers.

With selfBI:

- Opens selfBI
- Types "Revenue by region this quarter vs last"
- Chart appears in 4 seconds
- Adds a filter: "exclude internal sales"
- Pins it. Shares the link with her team.
A real feedback loop, finally.
Every Excel upload, every AI call, every dashboard. Filter by team, user, period. Spot the columns your warehouse should have — straight from how people actually work. Live in /admin today.
- Org-scoped data sources — connect once, choose which workspaces see it
- Cost observability — by workspace, user, model, period
- Excel surveillance — every upload logged with columns and topics
- Wizard observability — every prompt, every "AI is wrong" flag
- AI key policy — one company key, per workspace, or BYOK (coming soon)
AI spend, last 7 days
Recent Excel uploads
- Q2_sales_export.csv · Sales
- churn_score_weekly.xlsx · new column · Retention
- invoices_jan_apr.csv · Finance
- team_workload.csv · Ops
Prompts that failed this week — gaps in the warehouse
- “Show me cancellation reasons by week” · column `cancellation_reason` not in any model
- “Compare churn vs retention this quarter” · column `churn_status` not in any model
- “NPS score by region” · column `nps_response` not in any model
See what your business is asking for — even when nobody asks IT.
Every Excel uploaded is logged with its columns, who uploaded it, and which topics it covers. When three teams upload files with a column called `customer_health_score`, you know what to add to the warehouse next sprint. Live in /admin/files today.
Excel uploads — this week
23 files · 3 with columns not in your warehouse
| File | Uploaded by | Size | Topics | Signal |
|---|---|---|---|---|
| Q2_sales_export.csv | maria.gomez | 12 cols · 8.4k rows | — | |
| churn_score_weekly.xlsx | david.ruiz | 7 cols · 2.1k rows | +2 new cols | |
| NPS_responses_apr.csv | lucia.fdz | 9 cols · 1.3k rows | +1 new col | |
| invoices_jan_apr.csv | lucia.fdz | 18 cols · 14k rows | — | |
| team_workload.csv | pablo.lara | 5 cols · 340 rows | — | |
Hi Maria. I see you connected the Retention dataset. Want me to start with one of these?
Found 1 chart for your data. Using your glossary definition of “churn”: customers who cancelled in the month ÷ active at month start.
Talk to your data. Get the answer.
Type a question in plain English. The wizard already knows your business glossary, your aliases, your filters. The chart appears in seconds.
- Connect any data source — PostgreSQL, MySQL, MSSQL, BigQuery, Redshift, CSV/Excel
- AI Wizard — type the question, get the chart
- Anti–blank canvas — 3 suggestions waiting when you open a dashboard
- Plain-language semantic layer — your words, not the database's
- Business glossary — KPIs match your meetings, not the engineer's
Five live database connectors. Plus CSV and Excel.
Test, introspect, and connect any production database in under a minute. Cryptic columns get AI-suggested aliases automatically. Credentials are AES-256-GCM encrypted before they touch disk.
One connection. Many workspaces. You decide who sees what.
IT connects a data source once at the organization level. Then a visibility matrix says which workspaces (Finance, Sales, Operations…) can use it. The Cube query layer enforces the matrix server-side — no one can query a source they aren't enabled for, even by guessing the cube name.
| Data source | Type | Tables | Finance | Sales | Operations | Marketing | Customer Support |
|---|---|---|---|---|---|---|---|
| CRM (production) | PostgreSQL | 47 | | | | | |
| ERP — billing | MSSQL | 28 | | | | | |
| Warehouse — sales | BigQuery | 14 | | | | | |
| Field operations | PostgreSQL | 22 | | | | | |
| Analytics — Redshift | Redshift | 18 | | | | | |
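The server-side check is simple to picture. A minimal sketch, assuming a `VISIBILITY` map and a `check_access` guard — both names are illustrative, not zenital's actual API:

```python
# Sketch of server-side visibility-matrix enforcement (illustrative names).
# Workspace and source names mirror the demo table above; in the product
# the real check lives in the Cube query layer, not application code.

VISIBILITY = {
    "crm_production":  {"Finance", "Sales", "Marketing"},
    "erp_billing":     {"Finance"},
    "warehouse_sales": {"Sales", "Marketing"},
}

def check_access(workspace: str, cube: str) -> None:
    """Raise before any query is generated if the workspace
    is not enabled for the cube's data source."""
    allowed = VISIBILITY.get(cube, set())
    if workspace not in allowed:
        # Guessing a cube name is not enough: every query
        # hits this check server-side.
        raise PermissionError(f"{workspace} cannot query {cube}")

check_access("Finance", "erp_billing")   # allowed by the matrix
```

The design point: hiding a source from a menu is cosmetic, while rejecting the query in the server-side layer is governance.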
Eight chart types, eight clicks away.
KPI, bar, horizontal bar, line, area, pie, donut, sortable data table — all rendered by the wizard the moment it understands your question. Every chart below was generated from the same demo dataset (ACME — synthetic data, real cube fields).
- Maintenance · 40.0%
- Installation · 27.8%
- Repair · 19.2%
- Inspection · 13.0%
- Barcelona · 32.1%
- Madrid · 23.2%
- Sevilla · 18.8%
- Valencia · 15.4%
- Bilbao · 10.4%
| Technician ⇅ | Region ⇅ | Jobs ⇅ | Invoiced ⇅ |
|---|---|---|---|
| J. Martínez | Cataluña | 41 | €48.2k |
| A. López | Madrid | 38 | €42.1k |
| M. Sánchez | Andalucía | 35 | €39.8k |
| C. Ferrer | Valencia | 32 | €36.4k |
| P. Romero | Norte | 28 | €31.2k |
zenital builds the widget you need.
Some questions need a layout no off-the-shelf chart can express — workload + drill, alert thresholds with one-click drill-down, two-level expandable totals. These ship as permanent custom widgets, marked ◈ zenital. We build them with you in the pilot.
The semantic model is the product.
Every zenital selfBI workspace starts from your raw schema. zenital audits it with /review-source, scores it 0–100, then enriches every column with an alias, a description, and aggregation rules. The same AI that hallucinates on raw columns answers on the first try once the model is curated.
Raw columns: `sk_tecnico` · `hdr_amt_x` · `cod_ent_fac` · `cf_pct_x` · `fc_tipo_t`

- `sk_tecnico` → Technician (`nombreTecnico`). The technician who carried out the job. Use for workload, alerts, and per-person breakdowns.
- `hdr_amt_x` → Total Invoice (`totalFactura`). Gross invoice amount (€), tax included. Sum for revenue. Never SUM a percentage column.
- `cod_ent_fac` → Branch (`nombreSucursal`). Branch that issued the invoice. Use to group revenue by location.
- `cf_pct_x` → CSat % (`cfPorcentaje`). Customer satisfaction score (0–100). Average — never sum. Threshold for alerts: 80.
- `fc_tipo_t` → Job Type (`tipoTrabajo`). Maintenance, Installation, Repair, Inspection. Use as dimension, not as a measure.
- Billable job — Any closed work order with CSat ≥ 80 and a signed-off completion form. Excludes warranty work.
- Active technician — Technician with at least 1 completed job in the last 30 days.
- Norte region — Branches in Bilbao, Pamplona, and Santander. (Different from how the CRM defines it.)
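The enrichment above amounts to a small rule table the wizard consults before aggregating anything. A minimal sketch, with hypothetical names (`SEMANTIC_MODEL`, `allowed_aggregation`) and the demo columns from this page:

```python
# Sketch: aggregation rules from the curated semantic model.
# Column names come from the example above; the rule-table shape
# itself is illustrative, not zenital's schema.

SEMANTIC_MODEL = {
    "hdr_amt_x": {"alias": "Total Invoice", "agg": "sum"},   # euros: safe to SUM
    "cf_pct_x":  {"alias": "CSat %",        "agg": "avg"},   # percentage: never SUM
    "fc_tipo_t": {"alias": "Job Type",      "agg": None},    # dimension, not a measure
}

def allowed_aggregation(column: str, requested: str) -> str:
    """Return the aggregation the model allows, refusing requests
    the curated rules mark as invalid (e.g. SUM over a percentage)."""
    agg = SEMANTIC_MODEL[column]["agg"]
    if agg is None:
        raise ValueError(f"{column} is a dimension, not a measure")
    if requested != agg:
        alias = SEMANTIC_MODEL[column]["alias"]
        raise ValueError(f"{alias} must use {agg}, not {requested}")
    return agg

allowed_aggregation("hdr_amt_x", "sum")   # fine: revenue is summable
```

This is why the curated model beats the raw schema: the "never SUM a percentage" rule is enforced, not just documented.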
Ask, choose, or describe — your dashboard answers back.
Every team works differently. zenital selfBI surfaces three creation paths in the same wizard, all powered by the same semantic layer.
Recipients click a chart — and understand it.
The "Ask AI" button is on every widget the wizard renders. One click → a plain-language explanation using your business glossary. So the chart you shared with the regional director makes sense without a meeting.
Every chart explains itself.
When you share a dashboard, the recipient sees the same Ask AI button you do. One click — a plain-language summary explaining what the chart shows, what's normal, and what to watch.
- Uses your business glossary, so the explanation speaks your team's language
- Available on every widget except custom alert summaries
- Every explanation is logged in /admin so IT sees what was asked
- Powered by your own AI key when BYOK is enabled (Sprint 4)
selfBI vs the workarounds you have today.
Most companies juggle two or three of these. zenital selfBI replaces that mix with one governed surface.
SUMMARIZE(
Sales,
Sales[Region],
"Revenue",
CALCULATE(SUM(Sales[Amount]))
)
| | A | B |
|---|---|---|
| 1 | Region | Revenue |
| 2 | EMEA | 412,300 |
| 3 | NA | 298,500 |
| 4 | =SUM(B2:B… | #REF! |
Anatomy of a question.
No raw SQL ever touches the LLM. Every chart goes through a validated semantic layer that prevents hallucinations.
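To make "validated semantic layer" concrete: the model proposes a structured query over known cube members, never SQL, and anything outside the curated model is rejected before execution. A minimal sketch under assumed names (`validate`; the member sets use fields from the demo dataset):

```python
# Sketch: validate an LLM-proposed query against the semantic model
# before it reaches the query engine. The LLM can only name members
# that exist in the curated model; it never emits SQL.

KNOWN_MEASURES   = {"totalFactura", "cfPorcentaje"}
KNOWN_DIMENSIONS = {"nombreSucursal", "tipoTrabajo", "nombreTecnico"}

def validate(query: dict) -> dict:
    """A hallucinated column fails here, loudly,
    instead of silently producing a wrong chart."""
    for m in query.get("measures", []):
        if m not in KNOWN_MEASURES:
            raise ValueError(f"unknown measure: {m}")
    for d in query.get("dimensions", []):
        if d not in KNOWN_DIMENSIONS:
            raise ValueError(f"unknown dimension: {d}")
    return query

validate({"measures": ["totalFactura"], "dimensions": ["tipoTrabajo"]})
```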
What we don't do.
Most BI tools over-promise. We'd rather tell you up front.
We don't replace Power BI.
We sit above it as the self-service layer. Some clients replace, others run both.
We don't generate "insights".
We generate charts. Insights are still the human's job.
No predictive analytics.
Not the product. We do BI well — not forecasting, not ML.
No real-time streaming.
Cube caches. Freshness depends on your source, and we tell you when.
No "auto-magic" data modelling.
The semantic model still needs care. zenital builds and maintains it as a service. We don't pretend otherwise.
The honest answers.
Power BI is a tool for IT. Self-service in PBI means handing a SQL-less user a DAX editor — most managers won't touch it. selfBI is built for the SQL-less user. The IT side is the bonus, not the core.
ChatGPT doesn't connect to your warehouse with row-level security. It also won't show IT what employees pasted into it. zenital selfBI is the governed alternative.
The org admin chooses the privacy mode in /admin/settings: full telemetry, or "metadata only" (which hides prompt content while keeping cost and usage signals). Either way, every employee knows what is logged.
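A sketch of what "metadata only" means in practice, with hypothetical function and field names: the prompt text is dropped before logging, while cost and usage signals are kept.

```python
# Sketch of the two telemetry modes (illustrative names, not the
# product's schema): "full" keeps the prompt, "metadata_only" drops it
# while preserving the cost/usage signals IT needs.

def log_event(event: dict, mode: str) -> dict:
    record = {k: event[k] for k in ("user", "model", "tokens", "cost_eur")}
    if mode == "full":
        record["prompt"] = event["prompt"]   # only in full telemetry
    return record

event = {"user": "maria", "model": "model-x", "tokens": 812,
         "cost_eur": 0.004, "prompt": "Revenue by region this quarter"}
redacted = log_event(event, "metadata_only")   # no prompt text in the log
```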
Every chart has a "this looks wrong" button that triggers a correction flow in plain language. The AI never generates raw SQL — it goes through a validated semantic layer that prevents hallucinations.
Encryption at rest (AES-256-GCM). Per-org data isolation via row-level security. A unified event log captures every AI call, file upload, dashboard action, and admin change — queryable from /admin. BYOK and SOC 2 path are on the roadmap; everything else is live today.
Yes. zenital selfBI runs in Docker. We offer SaaS, managed deployment, and self-hosted with a license. Many clients start managed and migrate to self-host once they trust the product.
A typical pilot is two weeks. Week 1: IT connects 1–2 sources and we build the semantic model together. Week 2: management runs the wizard against real questions. We help with the model in week 1 so you see real value in week 2.
Two weeks. One of your data sources. Your real workload.
We do a pilot: week 1 we connect a source and build the semantic model. Week 2 your team uses it for real questions. You see — on your data — exactly what we just described.
Roughly, what would your pilot look like?
Three answers. A ballpark scope. No email required. Numbers stay in your browser.