Brikz Labs
Each example replicates, at reduced scale and with synthetic data, an operational layer of the LFDM. Pages render directly in the browser, with no server, and generate fresh data on every run to reflect the variability of model behavior.
Individual demo pages are in Portuguese. Brikz operates primarily in the Brazilian regulated financial market.
Directed graph with hundreds of transactions and planted money laundering patterns. A budget slider controls the top-k% of transactions inspected. Recall, Precision, IPI and Lift metrics update live, with a curve showing the trade-off.
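A minimal Python sketch of the arithmetic behind the budget slider, assuming each transaction carries a model score and a synthetic ground-truth label; IPI is omitted because its definition is specific to the demo page:

    import numpy as np

    rng = np.random.default_rng()
    n = 600
    is_laundering = rng.random(n) < 0.05               # planted positives, roughly 5%
    score = rng.random(n) + 0.6 * is_laundering        # scores loosely correlated with the labels

    def metrics_at_budget(score, labels, k_pct):
        """Precision, recall and lift when only the top-k% highest-scored transactions are inspected."""
        k = max(1, int(len(score) * k_pct / 100))
        inspected = np.argsort(score)[::-1][:k]
        hits = labels[inspected].sum()
        precision = hits / k
        recall = hits / labels.sum()
        lift = precision / labels.mean()               # relative to inspecting at random
        return precision, recall, lift

    for k_pct in (1, 5, 10, 25):
        p, r, l = metrics_at_budget(score, is_laundering, k_pct)
        print(f"top {k_pct:>2}%  precision={p:.2f}  recall={r:.2f}  lift={l:.1f}x")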
QSA resolution with computation of the ultimate beneficial owners above the 25% threshold, ownership cycle detection, and PEP and sanctions tagging. Randomly generated structure with fictional entities.
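A minimal sketch of the resolution logic, assuming a shareholding table of (owner, fraction) pairs per company; the entity names are fictional and the traversal is illustrative, not the demo's exact algorithm:

    from collections import defaultdict

    holdings = {                        # company -> list of (owner, fraction); fictional entities
        "FundoX": [("HoldA", 0.60), ("Maria", 0.40)],
        "HoldA":  [("Joao", 0.50), ("HoldB", 0.50)],
        "HoldB":  [("HoldA", 0.10), ("Ana", 0.90)],   # HoldA <-> HoldB is an ownership cycle
    }

    def ultimate_owners(company, threshold=0.25):
        effective = defaultdict(float)
        def walk(node, fraction, path):
            for owner, share in holdings.get(node, []):
                if owner in path:                      # ownership cycle detected; stop this branch
                    print("cycle:", " -> ".join(path + [node, owner]))
                    continue
                f = fraction * share
                if owner in holdings:                  # intermediate holding company: keep walking
                    walk(owner, f, path + [node])
                else:
                    effective[owner] += f              # natural person: accumulate effective stake
        walk(company, 1.0, [])
        return {p: s for p, s in effective.items() if s > threshold}

    print(ultimate_owners("FundoX"))    # e.g. Maria 0.40, Joao 0.30, Ana 0.27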
Twenty-four months of originator cash flow with a six-month forecast ahead. Selectable scenarios: healthy, seasonal, deteriorating, sudden shock. Uncertainty band visible.
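A minimal sketch of how a scenario and its forecast band could be generated, assuming a simple trend-plus-seasonality generator and a naive linear forecast; the demo's actual generator may differ:

    import numpy as np

    rng = np.random.default_rng()

    def scenario(kind="healthy", months=24):
        t = np.arange(months)
        flow = 100 + 1.5 * t + rng.normal(0, 4, months)      # mild growth plus noise
        if kind == "seasonal":
            flow += 15 * np.sin(2 * np.pi * t / 12)
        elif kind == "deteriorating":
            flow -= 3.0 * t
        elif kind == "sudden shock":
            flow[-3:] *= 0.4
        return flow

    def forecast(history, horizon=6):
        t = np.arange(len(history))
        slope, intercept = np.polyfit(t, history, 1)          # naive linear trend fit
        future_t = np.arange(len(history), len(history) + horizon)
        point = intercept + slope * future_t
        resid = history - (intercept + slope * t)
        band = 1.96 * resid.std() * np.sqrt(np.arange(1, horizon + 1))   # band widens with horizon
        return point, point - band, point + band

    history = scenario("deteriorating")
    mid, lo, hi = forecast(history)                           # central path and uncertainty band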
Paste a fund regulation and see structured extraction into SQL rules applied per receivable. Pre-loaded examples cover Private Credit, Agribusiness and Factoring funds. Clauses are linked to their source citations.
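A minimal sketch of the rule shape the extraction targets, assuming each extracted clause becomes a citation plus a SQL predicate over a receivables table; the schema, clause texts and citations are illustrative:

    import sqlite3

    rules = [
        # (clause citation, human-readable rule, SQL predicate over "receivables")
        ("Art. 12, §1", "maturity at most 360 days",    "maturity_days <= 360"),
        ("Art. 15",     "single-obligor concentration", "obligor_share <= 0.20"),
        ("Art. 23, II", "no overdue receivables",       "days_overdue = 0"),
    ]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE receivables (id INTEGER, maturity_days INTEGER, obligor_share REAL, days_overdue INTEGER)")
    conn.executemany("INSERT INTO receivables VALUES (?, ?, ?, ?)", [
        (1, 180, 0.05, 0), (2, 400, 0.10, 0), (3, 90, 0.30, 12),
    ])

    for citation, label, predicate in rules:
        bad = [r[0] for r in conn.execute(f"SELECT id FROM receivables WHERE NOT ({predicate})")]
        status = "ok" if not bad else f"violations: receivables {bad}"
        print(f"[{citation}] {label}: {status}")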
TabICLv2 and TabPFN-2.5 predict default probability over tabular originator features without fine-tuning. Slider per feature, live prediction, nearest cases from a synthetic prior.
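A minimal usage sketch with the open-source tabpfn package (sklearn-style fit/predict_proba, here standing in for either model; the browser demo only approximates this behavior), on synthetic features that represent originator attributes:

    import numpy as np
    from tabpfn import TabPFNClassifier

    rng = np.random.default_rng(0)

    # Synthetic prior: originator features -> default label, loosely correlated.
    X = rng.normal(size=(300, 4))                    # e.g. leverage, delinquency, concentration, tenure
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 300) > 1).astype(int)

    clf = TabPFNClassifier()                         # no fine-tuning: fit() only conditions on the context
    clf.fit(X[:250], y[:250])
    proba = clf.predict_proba(X[250:])[:, 1]         # default probability for new originators
    print(proba[:5])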
GraphAny and GraphFM detect structural communities in a synthetic originator-payor network with no graph-specific training. A stress test simulates systemic risk propagation across a community.
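A minimal sketch of the two steps, using networkx modularity communities as a stand-in for the foundation-model output and a simple fractional contagion as the stress test; graph size, exposures and parameters are illustrative:

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Synthetic originator-payor network with planted communities.
    G = nx.planted_partition_graph(l=4, k=15, p_in=0.25, p_out=0.01, seed=7)

    # 1) Community detection (stand-in for the GraphAny/GraphFM output in the demo).
    communities = list(greedy_modularity_communities(G))

    # 2) Stress test: shock one node and pass a fraction of its loss to neighbours each round.
    def propagate(G, start, shock=1.0, damping=0.5, rounds=3):
        loss = {n: 0.0 for n in G}
        loss[start] = shock
        for _ in range(rounds):
            nxt = dict(loss)
            for n, l in loss.items():
                if l > 0 and G.degree(n) > 0:
                    share = damping * l / G.degree(n)
                    for nb in G.neighbors(n):
                        nxt[nb] += share
            loss = nxt
        return loss

    start = next(iter(communities[0]))
    loss = propagate(G, start)
    affected = [n for n in communities[0] if loss[n] > 0.01]
    print(f"{len(affected)}/{len(communities[0])} nodes in the shocked community affected")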
How to use
Runs in the browser
Each demo is a single standalone HTML file. No server, no backend, no login. Reloads generate fresh data.
Synthetic data only
No real transaction, CNPJ or name appears in any demo. Planted patterns reproduce topologies known from the literature.
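One example of such a planted topology, a smurfing fan-in in which many just-under-threshold transfers converge on a single collector account, added to a random synthetic background; the sizes, thresholds and amounts below are illustrative:

    import random
    import networkx as nx

    rng = random.Random(42)
    G = nx.gnp_random_graph(200, 0.02, seed=42, directed=True)     # background transactions
    for u, v in G.edges:
        G.edges[u, v]["amount"] = rng.uniform(100, 50_000)
        G.edges[u, v]["planted"] = False

    collector = 200
    G.add_node(collector)
    for smurf in rng.sample(list(range(200)), 12):                 # 12 accounts each send a just-under-threshold amount
        G.add_edge(smurf, collector, amount=rng.uniform(8_000, 9_900), planted=True)

    print(sum(1 for *_, d in G.edges(data=True) if d["planted"]), "planted edges")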
Qualitative behavior
Heuristics approximate the output of GraphSAGE, Mamba-2, Document AI, TabICLv2 and GraphFM without loading model weights in the browser.
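An illustrative example of what such a heuristic can look like: a hand-written node score built from a few local features, with weights chosen by hand rather than taken from any trained model:

    def node_suspicion(fan_in, fan_out, total_in, total_out, just_under_threshold_frac):
        structuring = just_under_threshold_frac                       # many amounts just below a reporting threshold
        pass_through = 1 - abs(total_in - total_out) / max(total_in + total_out, 1)
        convergence = fan_in / max(fan_in + fan_out, 1)
        return 0.5 * structuring + 0.3 * pass_through + 0.2 * convergence

    print(node_suspicion(fan_in=14, fan_out=1, total_in=118_000, total_out=115_000,
                         just_under_threshold_frac=0.9))              # high score: fan-in, pass-through, structuring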
In production, on Google Cloud
The same model classes that power the demos run on Vertex AI with per-institution LoRA adapters, TPU v5p for pretraining, and Cloud Run for serverless inference. Observed throughput: 740K edges per second on an NVIDIA A100 80GB.
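A hypothetical client-side sketch of calling such a serving endpoint with the Vertex AI SDK; the project, region, endpoint ID and instance schema below are placeholders, not the LFDM's actual serving contract:

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="southamerica-east1")   # São Paulo region

    endpoint = aiplatform.Endpoint(
        "projects/my-project/locations/southamerica-east1/endpoints/1234567890"
    )
    prediction = endpoint.predict(instances=[
        {"originator_id": "synthetic-001", "features": [0.12, 0.80, 0.05, 24]},
    ])
    print(prediction.predictions)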
Open source stack
The demos show the shape of the answer. In production, the LFDM runs on Vertex AI with per-institution LoRA adapters over regulated data in a dedicated São Paulo region.