Trusting an AI-based solution with your data and your processes is a real ask. This page is the deeper proof behind the commitments on the home page. If something here is not specific enough, ask us during the prototype; that is what it is for.
Where it lives. What we do with it. What we do not.
Where does our data live?
In your environment. The mosaic deploys inside your cloud account or on-prem footprint, depending on what you operate. Your data does not travel to MosaicalAI servers. We do not host it. We do not retain it.
Do you train AI models on our data?
No. We use enterprise tiers from frontier model providers, configured so inputs are not used for training and not retained beyond the call. Every model call is private and ephemeral.
What happens to our data if the engagement ends?
It stays with you. The mosaic, the evidence, and any decision memory are yours. The deployment can be paused, rolled back, or removed without losing what you already had. No vendor extracts, no exit penalties.
How is access to the mosaic controlled?
Access integrates with your existing identity provider during deployment. Only authorised users on your team can call the mosaic. Every call is logged with user, timestamp, inputs, and outputs.
Are you compliant with privacy regulations?
The architecture is built to support compliance with the Australian Privacy Act, GDPR, and major sectoral regulations because your data does not leave your perimeter. Specific certifications (ISO 27001, SOC 2, IRAP) are scoped per engagement; ask us during the prototype.
Where it works. Where it does not.
Is this just ChatGPT or Claude with a wrapper?
We use frontier models from major providers at enterprise tiers. The IP is in the orchestration layer, the pattern library, and the deployment, not in the model. The mosaic is what the AI is grounded against: your data, your context, your question, not the public internet.
What if the AI gets something wrong?
Two safeguards. First, every output is reviewed by a human on our team before it reaches your leadership. Second, every output cites the source data it came from, so any disagreement traces back to source. Errors are caught before they are consumed, not after.
Where does AI do the work?
Pattern matching across fragmented evidence. Aggregating signals across tools. Drafting first-cut narratives. Calculating, ranking, and surfacing decisions. Generating reports at speed humans cannot match.
Where does AI not do the work?
Final decision-making for executive-grade output. Edge cases that require institutional context. Anything that needs reading the human dynamic of your specific organisation. Our team handles those, not the model.
How is the AI grounded in our data?
The mosaic deploys against your data sources, read-only or via export. The AI sees only what you grant it. Outputs reference your data, not training-set artefacts. Unsourced claims do not slip through.
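The grounding rule above can be pictured as a simple guardrail: an output is surfaced only if it cites data sources you actually granted. This is an illustrative sketch, not the production logic; the file names and field names are hypothetical.

```python
# Illustrative guardrail: surface an output only if every source it cites
# is one the customer granted. Names below are hypothetical examples.
GRANTED_SOURCES = {"alerts.json", "ticket_export.csv"}

def surface(output_text, cited_sources):
    """Return the output with its citations, or refuse it entirely
    if it cites nothing or cites something outside the grant."""
    cited = set(cited_sources)
    if not cited or not cited <= GRANTED_SOURCES:
        raise ValueError("output must cite granted sources only")
    return {"text": output_text, "sources": sorted(cited)}

result = surface("Patch backlog fell this quarter.", ["alerts.json"])
print(result)
```

The point of the sketch is the refusal path: an output with no citation, or a citation outside the grant, never reaches the reader.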
Where it runs. What it needs. How it gets there.
Where does the mosaic run?
Your cloud (AWS, Azure, GCP) or on-prem, depending on what you already operate. We deploy into your perimeter, not ours.
What does the prototype need from us?
Read-only access or exports from three to five existing data sources for the Mosaic you have picked. NDA before anything moves. Then we agree on the question the mosaic should answer.
Can we audit what the AI is doing?
Yes. Every call is logged with inputs, model identifier, output, and timestamp. Logs live in your environment. Independent audit of the deployment is supported and welcomed.
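The logged fields listed above can be pictured as one minimal record per model call. This is a sketch for illustration; the field names are assumptions, not the deployment's actual log schema.

```python
import json
from datetime import datetime, timezone

def audit_record(user, model_id, inputs, output):
    """Build one illustrative audit-log entry per model call.
    Field names are hypothetical, not the real schema."""
    return {
        "user": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_id,
        "inputs": inputs,
        "output": output,
    }

entry = audit_record(
    "analyst@example.com",
    "frontier-model-v1",  # hypothetical model identifier
    "What changed in the alert backlog this week?",
    "Backlog fell; two critical alerts remain open.",
)
print(json.dumps(entry, indent=2))
```

Because records like this are written inside your environment, an independent auditor can replay who asked what, which model answered, and when, without any data leaving your perimeter.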
Is the deployment reversible?
Yes. Every deployment can be paused, rolled back, or removed without losing the data, the evidence, or the outputs you have already produced. No vendor lock-in. No data extracted.
Why we are not a fully-autonomous AI solution.
Why is there a human in the loop at all?
Pure-AI solutions miss the parts of every organisation that are not written down: the political reality, the edge cases, the individuality of your team and processes. We use AI for what AI is good at and keep humans in the loop for what they are better at.
Who is the human in the loop?
Our team during the prototype and the early deployment. Your team progressively as the mosaic stabilises. The handoff is deliberate and documented; you end up owning the mosaic operationally.
Will this replace our team?
No. The mosaic frees your team from stitching reports together so they spend their hours making decisions from the complete picture. Same team, sharper picture, less rework.
What you sign up for. What you walk away with.
What is the 48-hour prototype?
Not a POC. A working mosaic, assembled from your data in 48 hours, in your environment. The specific output depends on the Mosaic (Cybersecurity, Finance, Risk, etc.). You keep the prototype whether or not we continue.
What if the prototype does not land?
You owe us nothing. You keep the prototype. We part ways on the same terms either way.
How do you charge?
Engagement-by-engagement, scoped to the work. No subscription. No platform fee. The mosaic stays with you when the engagement ends.
What if we want a Mosaic you have not built yet?
The Cybersecurity Mosaic is built end-to-end. The other 14 are in design, with the methodology, pattern library, and deployment IP ready. Same 48-hour clock applies.
Still have questions?
The prototype is the cheapest way to answer them: in your environment, on your data, with your team, in 48 hours. Email hello@mosaicalai.com with anything specific to your situation.