AI Transparency Notice
Version 2026-04-20.1 · Effective 20 April 2026
1. You are interacting with an AI system
Parts of the Synaptico platform — notably the Discovery chat, the Classification engine, the Gap-Assessment engine, the FRIA assistant, and the document-insight extractor — rely on large language models and other AI techniques. Where you type into a chat window or click “Classify”, “Assess” or “Generate FRIA”, you are interacting with an AI system within the meaning of Article 3(1) AI Act.
2. Intended purpose and limitations
The platform is intended to help qualified AI-governance professionals draft, organise and audit compliance work under the AI Act. It is not intended to replace a lawyer, a DPO, a compliance officer or any qualified adviser, and it cannot issue a conformity assessment, a regulatory determination or any form of certification.
AI outputs may be inaccurate, incomplete, outdated or biased. Regulatory interpretations change as guidelines and case law evolve; your deployment context is unique to you; and the underlying language models occasionally produce confident-sounding but wrong answers (“hallucinations”). All outputs should be reviewed by a qualified human before being acted upon.
3. Models and providers
The Service currently uses [Google Gemini Pro + Gemini Flash — TBC at the provider contract level] through [Google Cloud Vertex AI — TBC]. Model selection may change over time; we maintain an up-to-date list of sub-processors in the Privacy Policy.
Prompts and outputs exchanged with the Service’s AI models are not used by those providers to train general-purpose models, except where you explicitly opt in.
4. Data used at inference time
To produce outputs tailored to you, the Service sends to the model provider: your chat messages, the content of documents you upload (or excerpts of them), the structured metadata of your AI-system inventory, and the reasoning context that the platform builds from those. These are processed transiently at inference time under provider terms that prohibit storage for training purposes.
5. Human oversight
Every AI-generated classification, assessment and FRIA draft is routed to a human-review queue with an explicit validation state (“pending review”, “validated”, “overridden”). You are responsible for assigning appropriately qualified reviewers. The platform does not act on its own output without a human step in between for any decision that has legal or similarly significant effect on a person.
6. Known risks
- Under-classification:an AI-Act borderline system (e.g. Annex III but arguably narrow procedural) may be labelled “minimal risk” on first pass. The platform forces human review for anything touching Annex III signals, but the ultimate call is yours.
- Shallow reasoning on thin inputs: with very little information about a system, outputs will be shallow. The platform now refuses to classify until enough evidence is present, but you may still receive a low-confidence output that merits deeper inquiry.
- Outdated knowledge: the AI models have a training-data cut-off and may not be aware of the most recent guidelines or delegated acts.
- Prompt injection: content uploaded to the Service can, in principle, try to manipulate the AI. The platform applies defensive measures (sanitisation, fenced prompts, refusal rules), but no such defence is absolute.
7. Your right to explanation and review
Where an AI Output affects you or a data subject whose data you process, you and that data subject may request a meaningful explanation of the reasoning steps and of the main input factors. The platform records every classification reasoning chain and gate-by-gate evaluation for precisely this purpose. Requests can be sent to [ai-transparency@synaptico.com — TBC].