Warnvo uses large language model (LLM) technology to analyze the documents and text you submit. When you upload a bill or enter text, our AI reads the content, identifies patterns consistent with overcharging or fraud, and generates a structured analysis and — where applicable — a dispute filing template. The AI does not have access to external databases, real-time pricing information, or your personal financial history unless you explicitly provide it.
Like the output of any AI system, Warnvo's analysis is not 100% accurate. The AI may produce false positives (flagging legitimate charges as suspicious) or false negatives (missing actual overcharges). We continuously improve our models, but you should treat AI analysis as a helpful starting point — not a definitive verdict. Always verify findings before taking action.
Warnvo does not make automated decisions that have legal or financial consequences without your involvement. All AI output is presented to you for your review. You decide whether to act on it, modify it, or discard it. We do not automatically submit dispute filings on your behalf.
We do not use your personal documents to train our AI models without your explicit consent. Aggregate, anonymized usage data (e.g., which types of bills are most commonly scanned) may be used to improve the service. The underlying LLM technology is provided by a third-party AI provider under a contract that prohibits them from using your data for model training.
AI models can reflect biases present in their training data. We are aware of this risk and work to ensure our AI does not produce analysis that is unfair or discriminatory. If you believe our AI has produced biased or unfair output, please report it to [email protected].
If the AI produces an analysis you believe is incorrect, you can flag it within the app. Your feedback helps us improve accuracy over time. We review flagged analyses and use them to identify systematic errors in our models.
Questions about this policy? Contact us.