On August 14, 2025, we announced that we intend to fine-tune and use our own customer support specific AI models for Fin. The AI models are trained using anonymized Customer Data, which helps Fin deliver faster, more accurate answers for your team. Your data remains private, secure, and is used solely to improve model performance.
Although the AI models and the training data are fully anonymized, we understand that some customers may prefer not to have their anonymized Customer Data used for AI training and fine-tuning. For this reason, we offer an opt-out option. Customers can request an opt-out at any time by following the process outlined below.
How to request an opt-out
If you want to opt out of AI model training and fine-tuning, you can request an opt-out through Settings > Security > Workspace.
For customers on a contract, your Relationship Manager will guide you through the opt-out process and answer any additional questions you may have.
We will process all customer opt-out requests within 30 days of submission. Once the opt-out is granted, we will not use your Customer Data to create any further training data.
Note: By opting out, you may not benefit from Fin performance improvements delivered by our AI models.
FAQs
What if I previously opted out of AI model training and fine-tuning?
Customers who were previously granted an opt-out will have their data excluded from any training data for AI model training or fine-tuning. This includes all existing customers who have signed a Master SaaS Subscription Agreement (MSSA) or Business Associate Agreement (BAA), as well as customers on our EU or AU regional hosting plans and trial accounts.
Can I opt in to AI model training and fine-tuning in the future?
Yes, you can opt back in through your workspace security settings, or by contacting our support team.
Can you provide more detail on what you mean by anonymized data?
In our announcement on August 14, 2025, we referred to the use of anonymized Customer Data to fine-tune our AI models. The meaning of “anonymized data” can differ depending on the data protection laws that apply to your business. When we refer to anonymized Customer Data, we mean that we will remove direct and indirect identifiers of natural persons, so that individuals cannot be identified from the training data, as well as removing any customer-specific references, such as company name or address. We will continue to treat the anonymized training data as being subject to GDPR, and subject to our obligations to you as Processor of your Customer personal data.
Are the fine-tuned models capable of reproducing training data?
From our own internal analysis, our position is that the AI models that we’ve announced are themselves anonymous. It is not possible, using reasonable means currently available, to either: a) extract any personal data about individuals directly from the model’s architecture; or b) prompt the models in a way that would reproduce any personal data from the training data, for the following reasons:
- The training data is put through an anonymization process prior to being used to fine-tune an AI model, thus reducing the likelihood of any personal data being contained in the training data.
- The models are part of Fin’s internal infrastructure - they are not capable of being prompted by our customers, their customers, or other third parties. They have no user interface that is available to the Fin user, and the model architecture is protected by the security measures that apply to all our proprietary technology.
- These are task-specific models not capable of generative, text-based output: the Retrieval model will analyze and identify the most relevant documentation from the customer’s authorized RAG database, in order to deliver the articles that are most semantically similar or relevant to the end-user query. The re-ranker model assigns a relevance score to each of those retrieved articles to further refine the selection of which articles will be sent to one of our Third Party AI Providers in order to deliver the final output to the customer.
- The models only retrieve and re-rank documents from the same workspace in which the user is interacting, ensuring no cross-workspace data access or leakage.
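For readers who want a concrete picture of the retrieve-then-re-rank pattern described above, the following is a minimal illustrative sketch. All names, data structures, and scoring functions here are hypothetical: this is a generic example of the technique, not Fin’s actual implementation (a real re-ranker would use a learned model rather than raw vector similarity).

```python
# Hypothetical sketch of a retrieve-then-re-rank pipeline scoped to a
# single workspace. Not Fin's actual code; names are illustrative.
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, articles, workspace_id, top_k=3):
    # Only articles from the user's own workspace are candidates,
    # so there is no cross-workspace data access.
    candidates = [a for a in articles if a["workspace"] == workspace_id]
    candidates.sort(key=lambda a: cosine(query_vec, a["vec"]), reverse=True)
    return candidates[:top_k]

def rerank(query_vec, retrieved):
    # Assign each retrieved article a relevance score and sort by it.
    scored = [(cosine(query_vec, a["vec"]), a) for a in retrieved]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored

articles = [
    {"workspace": "ws-1", "vec": [0.9, 0.1], "title": "Billing FAQ"},
    {"workspace": "ws-1", "vec": [0.2, 0.8], "title": "Opt-out guide"},
    {"workspace": "ws-2", "vec": [0.9, 0.2], "title": "Other tenant doc"},
]
query = [0.1, 0.9]
results = rerank(query, retrieve(query, articles, "ws-1"))
print([a["title"] for _, a in results])  # → ['Opt-out guide', 'Billing FAQ']
```

Note that the article from workspace `ws-2` is never a candidate, illustrating the per-workspace scoping point above, and that neither stage generates any free text — both only score and order existing documents.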
Who should I contact if I have further questions about AI model training or fine-tuning?
Please contact your Relationship Manager or speak to our support team for further information.
