
Privacy, security, accuracy: How AI chatbots are handling your deepest data concerns

ChatGPT is an amazing tool – millions of people are using it to do everything from writing essays and researching holidays to preparing workout programs and even creating apps. The potential of generative AI feels endless.  

But when it comes to using generative AI for customer service, which means sharing your customers’ data, queries, and conversations, how much can you really trust AI? Generative AI chatbots are powered by large language models (LLMs) trained on vast datasets pulled from the internet. While the possibilities that come from access to that much data are groundbreaking, it also raises a range of concerns around regulation, transparency, and privacy.

Since we launched Fin, our AI-powered bot, we’ve seen unprecedented levels of excitement for AI’s potential in customer service. But we’ve also encountered lots of questions, many of them falling under two overarching themes:

  1. The security and privacy of the information customers provide the AI chatbot.
  2. The accuracy and trustworthiness of the information the AI chatbot provides customers.

Here, we’ll cover the most important things to understand about how AI chatbots affect data security and privacy across industries, and how we’re approaching these issues with Fin.

Data security and privacy

No company can afford to take risks with customer data. Trust is the foundation of every business-customer relationship, and customers need to feel confident that their information is being treated with care and protected to the highest degree.

Generative AI offers endless opportunities, but it also raises important questions about the safety of customer data. As always, the technology is evolving faster than the guidelines and best practices, and global regulators are scrambling to keep up.

The EU and GDPR

Take the EU, for example. The General Data Protection Regulation (GDPR) is one of the most stringent regulatory forces covering personal data in the world. Now that generative AI has changed the game, where does it sit within the GDPR framework? According to a study on the impact of GDPR on AI carried out by the European Parliamentary Research Service, there is a certain tension between GDPR and tools like ChatGPT, which process massive quantities of data for purposes not explicitly explained to the people who originally provided that data.

That said, the report found there are ways to apply and develop the existing principles so that they’re consistent with the expanding use of AI and big data. To achieve this consistency, the EU is currently debating the AI Act, and a firm set of regulations, applying to deployers of AI systems both within and outside the EU, is expected at the end of 2023, more than a year after ChatGPT was released in November 2022.

“While regulation catches up with the rapid progress of generative AI, the onus is on AI chatbot providers to ensure they maintain data security as their top priority”

Meanwhile, in the US

The US remains in the early stages of regulation and lawmaking when it comes to AI, but discussions are in progress and seven of the largest tech companies have committed to voluntary agreements concerning areas like information sharing, testing, and transparency. An example is the commitment to add a watermark to content generated by AI – a simple step, but important for user context and understanding. 

While these steps mark some progress, for sectors like the health industry, the unknowns may represent an obstacle to the adoption of AI. An article in the Journal of the American Medical Association suggested that the technology can still be employed, as long as the user avoids entering Protected Health Information (PHI). As a further step, vendors like OpenAI are now developing business associate agreements that would allow clients with these use cases to comply with regulations and standards like HIPAA and SOC 2 while using their products.

In short, while regulation catches up with the rapid progress of generative AI, the onus is on AI chatbot providers to ensure they maintain data security as their top priority and are upfront and transparent with their customers. 

How Fin handles data security and privacy

Here at Intercom, we take data protection incredibly seriously, and it has been a major component of every decision we’ve made since we began to build our AI chatbot. Here are the most pressing questions we’re getting from customer service teams about the way their data, and their customers’ data, will be collected, handled, and stored.

How will Fin handle my support content?

Fin is powered by a mix of models including OpenAI’s GPT-4, and will process your support content through these LLMs at specified intervals to serve answers to customer queries.
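
For readers who like to see the mechanics, here’s a minimal sketch of what a periodic “process your support content” pipeline of this kind typically looks like: articles are split into chunks, embedded, and stored in an index that answers are later retrieved from. This is a generic illustration of a common retrieval pattern, not Intercom’s actual implementation; the helper names, chunk size, and embedding model are all assumptions.

```python
# Hypothetical sketch of a scheduled support-content indexing job.
# It illustrates a generic embed-and-index pattern, not Fin's internals.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chunk(text: str, size: int = 800) -> list[str]:
    """Split an article into roughly fixed-size chunks for embedding."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def index_support_content(articles: dict[str, str]) -> list[dict]:
    """Embed every chunk of every article, keyed by URL.

    In a real pipeline the vectors would go into a vector store, and the
    whole job would re-run at the "specified intervals" mentioned above.
    """
    index = []
    for url, body in articles.items():
        pieces = chunk(body)
        response = client.embeddings.create(
            model="text-embedding-3-small",  # illustrative model choice
            input=pieces,
        )
        for piece, item in zip(pieces, response.data):
            index.append(
                {"source": url, "text": piece, "embedding": item.embedding}
            )
    return index
```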

How will Fin handle customer conversation data?

During each customer conversation, all conversation data will be sent verbatim to OpenAI, including any personally identifiable information within the conversation.
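
In practice, “verbatim” means the conversation turns are forwarded in the request payload exactly as typed. Here’s a hedged sketch of what such a request can look like; the messages are invented, and the real request shape Fin uses isn’t public.

```python
# Hypothetical sketch: the conversation history is forwarded as-is,
# so any PII a customer types becomes part of the request payload.
from openai import OpenAI

client = OpenAI()

conversation = [
    {"role": "user",
     "content": "Hi, my order for jane@example.com never arrived."},
    {"role": "assistant",
     "content": "Sorry to hear that! Can you share the order number?"},
    {"role": "user",
     "content": "It's 48213, shipped to 12 Main St."},
]

response = client.chat.completions.create(
    model="gpt-4",          # one of the models mentioned in this post
    messages=conversation,  # sent verbatim, PII included
)
print(response.choices[0].message.content)
```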

Will my support content or customer conversation data be used to train or improve models? 

This is a common question. Many AI bots do use the data they work with to train new models or improve existing ones. With Intercom, your customers’ secure conversations and feedback won’t be used to train any of the third-party models we use to power Fin.


Will my data be retained by OpenAI?

No – we have signed up to the Zero Data Retention policy, which means none of your data will be retained by OpenAI for any period of time. 

Will my data hosting region affect my ability to use Fin?

Currently, Fin can only be used by customers hosting their data in the US. Under Intercom’s EU Data Hosting terms, we agree to store our customers’ data (including any personal data) within the EU. OpenAI does not currently offer EU hosting, so any personal information sent to them as part of their integration with Intercom must be processed in the US, and may not be compliant with Intercom’s EU or AU data hosting terms. We’re working to make Fin accessible to more regions in the future.

Accuracy and trustworthiness of the AI bot’s answers

Different large language models have different strengths, but at the moment, OpenAI’s GPT-4 is generally considered one of the top LLMs available in terms of trustworthiness. At Intercom, we began experimenting with OpenAI’s ChatGPT as soon as it was released, recognizing its potential to totally transform the way customer service works. At that stage, “hallucinations,” the tendency of ChatGPT to simply invent a plausible-sounding response when it didn’t know the answer to a question, were too big a risk to put in front of customers.

“An AI chatbot is only as good as the data that it’s trained on”

We saw hundreds of examples of these hallucinations peppered across social media in the wake of ChatGPT’s release, ranging from hilarious to slightly terrifying. Considering ChatGPT’s training data source was “all of the internet before 2021,” it’s not surprising that some details were incorrect. 

Essentially, an AI chatbot is only as good as the data that it’s trained on. In a customer service context, a low-quality dataset would expose customers to answers that could damage your company’s brand – whether they’re inaccurate, irrelevant, or inappropriate – leading to customer frustration, decreasing the value the customer gets from your product, and ultimately, impacting brand loyalty.  

The release of GPT-4 in March 2023 finally offered a solution. As our Senior Director of Machine Learning, Fergal Reid, said in an interview with econsultancy.com, “We got an early peek into GPT-4 and were immediately impressed with the increased safeguards against hallucinations and more advanced natural language capabilities. We felt that the technology had crossed the threshold where it could be used in front of customers.”

“Companies need control over the information their customers receive to ensure that it’s accurate, up-to-date, and relevant to their product”

For all of GPT-4’s accuracy, it is not suitable for customer service “out of the box.” Companies need control over the information their customers receive to ensure that it’s accurate, up-to-date, and relevant to their product. By adding our own proprietary software to GPT-4, we created guardrails that limit the bot’s available information to specific sources nominated by our customers’ teams.
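
To illustrate the general idea (and only the general idea; our proprietary software is not public), guardrails like this are commonly built by injecting passages retrieved from the nominated sources into the prompt and instructing the model to answer from those passages alone. The prompt wording, the sentinel string, and the answer() helper below are all assumptions for illustration.

```python
# Hypothetical guardrail sketch: constrain answers to nominated content.
from openai import OpenAI

client = OpenAI()

GUARDRAIL_PROMPT = (
    "You are a customer service assistant. Answer ONLY using the support "
    "passages provided below. If the passages do not contain the answer, "
    "reply with exactly: I_DONT_KNOW"
)


def answer(question: str, passages: list[str]) -> str:
    """Ask the model to answer strictly from retrieved support passages."""
    context = "\n\n".join(passages)  # e.g. from the index sketched earlier
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # favor deterministic, source-grounded answers
        messages=[
            {"role": "system", "content": f"{GUARDRAIL_PROMPT}\n\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```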

So, once you’ve ensured your customer data is safe with Fin, you’ll want to be totally confident that Fin will pull information from trusted sources that you control, to provide the right information to your customers. 

What LLM is Fin powered by? 

Fin is powered by a mix of large language models, including OpenAI’s GPT-4, widely considered one of the most accurate on the market and far less prone to hallucinations than earlier models.

Can I choose the content that Fin pulls its answers from?

Fin draws its answers from sources that you specify, whether that’s your help center, support content library, or any public URL pointing to your own content. That way, you can be confident in the accuracy of all the information Fin uses to answer your customers’ questions, and, as you monitor Fin’s performance, you can expand, improve, or elaborate on the content that powers the AI bot. 

What will Fin do if it doesn’t know the answer to a question?

Fin is like every good support agent: if it can’t find the answer to a question, our machine learning guardrails ensure that it admits it doesn’t know and seamlessly passes the conversation to a support rep, ensuring a consistently high-quality support experience. Unlike ChatGPT and some other AI customer service chatbots, Fin will never make up an answer, and will always provide sources from your support content for the answers it gives.
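
Here’s a simplified sketch of that fallback behavior, reusing the hypothetical answer() helper from the previous sketch; the I_DONT_KNOW sentinel and the handoff fields are invented for illustration.

```python
# Hypothetical handoff sketch: admit uncertainty, then route to a human.
# Depends on the answer() helper defined in the previous sketch.
def respond(question: str, passages: list[str]) -> dict:
    if not passages:
        reply = "I_DONT_KNOW"  # nothing relevant retrieved: don't guess
    else:
        reply = answer(question, passages)
    if reply.strip() == "I_DONT_KNOW":
        return {
            "message": ("I'm not sure about that one. Let me pass you to a "
                        "teammate who can help."),
            "handoff_to_human": True,
        }
    # Cite the support content the answer was grounded in.
    return {"message": reply, "sources": passages, "handoff_to_human": False}
```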

Can my customers access a human support rep if they want to?

Your support team knows your customers better than anyone, and it’s crucial that your customers have easy access to them. Fin offers the option for customers to immediately direct their query to a human support rep. If the customer is happy to try Fin but it doesn’t know the answer to their question, our machine learning guardrails prompt it to ask clarifying questions, triage the query, and hand it off to the right team to solve.

Find out more about how Fin works, or check out our help center for more information about our data security and privacy measures. 
