Patricia Lucas | 25 Oct 2024
One of the public's top concerns about AI is the safety of their information, and it is something we are asked about a lot. People worry about feeding their personal information, thoughts and ideas into the AI machine. They are concerned about hacking, data theft, and how their information might be used. Before people jump into a conversation with an AI tool, they want to feel confident they know what will happen to their private information.
It doesn't look as if AI-specific regulation is coming soon. The UK AI Bill, which had safety and security at the forefront, didn't make it into the King's Speech back in July. But organisations shouldn't wait for regulation to put good practice in place.
At Colectiv, our team's research background has helped us think through our data security processes for the AI space. We want to work in the open, and we hope that publishing the four simple safety rules that we use for our AI interviews will reassure users about the steps we take to keep their information secure.
You can't leak what you don't have. So our first line of defence is to avoid collecting or using personal or private data that we don't need.
In Colectiv interviews, we don't ask for names or addresses, and we can't link the responses you provide in your interview to any information that identifies you, such as your name, IP address, or location.
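To make the idea of data minimisation concrete, here is a small illustrative sketch in Python. It is not Colectiv's actual code, and the field names are assumptions: the point is simply that identifying fields are stripped out before a response is stored or sent anywhere.

```python
# Hypothetical sketch of data minimisation: strip anything that could identify
# a participant before a response is stored or passed to a model.
# Field names and structure are illustrative only.

IDENTIFYING_FIELDS = {"name", "email", "phone", "ip_address", "location", "user_agent"}

def minimise(response: dict) -> dict:
    """Return a copy of an interview response with identifying fields removed."""
    return {k: v for k, v in response.items() if k not in IDENTIFYING_FIELDS}

raw = {
    "answer": "I think the new bus route would help people get to work.",
    "ip_address": "203.0.113.7",   # never kept
    "location": "Leeds",           # never kept
}
print(minimise(raw))  # {'answer': 'I think the new bus route would help people get to work.'}
```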
We are sometimes asked whether participants can delete their interview data. The limitation of this level of anonymity is that we can't do this, because we have no way of knowing which answers are yours. We're sorry this isn't possible, but more privacy for everyone seems better to us.
It is really important to get the privacy settings right when using AI tools like ChatGPT, Gemini, or Claude.
We collect people's words and ideas in Colectiv interviews, and we want to use them in a way that respects their privacy. When we use AI tools, we dial up the privacy and security settings to the maximum available.
That means we are confident the content of our interviews is not being fed into AI learning models.
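As one example of what this looks like in practice, here is a minimal sketch assuming the OpenAI Python SDK. At the time of writing, OpenAI states that data sent via its API is not used to train its models by default (unlike some consumer chat apps), but you should always check your provider's current terms. The model name and prompt are illustrative, not our production setup.

```python
# Minimal sketch: send interview prompts through the API rather than a consumer
# chat app. Per OpenAI's stated policy at the time of writing, API data is not
# used for model training by default; verify your provider's current terms.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_follow_up(answer_text: str) -> str:
    """Generate a follow-up question from an (already anonymised) answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You are an interviewer. Ask one short, neutral follow-up question."},
            {"role": "user", "content": answer_text},  # contains no identifying data (rule 1)
        ],
    )
    return response.choices[0].message.content
```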
We believe all your data are stored in confidence. But there is a limitation to the privacy guarantee hidden away, and we want to be open about it. OpenAI (and some other processors) retain data for 30 days to allow authorised staff to investigate online abuse. We, and others, think this rule is not clear enough. This is why we prioritise rule 1: anonymity is the most important privacy protection. We also have zero-retention policies in place where we can.
Where we do ask for information about you in our interviews (for example, if we ask about your age, ethnicity, or other demographics), we do this outside the AI part of the interview, and we hold this data on secure servers and folders, with encryption and access processes in place to keep it safe. We review each interview carefully to make sure that it would never be possible to identify individuals from the combined demographic information we hold or share.
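For readers who like to see the pattern, here is a generic sketch of encrypting demographic answers at rest, using the widely available `cryptography` package. This is an illustration of the general technique, not our actual storage code; in a real system the key would live in a secrets manager, not next to the data.

```python
# Illustrative sketch: symmetric, authenticated encryption of demographic data
# at rest, kept separately from interview text. Generic pattern only.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in production, load from a secrets manager
box = Fernet(key)

demographics = {"age_band": "35-44", "region": "North West"}  # example answers
token = box.encrypt(json.dumps(demographics).encode("utf-8"))

# Only processes holding the key (and passing access controls) can read it back.
restored = json.loads(box.decrypt(token).decode("utf-8"))
assert restored == demographics
```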
We are soon to release our interviews on WhatsApp, and we have taken extra care to make sure we can do this without holding or storing phone numbers on our servers at all.
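The idea behind that extra care can be sketched roughly as below. This is a hypothetical outline, not our WhatsApp integration: the helper names (`send_whatsapp_message`, `store_answer`, `next_question_for`) stand in for whichever messaging provider and database are actually used. The point is that the sender's number is only ever held in memory to route the reply, and only the anonymised answer text is persisted.

```python
# Hypothetical sketch of the "no phone numbers at rest" idea. Helper functions
# are placeholders for a real messaging provider and data store.

def handle_incoming(message: dict) -> None:
    phone_number = message["from"]   # kept in memory only, never written out
    answer_text = message["body"]

    store_answer({"answer": answer_text})                      # no identifiers saved
    send_whatsapp_message(phone_number, next_question_for(answer_text))
    # phone_number goes out of scope here; nothing about it is logged or stored

def store_answer(record: dict) -> None:
    ...  # placeholder: write to the identifier-free interview store

def send_whatsapp_message(to: str, text: str) -> None:
    ...  # placeholder: call the messaging provider's API

def next_question_for(answer_text: str) -> str:
    ...  # placeholder: generate the next interview question
```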
It isn't ok to have privacy policies that are impossible to find or hard to understand. We think building trust starts with being honest and open, including about the limits of what you can promise.
We have tried to make the information about how we use data in our AI interviews as clear and plain as possible, but we welcome feedback and suggestions for improvement at hello@colectiv.tech.
For organisations, safely guiding the operational use of AI is no longer optional. We hope our four simple rules will help others think about how to do this, and we encourage them to share their ideas for best practice too.
For individuals, we cannot delete your data because we don't know which data belongs to you!