Our approach to developing a safe AI-first research platform

Our approach to developing a safe, private and compliant AI-first platform that helps you scale your research.


This article focuses specifically on our approach to deploying AI safely, so that you can use our AI-first research platform with peace of mind. If you want to learn more about our overall approach to privacy, security and compliance, check out our Trust Center.

How can I be sure that AI-generated questions on Wondering are good quality?

We've put safeguards in place to prevent harmful AI generations in your studies. Firstly, we've designed our product to prevent users from directly manipulating the tasks executed by the AI models Wondering uses to conduct AI-led studies.

Wondering's AI is designed to follow best practices when generating studies, asking participants questions and completing analysis. These practices include, but aren't limited to:

  • Asking questions in clear language.

  • Only asking one question at a time.

  • Avoiding leading questions.

  • Avoiding biased questions.

To make sure the AI-generated questions asked are relevant to the study, we also instruct the AI to only ask questions that are relevant to the study you create, and to not deviate from the instructions you give in each study block. You also have full control over how deeply Wondering asks follow-up questions in each study block.

To further ensure that Wondering's AI-generated follow-up questions are of a good quality, we also instruct the AI to:

  • Always remain polite, and always formulate questions in a friendly tone.

  • Never answer questions from the participant, or repeat what they say verbatim.

  • Block any responses that are in conflict with OpenAI’s usage policies.

It's worth noting that AI models like the ones powering Wondering are still a new technology, and it's not always possible to guarantee that the questions they return are formulated exactly the way you intend.

To get a good understanding of how questions are formulated in your study, we recommend that you preview your study a few times using the Preview functionality in the study builder.

Your data is not used to train AI models

We use OpenAI, the market leader, as our AI provider. When you use AI in the Wondering platform, your data is not used to train OpenAI's models. For more details on how OpenAI handles your data, see their Enterprise Privacy page.

For more details on what other (non-AI) sub-processors we rely on to provide our services, see this guide.

We do internal testing as part of our development practice

We're committed to building AI-first research methods that are safe and can be used in a responsible and compliant way. As part of our development process, we continually test our product to identify and prevent ways in which our AI systems could produce unsafe output.

We design our AI features to be understandable and transparent

To make it easy for you to control AI generations (such as AI-generated follow-up questions, studies and study results), we aim to design our AI features to be transparent and easy to understand. This includes letting you guide AI-generated follow-up questions in your studies, preview and test how AI-generated questions appear to your participants, and see which data points in your study data underpin the AI-generated themes we display in your study results.

We're continuously improving our platform

We regularly update our platform based on internal testing, testing with our design partners and customer feedback, to make our platform even better at providing effective AI features.

What compliance standards does Wondering meet?

Wondering is SOC 2 Type II compliant. We're also committed to complying with the GDPR framework. For more details on our GDPR commitment, read this guide. You can also read more in our Trust Center.

Safety and security are very important to us at Wondering. If you notice any issues relating to the way Wondering presents outputs from the AI models we use, or any other issues relating to safety, quality or security while using our platform, please email us directly at [email protected].
