How can I be sure that AI-generated questions on Wondering are good quality?

Wondering's AI is designed to follow best practices when generating studies, asking participants questions, and completing analysis. This includes, but isn't limited to, best practices like:

  • Asking questions in clear language.

  • Only asking one question at a time.

  • Avoiding leading questions.

  • Avoiding biased questions.

To make sure the questions asked are relevant, we also instruct the AI to stay within the scope of the study you create and to not deviate from the instructions you give in each study block. You also have full control over how deeply you want Wondering to ask follow-up questions for each study block.

To further ensure that Wondering's AI-generated follow-up questions are of a good quality, we also instruct the AI to:

  • Always remain polite, and always formulate questions in a friendly tone.

  • Never answer questions from the participant, or repeat what they say verbatim.

  • Block any responses that are in conflict with OpenAI’s usage policies.

It's worth noting that AI models like the ones powering Wondering are still a new technology, and it's not always possible to guarantee that the questions they return are formulated exactly the way you intend.

To get a good understanding of how questions will be formulated in your study, we recommend previewing your study a few times using the Preview functionality in the study builder.

Safety and security are very important to us at Wondering. If you notice any issues relating to the way Wondering presents outputs from the AI models we use, or any other issues relating to safety, quality or security while using our platform, please email us directly at [email protected].
