Navigating generative AI in UX Research: a deep dive into data privacy

The integration of generative AI into UX research is not without its challenges, particularly in the realm of data privacy. Yet with a safeguarded approach, organisations can turn these challenges into opportunities and build a competitive advantage.

Nick Russell
April 23, 2024
“The execution of UX projects will undergo a transformation with AI tools ... It's not AI that will snatch your job, but the individual leveraging AI to outpace your performance.”

- Jakob Nielsen, UX pioneer and Co-Founder of NNGroup


In the ever-evolving landscape of UX research, researchers are constantly seeking innovative ways to harness technology to maximise the effectiveness of their research and the overall impact and ROI of their teams.

Generative AI is the (relatively) new kid on the technology block, and it offers teams many benefits when conducting research (creating screeners, interview guides, and so on), as well as for knowledge retrieval and meta-analysis in insight-driven organisations.

However, as exciting as the prospect of generative AI may be, applying it to your existing research findings comes with a set of crucial considerations, primarily revolving around the data privacy concerns shared by both UX research and legal teams.

The iceberg analogy: what lies beneath the surface

Imagine your UX knowledge repository as an iceberg, with the visible tip representing the synthesised and anonymised insights accessible to the wider organisation. 

This is the realm of generative AI, where valuable knowledge is extracted and distilled for seamless retrieval. Beneath the surface, however, lies the raw data, akin to the submerged part of the iceberg, hiding the complexities, intricacies, and potential risks.

Illustration by the Dualo Design Team


To mitigate these risks and safeguard sensitive data, we’ve found that the best approach is to construct a dedicated knowledge repository exclusively for your synthesised and anonymised knowledge. 

This separation ensures that the risks associated with sharing raw, sensitive data are significantly reduced.

Best practices for knowledge management: synthesised data as the anchor


To manage knowledge effectively, best practice dictates that synthesised data resides in the repository, accessible to the wider organisation, while links from this repository to analysis tools and raw data ensure additional access requirements and safeguards are in place.

Leveraging existing security access and permissions instils further confidence in who can access what, mitigating concerns about the wider distribution of personal data.


The data privacy wake-up call: a reminder on the consequences of mishandling data


In the pursuit of knowledge dissemination, it's crucial to highlight that reports shared across the wider organisation should never contain personally identifiable information (PII). Teams that overlook this crucial step not only jeopardise data privacy but also risk breaching GDPR and other data protection regulations.

To drive this point home, consider the fines levied against companies for such breaches. 

The 2024 edition of DLA Piper’s GDPR and Data Breach Survey revealed another record year for GDPR enforcement. Supervisory authorities across Europe have issued a total of EUR 1.78 billion (USD 1.94 billion / GBP 1.55 billion) in fines since 28 January 2023, an increase of over 14% on the total issued in the year from 28 January 2022.

But these fines are not just corporate setbacks; individuals can also be held liable, so the need for a cautious and meticulous approach to knowledge management and data privacy cannot be overstated.

Overcoming hesitancy with pragmatism


Many UX research teams have been understandably hesitant to open up their research analysis tools to wider teams and stakeholders. Beyond the overwhelming nature of these tools for non-researchers, there's a significant risk of inadvertently sharing sensitive data and facing legal consequences.

However, organisations that embrace the synthesised data approach for their cross-team repository, with the right level of security and safeguards in place, find that navigating data protection agreements (DPAs) becomes less of an obstacle, and are able to gain a competitive advantage.

Imagine the ability to retrieve specific insights from across thousands of documents in seconds, or asking real-time questions during stakeholder meetings — this is the power of leveraging generative AI effectively, and it’s very much already a reality for many teams.

Take the UX Research and Service Design team at Sisal for example:

"Prior to Dualo, finding specific insights was very laborious, ineffective, and inefficient. This resulted in wasted time and missed opportunities, due to disjointed information and a lack of cohesion. Now we’re able to maximise the value of our research efforts and rediscover key insights – when we need them.”

Chiara Lenna, Service Designer & UX Researcher @ Sisal

"For me, the optimisation of retrieving and sharing insights with our stakeholders is the biggest benefit. Sisal is a large organisation with a lot of business units, so lots of colleagues come to us asking for insights and information on specific topics. Now we can expedite this process, and even provide access to our repository – allowing stakeholders to make decisions faster, and be autonomous in searching and navigating our past research.”

Alessandro Salvo, Service Designer & UX Researcher @ Sisal

AI safeguards and customisable terms


At Dualo, we understand the importance of striking a balance between innovation and data protection. Our public terms cover AI usage, but we also offer customisable terms and agreements to ensure that every party involved is fully comfortable. And feature flags provide full control, allowing AI features to be toggled on or off for entire workspaces or individual users.
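To make that concrete, here’s a minimal sketch of what per-workspace and per-user AI toggles can look like. The names and structure below are illustrative assumptions, not Dualo’s actual data model:

```python
# Illustrative sketch only - Workspace, ai_enabled, and ai_disabled_users
# are hypothetical names, not Dualo's real implementation.
from dataclasses import dataclass, field


@dataclass
class Workspace:
    name: str
    ai_enabled: bool = True                              # workspace-wide flag
    ai_disabled_users: set = field(default_factory=set)  # per-user opt-outs


def ai_features_enabled(workspace: Workspace, user_id: str) -> bool:
    """AI is on only if the workspace allows it and this user isn't opted out."""
    return workspace.ai_enabled and user_id not in workspace.ai_disabled_users


ws = Workspace(name="research-team", ai_disabled_users={"user-42"})
print(ai_features_enabled(ws, "user-17"))  # True
print(ai_features_enabled(ws, "user-42"))  # False
```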

In an effort to take data privacy a step further, we’re developing an AI PII checker feature. This tool scans documents for information that breaches data privacy laws, adding additional safeguards that flag potential risks before content is added to your repository.
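As a toy illustration of the underlying idea (not our production checker), a first pass at PII scanning can be as simple as pattern-matching common identifiers before a document is published:

```python
# Toy PII scan - a handful of regexes for common identifiers. A production
# checker would be far more sophisticated; this only demonstrates the concept.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}


def flag_pii(text: str) -> list:
    """Return (kind, match) pairs for a human to review before publishing."""
    findings = []
    for kind, pattern in PII_PATTERNS.items():
        findings.extend((kind, match) for match in pattern.findall(text))
    return findings


report = "Participant P4 (jane.doe@example.com) found the checkout flow confusing."
for kind, value in flag_pii(report):
    print(f"Potential {kind} detected: {value!r}")
```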


Behind the scenes: the OpenAI connection and data security


Many AI-integrated tools leverage large language model (LLM) technology, the best known of which is OpenAI’s ChatGPT.

At Dualo, we’ve integrated OpenAI’s technology into the Dualo platform, and as part of our commitment to transparency, we are always open about how the technology is used. 

For example, the OpenAI infrastructure is hosted in Microsoft Azure, with stringent measures in place to comply with GDPR and adhere to ISO 27001 and SOC 2 security standards. 

And rest assured: contrary to popular opinion, OpenAI does not use Enterprise API data to train its models, ensuring that your data is handled securely and with care.

Ensuring accuracy: preventing AI hallucinations with ‘temperature control’


One common concern in the realm of generative AI is the accuracy of responses and how to avoid ‘AI hallucinations’. 

Whilst this is more challenging when asking AI to analyse raw data (after all, it’s being asked to come to its own conclusions), accuracy is very achievable when the AI uses pre-synthesised data to provide its responses.

At Dualo, we address the challenges of such hallucinations by controlling the 'temperature' of our generative AI features. Setting it to 0 ensures responses are as factual and deterministic as possible. 

All responses are grounded in source materials added to your repository (and linked to a specific page or slide within a document), and the AI only provides answers it can support with evidence.
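For illustration, here’s roughly what those two safeguards look like when calling OpenAI’s API directly. This is a simplified sketch, not Dualo’s production pipeline; the model name and prompt wording are assumptions:

```python
# Simplified sketch: temperature=0 for deterministic output, plus a system
# prompt that restricts answers to the supplied source material.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sources = (
    "Insight 12 (report 'Checkout study', slide 4): 7 of 10 participants "
    "abandoned checkout at the address form."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    temperature=0,        # minimise randomness in the output
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only using the sources provided. If the sources "
                "don't support an answer, say so. Cite the source you used."
            ),
        },
        {
            "role": "user",
            "content": f"Sources:\n{sources}\n\nWhere do users drop off during checkout?",
        },
    ],
)
print(response.choices[0].message.content)
```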

This commitment to accuracy enables real-time responses and a high degree of confidence in the output, both for researchers and for stakeholders who may be self-serving knowledge from your repository.

A call to action: Join us on the journey of secure AI integration


In conclusion, the integration of generative AI into UX research is not without its challenges, particularly in the realm of data privacy. 

However, with a safeguarded approach, customisable terms, and secure and innovative features, organisations can turn these challenges into opportunities and build a competitive advantage.

We encourage you to reflect on your own data privacy practices and consider how a dedicated knowledge repository, combined with the power of generative AI, could maximise the retrievability and impact of your UX insights moving forward. 

The reality is that resistance to AI often comes down to fear of the unknown and a lack of hands-on experience.

In fact, in a recent survey of over 13,000 professionals, Boston Consulting Group found that hands-on experience with AI tools makes business professionals far more likely to be optimistic about the use of AI:

Source: ‘What People Are Saying About AI’ by Boston Consulting Group


Specifically, 62% of business professionals with experience using generative AI are optimistic about its future, a stark contrast to the 36% of respondents without AI experience who share this optimism.

Conversely, 42% of nonusers express concern about AI, whereas only 22% of regular AI users echo these concerns.

So if you’d like to start leveraging AI to support any part of your research role, but you’re facing resistance, here’s what you should do.

Consider shifting your approach from theory towards hands-on learning, allowing those who are hesitant to experience its benefits firsthand.

This can be safely achieved using one of the many thousands of AI-powered tools out there – just remember to maintain data security by refraining from using any sensitive information during testing. Instead, opt for public or dummy data to mitigate any potential setbacks in the initial stages of these discussions.

When these individuals have the opportunity to experience the possibilities of AI for themselves, you may find their mindset quickly shifts from 'resistance' to 'how might we...'

In our experience of working with hundreds of organisations at Dualo, many “hard no” Leadership and Legal teams take just 10 minutes to see that, with the right guardrails, AI isn’t so scary after all.

Curious to see AI in action within the Dualo Platform? Join us for our next live product demo to explore the future of secure and efficient AI-powered knowledge retrieval.

ABOUT THE AUTHOR
Nick Russell

I'm one of the Co-Founders of Dualo, passionate about research, design, product, and AI. Always open to chatting with others about these topics.
