Landing Experience

🤔️

#foundational research #interview #user segmentation #SEO #PLA #E-commerce

Background

The design team I worked with was redesigning the Wayfair landing pages for customers who arrive from external sources (e.g., Google Shopping), with particular concerns about the conversion rate and the bounce rate.

As the primary researcher, my goal was to conduct a study to understand customers' expectations when shopping online, and to generate insights to improve the customer experience on the landing page.

 

Approach the Problem Space

Understand the Problem

I received the request from my manager, as the project was planned in the UXR roadmap.

To understand the problem space, I first communicated with my manager to get the background information. Throughout the process, regular 1-1 meetings and ad-hoc project meetings kept me aligned with my manager.

I reviewed the existing documents to gather the big picture, along with the related projects and the existing insights - thanks to the Ops team for keeping the repository and the wiki pages live!

At the same time, I proactively reached out to my co-workers to discuss the problem. While the direct stakeholders helped me understand their request, the broader stakeholders I found through snowballing enabled me to draw a bigger picture and uncover hidden knowledge - in some cases, decisive knowledge about the background, such as how a related effort had been paused at that time.

All the preparation work enabled me to collect "what we already know" and "what we don't know yet," so as to identify the knowledge gap.

Re-scope?

What I did not expect at the beginning was that, by consulting my colleagues, I identified a broader group of stakeholders and connected the dots in the organization. While I was working on the problem space within the XD-led team, another PM-led team was working on the same problem with different approaches. We found each other, which confirmed and strengthened the project's value for both of us. But this also raised a question: should I re-scope the project to try to address their needs as well?

Further, reviewing the documents and talking to people made me realize there was a conflict between the quant data and the qual data (i.e., while the quant data suggested A was better, the qual data suggested B was better). Should I take a step further and try to explain the conflict?

The short answer to both of the questions above: yes, and no.

After doing the desktop research, communicating with my manager (thanks Sarah!), and confirming the bandwidth, I set the project scope and developed several levels of success measurement: the baseline was to answer the questions asked by my team and documented in the UXR roadmap; if conditions permitted, I would also try to answer more questions and explain the conflicts between qual and quant.

Identify the Roadblocks

As I dove deeper, I identified a potential problem that would affect the research: there was a live A/B test that could interfere with recruiting qualified participants. Should I ignore it, pause the study until the test ended, or try my best to take advantage of it, if possible?

I escalated the problem to my manager, proactively consulted the data and engineering teams working on the project, and came back to my manager with several possible solutions. After discussing with the ResearchOps team, we decided to tweak the recruiting methods to accommodate, and possibly leverage, the A/B testing.

 
 

Design the Research

Research design is an iterative process. Throughout, I made sure that (1) the work got done, and (2) the stakeholders stayed aligned.

As I was collecting the background information and conducting the desktop research, I was also drafting the project brief. As soon as the high-level information (e.g., project goal, timeline, etc.) seemed clear to me, I shared the document with stakeholders and asked them to leave comments and/or add questions they'd like answered. Based on their input, I proposed the research questions and methods, and shared them with stakeholders to ensure alignment.

I created a dedicated Slack channel to share updates and store all project-related messages. Anyone interested in the project could join the channel and stay posted.

When designing the research, I made data-driven decisions. For example, since users of the landing pages come from multiple sources (e.g., paid search, paid social), should I include them all? I consulted the product manager and the data scientist, who directed me to the landing-page-related database. Based on the data, I identified that users coming from X source on Y channel should be my target group, given its large share of traffic and poor performance.
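That kind of segment triage can be sketched in a few lines. This is a hypothetical illustration with made-up numbers and invented source/channel names - the real analysis came from Wayfair's internal database, not from this code:

```python
# Hypothetical traffic data: (source, channel) -> (sessions, conversion_rate).
# All figures are illustrative, not real Wayfair numbers.
traffic = {
    ("paid_search", "google_shopping"): (50_000, 0.012),
    ("paid_social", "facebook"):        (20_000, 0.025),
    ("paid_search", "bing"):            (5_000,  0.030),
}

total_sessions = sum(sessions for sessions, _ in traffic.values())

# Flag segments that carry a large share of traffic but convert poorly
# (thresholds here are arbitrary, chosen only for the sketch).
candidates = [
    (segment, sessions / total_sessions, cvr)
    for segment, (sessions, cvr) in traffic.items()
    if sessions / total_sessions > 0.3 and cvr < 0.02
]

for segment, share, cvr in candidates:
    print(f"{segment}: {share:.0%} of traffic, {cvr:.1%} conversion")
```

With these toy numbers, the paid-search/Google Shopping segment is the only one flagged - a large slice of traffic with a weak conversion rate - which mirrors the logic behind picking the X-source-on-Y-channel target group.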

I also made decisions based on business priority. Wayfair sells millions of products - which product categories should the research include? By communicating with my manager and my co-workers, reviewing the documents, and attending the all-company meetings, I became more familiar with the business. I decided to focus on Product A and Product B because of their potentially high impact.

Besides, I made decisions based on existing internal and external research. For example, based on a model of the consumer buying process, I limited my focus to customers in certain shopping stages.

The Project Brief/Research Proposal

 
 

Conduct the Research

Recruiting

Participants were recruited on UserTesting.com based on (1) the recruitment screener, and (2) the unmoderated interview.

Based on demographic information provided by the Consumer Insights team, I developed the recruitment screener, working with my manager and the Research Ops team (thanks!) along the way.

The unmoderated interview was originally used to recruit for Product A but not for Product B, because the live A/B testing could affect recruiting for the latter. Regardless, I developed unmoderated interview scripts for both Product A and Product B.

Moderated Interview

As the unmoderated interview recordings came back, I reviewed them, took notes, and selected the participants who qualified for the moderated interview.

As I reviewed the unmoderated interviews, I was (not really) surprised to find that the recordings could already answer a large portion of the research questions. Besides, responses were already becoming repetitive (saturation reached). Should we proceed to recruit as originally planned?

I escalated my question to my manager. We discussed it for a while and decided that (1) I would start data synthesis and document the questions that hadn't been answered yet, and (2) I would reach out to qualified participants for the moderated interview as long as the timeline permitted.

In the end, I reached out to several participants and conducted 4 moderated interviews. The moderated interviews took place on Zoom (UserTesting's Live Conversation), each lasting ~45 minutes. Before each interview, I developed an individualized moderation guide based on the general guide and that participant's responses to the unmoderated test.

Data Synthesis

First, I went through the unmoderated tests (30+ videos), making clips and taking notes as I reviewed.

Second, I took notes during the moderated interviews, and documented my notes and impressions after each one.

With all the data in hand, I tried to transcribe the whole videos (based on auto-transcription). Failed.

I tried to put all the data into a spreadsheet and then build a pivot table. Partial success: I built the pivot table with 3 participants, then gave up, considering the workload. If time permitted, for this project, I would still consider it worth doing - participants did mention specific features when making comments, and a complete documentation of their responses would reveal not only the potential improvements but also priority information.
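The pivot-table idea can be sketched as a tiny script. Everything here is hypothetical - invented participant IDs, feature names, and sentiment labels - meant only to show why a full pivot would surface priority information:

```python
from collections import defaultdict

# Hypothetical coded interview notes: (participant, feature, sentiment).
# Illustrative rows only; the real data came from 30+ UserTesting recordings.
notes = [
    ("P1", "filtering", "negative"),
    ("P1", "photos",    "positive"),
    ("P2", "filtering", "negative"),
    ("P3", "reviews",   "positive"),
    ("P3", "filtering", "negative"),
]

# Pivot: feature -> {sentiment: count}, roughly what the spreadsheet
# pivot table produced for the 3 participants I completed.
pivot = defaultdict(lambda: defaultdict(int))
for participant, feature, sentiment in notes:
    pivot[feature][sentiment] += 1

# Features mentioned most often float to the top - the priority signal.
by_frequency = sorted(pivot, key=lambda f: -sum(pivot[f].values()))
print(by_frequency)
```

With these toy rows, "filtering" tops the list with three mentions, all negative - exactly the kind of frequency-plus-sentiment signal that makes a complete pivot worth the workload.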

I tried an affinity diagram, which worked out. I put participants' quotes, my notes, and my observations onto a Miro board. I arranged the sticky notes based on the main research questions, grouped them together if they fell under similar themes, and drew lines and made diagrams where they were connected in some sense.

Throughout the process, I communicated with my manager - sharing updates, sharing insights, and discussing roadblocks.

Yes - I did try data synthesis several ways. After completing the project, I shared my learnings and celebrated wins in the XD co-op meeting. This screenshot documents my trials, errors, and successes, and was shared with my co-workers.

Trials and Errors and Successes in Data Synthesis

 
 

Make Impacts

I drove impact on several levels. First, I shared my study report with my direct and broader stakeholders - the research questions got answered, and the research got consumed.

When sharing the report, I also tailored my read-out sessions to the audience. For example, after sharing the insights with the XD-led team, I was asked to do another read-out session with the PM-led team. When sending out the invitation, I also included a document and asked them to add their questions, which I could try to answer, or at least address, during the session. Some questions did come up that I could not answer, but I am glad that I at least created an environment for discussions to happen within the larger organization, between cross-functional teams.

The Study Report

Second, my manager shared with me later that my work was highlighted in the quarterly meeting, and some of the specific suggestions were incorporated into the vision work for future design and/or future testing. Glad to see that! When developing the report, I intentionally structured the Takeaways page as a combination of (1) higher-level questions for discussion, (2) high-level insights, and (3) actionable suggestions - I am glad this structure worked.

Third, after completing the project, I took ownership of taking the insights further.

For example, my design partner came back asking if I could further investigate an issue uncovered in the study, specifically its severity. I leveraged data (in this case, FullStory, the digital experience analytics platform) to investigate the issue.

As another example, I identified filtering as an interesting problem space while conducting this research. Later, I did secondary research on filtering, combined existing cases and research findings, and shared them broadly on Customer Connection, an empathy-building program within Wayfair. Dots connected. While this landing page research is more customer-facing, some of the Customer Connection audience comes from other businesses. We shared our knowledge of this shared problem space.

Moreover, the value of UXR and of my team was recognized during my read-out session with the broader stakeholders, as well as during the Customer Connection session. Besides showing what I found in the research, I also proactively showed what UXR can do to help the business. One piece of evidence: as I was doing the research, the business was investing in and building a team for the landing experience. There was a designer on that team, but no researcher at the time. My work was one of the first pieces in their repository. Later, my manager even moved from our original team to the landing experience team and became their first UXR.

Oh, and one more! I feel flattered, and also proud, to be recognized with "Relentless Customer Focus" - thanks, my co-worker!

 
 

Reflection

What have I learned from the process?

If I had to name one thing, it would be dots-connection.

The learning was twofold: the value of connection, and my strength in making connections.

The dots-connection in the organization was also twofold: (1) between the business units and the problem spaces, and (2) with people - in other words, collaboration.

 

Part of the Stakeholder Reviews, from my Performance Review

 

What if I would do this project again?

Things I will do the same?

Things I will do differently?

Aren’t you curious to know more? Let’s chat!

 

Project Gallery