UX Research & Usability Testing for European Neobank

Usability testing for a leading European neobank to uncover UX gaps and establish a research-driven culture for their product

This case describes a complex usability research project conducted by Craft Innovations for a fast-growing European neobank in 2024. Our goal was to assess the existing UX and validate future design hypotheses using both qualitative and quantitative usability testing methods.
  • Client
    name undisclosed under NDA
  • Location
    Europe
  • Industry
    Banking / Neobanking
  • Timeline
    75 days

The project focused on identifying usability gaps across the client’s mobile app and demonstrating how structured user research can support product evolution in the long run. It combined deep-dive moderated sessions with large-scale unmoderated testing to provide a clear, evidence-based view of how customers interact with key banking features.

About the Client

A fast-growing European bank approached us to explore ways to continuously improve its mobile banking experience. The app already showed steady growth, an expanding user base, positive feedback in app stores, and solid performance metrics.

However, the team had reached a point where analytics and internal product hypotheses were no longer enough. They wanted to learn directly from real users where friction really occurs in their digital user journeys, and build internal user research expertise for long-term improvement.

The bank approached us not only to conduct a comprehensive usability evaluation but also to learn Craft Innovations’ research methods and practices, so that they could later build their own user research processes and culture.

Project Goals

Together, we outlined several objectives:

  1. Identify usability gaps across critical user flows – onboarding, payments, card management, and daily banking – through both moderated and unmoderated testing.
  2. Evaluate the client’s app using UX metrics, such as SUM (Single Usability Metric) and MIUS (Mission Usability Score).
  3. Provide clear recommendations on how to address identified UX gaps and improve the overall product experience.
  4. Demonstrate how usability testing works in practice – from planning and recruitment to synthesis and recommendations – and deliver a clear roadmap for internal UX research adoption.
  5. Educate the client’s product and design teams on how to interpret findings, prioritize fixes, and integrate UX research into their workflow.

Challenges

The main challenge was not a critical flaw in the app itself, but a lack of visibility into user behavior. The product performed well, yet subtle issues – unclear microcopy, missing context during KYC, inconsistent navigation, friction in card-related actions – had gone unnoticed.

Internally, the client had no established user research practice. Designers and product owners relied on analytics, support feedback, and assumptions. The project needed to prove that user testing isn’t just “nice to have” – it’s a measurable tool for better business decisions.

Another layer of complexity was timing: the client wanted to conduct a large-scale evaluation without delaying other product initiatives. This required a solid research setup that was both comprehensive and easy to replicate later – and, at the same time, able to demonstrate its business value to the wider team.


Process explained

  1. We began with the research design phase, defining the methodology, participant criteria, and scope of testing. Our team outlined how moderated and unmoderated studies would complement each other, what metrics to apply, and how to ensure results could be used effectively by the client’s team.
  2. Then we held scoping sessions to identify the most important user scenarios – from onboarding and KYC to P2P transfers and card locking. Each task was designed to reveal not only usability but also users’ perception of trust and transparency, using the Single Usability Metric for moderated testing and the Mission Usability Score for unmoderated sessions.
  3. Participant recruitment was handled by Craft Innovations based on screening criteria agreed with the client’s internal team. We provided regular updates on recruitment progress, which kept the client closely involved in the research process while ensuring that all participants met the study’s criteria.
  4. During moderated usability testing, participants performed tasks while our researchers observed their behavior and asked probing questions. This method helped uncover the “why” – for example, why users felt unsure during digital agreement signing or confused by service plan selection screens.
  5. In unmoderated testing, we validated those findings at scale. Using Maze, 200 participants completed tasks independently, generating large-scale data: misclick rates, completion times, and navigation paths. This allowed us to validate patterns discovered during moderated sessions and confirm whether they reflected broader trends. 
  6. We combined both data streams into a unified analysis framework, linking UX issues to product KPIs such as activation, engagement, and retention. The results were delivered as an actionable report containing visualized task maps, user quotes, and recommendations prioritized by impact, and presented during a collaborative playback session with the client’s design and product leads.
  7. Beyond insights, we guided the team through the structure of the research itself – showing how to design scenarios, measure usability, and interpret SUM and MIUS results. 

Methods we applied

  • Moderated usability testing with SUM scoring (task completion rate, task time, error rate, and customer effort score)
  • Unmoderated testing via Maze with MIUS scoring (direct path completions, indirect path completions, misclicks, task time)
  • In-depth user interviews to capture the “why” behind user actions
  • Behavioral and emotional observation
  • UX issue clustering by journey phase and impact on key metrics
  • Synthesis and insight mapping
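To make the scoring concrete, here is a minimal sketch of how a composite usability score like SUM can be assembled from the per-task components listed above. This is an illustration only: the names (`TaskResult`, `study_score`) and the equal-weight averaging are our assumptions for the example, not the exact standardization used in the study (the full SUM method standardizes each component against a specification limit before combining).

```python
# Simplified, illustrative usability scoring: average each task's
# pre-normalized component scores, then average across tasks.
# NOTE: this equal-weight average is a sketch; the full SUM method
# standardizes components (e.g. via z-scores) before combining them.

from dataclasses import dataclass


@dataclass
class TaskResult:
    completion_rate: float   # share of participants completing the task (0..1)
    time_score: float        # normalized time efficiency (1.0 = at/under target)
    error_free_rate: float   # share of error-free attempts (0..1)
    effort_score: float      # inverted customer-effort score, rescaled to 0..1

    def sum_score(self) -> float:
        """Equal-weight average of the four components for one task."""
        return (self.completion_rate + self.time_score
                + self.error_free_rate + self.effort_score) / 4


def study_score(tasks: list[TaskResult]) -> float:
    """Overall study score: mean of per-task scores, as a percentage."""
    return 100 * sum(t.sum_score() for t in tasks) / len(tasks)


# Hypothetical numbers, purely for illustration:
tasks = [
    TaskResult(0.90, 0.80, 0.85, 0.75),  # e.g. onboarding
    TaskResult(0.70, 0.60, 0.75, 0.65),  # e.g. P2P transfer
]
print(f"Study score: {study_score(tasks):.0f}%")  # → Study score: 75%
```

The same shape works for an unmoderated MIUS-style score by swapping in direct/indirect path completions and misclick rates as the components.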

Results & Impact

  • 68 usability issues identified, from unclear onboarding steps to hidden navigation in P2P transfers.
  • 82 actionable UX recommendations prioritized by business impact and development effort.
  • 40 redesigned prototype screens validated with real users post-report.
  • 76% SUM, 57% MIUS – We benchmarked the app’s current usability, giving the client a clear baseline for continuous UX improvement and measurable progress over future design cycles.

For this bank, the project became more than a usability audit – it became a starting point for building a research-driven culture. Through our collaboration, the team gained a clear understanding of their customers’ real behavior and learned how to continuously test, measure, and improve their product using proven research methods.

Team behind the project

The project was delivered by the Craft Innovations research team with extensive experience in fintech. Our expertise includes more than 2,000 usability sessions, 40,000 hours of research, and products impacting over 51 million users worldwide.

This specific project was led by a dedicated unit focusing on banking experience research – the same team that supports our long-term clients across retail, premium, and SME banking.

Core roles on this project:

  • UX/CX Research Lead: research design, methodology, and insight quality; guided SUM/MIUS usage and synthesis.
  • Quantitative Research Analyst: data modeling for SUM/MIUS, metrics calculations, data QA, and dashboards.
  • UX Researchers: moderated sessions, interview facilitation, task analysis, insight mapping, recommendations.
  • Research Assistant: participant coordination, note-taking, transcript management, and research artefacts.
  • Project Manager: scope, timeline, stakeholder comms, and playback/workshop facilitation.