Proteus Motion

Performance Testing Insights

Helping prove the value proposition for both Proteus and its customers.

Organization

Proteus Motion

My Role

Principal Designer / Design Manager

The Team

Steven Bazarian, Andres Gonzalez, Brendan Kelly, Justin Maskell, Doug Moore, Jason Shaev, Jaskar Singh, Will Waterman

Proteus is a one-of-a-kind, software-enabled fitness diagnostics and training robot. It uses 3D resistance, constant resistance in all dimensions, to create the feeling of training under water. Our product has applications in sports performance training and physical rehabilitation, with customers including professional sports teams, physical therapy clinics, and bio-hacking facilities. Proteus Motion’s mission is to become the universal standard for physical strength and power measurement.

As the director of software product design, one of my key focuses has been shaping the way our users interface with Proteus via our software, both during active sessions and away from the physical hardware. One key area of interaction is performance testing and reviewing an athlete’s test results afterward. This case study showcases the year-long journey to deliver impactful testing insights to our customers, resulting in a 24% increase in usage of this mode.

prot_test_01

Testing is our bread and butter

Proteus is marketed foremost as an assessment tool for measuring peak power production in the human body. To facilitate those measurements, trainers administer a test from our Performance Testing catalog. Hardware sensors record data as an athlete performs a series of guided movements, which our software interprets to display power, acceleration, velocity, and imbalances for the tested muscle groups. Testing is the only mode that records this data, so increasing testing uptake is incredibly important as it helps us grow our database of demographically associated performance norms.

We recognize that our customers' success directly impacts our own success. Testing is critical to helping athletes achieve their fitness and performance goals, as it allows them to set an initial benchmark and measure progress from working with a trainer. When we can showcase test results in a way that helps athletes understand how they stack up competitively and how they’re progressing, we provide our customers an effective way of proving their own value and expertise. This in turn manifests in increased revenue for them in the form of increased client retention, new client referrals, and potentially providing testing as a separate paid service.

prot_test_02
Proteus in use
prot_test_03
Trainer user persona

Enhancing understanding and providing guidance

This very large project (especially for the size of the team) spanned the greater part of a year. Our team broke the work down into four phases with the following goals:

Phase 1

Make data personally relevant via athletic cohorts

Phase 2

Provide basic guidance through a new results classification system

Phase 3

Frame results simply with recommendations to assist in program building

Phase 4

Provide a digital training guide for athletes to reference between tests

Implementing the experience in phases allowed us to release more frequently and test out new concepts that would eventually make their way into the more fully realized experience. Further iterative work continued throughout 2023. For simplicity in this case study, some sample artifacts reflect the most recent version, but the core feature set and principles have remained consistent since the initial launches.

Make data personally relevant via athletic cohorts

In our original reporting views, performance data was relayed either in its native units, for example watts for power and m/s² for acceleration, or as a percentile score. Many users didn’t understand what those native values meant in terms of practical sport application, or whether their results were good or bad. The percentile scores we did show framed results against all Proteus users, who ranged from Little League baseball players to NFL quarterbacks to retirees using our product for physical rehab. Because of that huge breadth of users, global percentiles weren’t an accurate or meaningful way of conveying performance.

I worked with our human performance subject matter experts to provide a way of personalizing test scores so that a user could understand whether their results were good or not. Since we were already collecting demographic information in a user’s profile, we could frame data via a user’s physical attributes like age, gender, height, and weight, as well as the sport they played and their skill level (high school, amateur, professional, etc.). I suggested that we define a user’s athletic cohort via those attributes and use it to filter our vast database of performance data. This would frame their results only against other users like them, giving greater relevance to both the native-unit scores and the derived percentile scores.
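
To make the mechanics concrete, here is a minimal sketch of how cohort filtering can drive a personalized percentile. It is illustrative only; the field names, cohort attributes, and percentile method are assumptions rather than Proteus Motion’s actual data model.

```typescript
// Illustrative sketch only, not Proteus Motion's production schema.
interface CohortFilter {
  ageRange: string; // e.g. "25-29"
  gender: string;
  sport: string;
  skillLevel: "high school" | "amateur" | "professional";
}

interface TestRecord {
  cohort: CohortFilter;
  peakPowerWatts: number;
}

// Keep only records from athletes who match the user's cohort.
function filterByCohort(records: TestRecord[], cohort: CohortFilter): TestRecord[] {
  return records.filter(
    (r) =>
      r.cohort.ageRange === cohort.ageRange &&
      r.cohort.gender === cohort.gender &&
      r.cohort.sport === cohort.sport &&
      r.cohort.skillLevel === cohort.skillLevel
  );
}

// Percentile of the user's score within the filtered cohort.
function cohortPercentile(score: number, cohortRecords: TestRecord[]): number {
  if (cohortRecords.length === 0) return NaN;
  const atOrBelow = cohortRecords.filter((r) => r.peakPowerWatts <= score).length;
  return Math.round((atOrBelow / cohortRecords.length) * 100);
}
```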

Wireframe approaches for setting an athletic cohort
prot_test_04
prot_test_05
prot_test_06

Anticipating the need to set an athletic cohort in other reporting views, I advocated for a modal approach that could be launched wherever required. This would also allow more screen real estate to be allocated to displaying results data and save engineering effort in the future. Acknowledging that we lacked ample sample sizes for certain sports and age ranges, I suggested that we limit the granularity of the cohort settings. This meant choosing inputs for age, weight, and height via predetermined ranges (say every 5 years, 15 lbs, or 4”). Additionally, I deemed it crucial to offer prompt feedback if the user selected settings that wouldn’t yield statistically significant results.
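
A small sketch of the two mechanics described above: snapping free-form inputs to predetermined buckets and flagging cohorts that are too small. The minimum-sample threshold and function names are hypothetical.

```typescript
// Assumed minimum cohort size for a meaningful comparison; illustrative only.
const MIN_COHORT_SAMPLE = 30;

// Snap a raw value to the lower bound of its predetermined bucket,
// e.g. weight in 15 lb steps, height in 4 in steps, age in 5 year steps.
function toBucketFloor(value: number, bucketSize: number): number {
  return Math.floor(value / bucketSize) * bucketSize;
}

// Prompt feedback when the selected cohort is too small to be meaningful.
function cohortSizeWarning(sampleSize: number): string | null {
  return sampleSize < MIN_COHORT_SAMPLE
    ? "Not enough athletes match these settings for a meaningful comparison."
    : null;
}

// Example: an athlete weighing 172 lbs falls into the 165 lb bucket.
const weightBucket = toBucketFloor(172, 15); // 165
```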

prot_test_07
Redesigned Comparisons view
prot_test_08
Cohort setting modal
prot_test_09
Cohort selection applied to a Power Report

The aptly named Session Comparisons was the first reporting view to benefit from cohort filtering. The original view displayed only the user’s power and the average power for all users, by movement. The redesigned view let a user see their power and velocity scores in both native units and personalized percentiles, along with granular data such as cohort means and sample sizes. The old Power Report (more on that later) received this cohort-filtering utility a couple of months later.

Provide basic guidance through a new results classification system

While developing the Comparisons reporting view, we initiated a trial partnership with Equinox Fitness Club, deploying a single Proteus unit at their Flatiron NYC location. This was our first unit in a “general fitness” multi-location chain. With a different client demographic and more variability in trainer expertise than was typical of our customers at the time, Equinox provided a compelling test site for the next phase of reporting work. Convincing Equinox trainers to extend Proteus usage beyond training to testing was essential. A key hurdle to testing uptake was trainers’ lack of understanding of how to adjust client training programs based on test results, a challenge not unique to Equinox.

To address this, our human performance SMEs devised a system for classifying athletes based on their test scores. Each classification in turn recommended appropriate training adaptations and sample exercises. As a low-cost way to test understanding of this system and foster our relationship with Equinox, I quickly designed an abbreviated explainer for our Power Reports, laminating it and attaching it to the hardware on site. This artifact, coupled with Proteus Motion-led in-person training sessions, jumpstarted Proteus usage at the site. The classifications and recommendations below became the backbone of our redesigned reporting platform.

prot_test_10
Low Power

Does not generate enough force to produce sufficient power

Below the 50th power percentile in your particular cohort

Speed Dominant

Does not possess power to use speed effectively

Above the 50th power percentile AND acceleration is greater than power by more than 5 percentile points

Strength Dominant

Does not possess appropriate speed to use power effectively

Above the 50th power percentile AND power is greater than acceleration by more than 5 percentile points

Balanced Power and Speed

Has an ideal balance of speed and power

Above the 50th power percentile AND power and acceleration are within 5 percentile points of each other
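
The rules above translate almost directly into a small decision function. A minimal sketch, assuming power and acceleration are already expressed as percentiles within the athlete’s cohort; how a score of exactly the 50th percentile is handled here is my assumption, not a documented rule.

```typescript
type Classification =
  | "Low Power"
  | "Speed Dominant"
  | "Strength Dominant"
  | "Balanced Power and Speed";

// Classify an athlete from cohort percentiles, following the rules above.
function classify(powerPercentile: number, accelPercentile: number): Classification {
  if (powerPercentile < 50) return "Low Power"; // exactly 50 treated as above threshold (assumption)
  if (accelPercentile - powerPercentile > 5) return "Speed Dominant";
  if (powerPercentile - accelPercentile > 5) return "Strength Dominant";
  return "Balanced Power and Speed";
}
```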

Frame results simply with recommendations to assist in program building

After providing a way to personalize data via cohorts and developing a nascent classification system through which to frame results, we were ready to leverage these capabilities to prove out our product’s value proposition, and in turn our customers’. At this point, the CEO’s direction was to place a generalized training recommendation, based on a user’s overall power and acceleration percentiles, at the top of the old Power Report. This may have seemed like a low-effort and obvious ask, but I suspected it wasn’t as straightforward as believed. I proceeded to audit the existing reporting experience, noting all of the states, the data provided, and relative traffic from the top of the funnel.

Through this process I identified four key drawbacks with the proposed solution:

Due to the brittle way these reports were built (optimized for printing), updating or adding content to them was very difficult.

There were only two Power Reports, each individually built and tied to a specific test. Building and maintaining new bespoke reports for each of our 15+ additional tests would be a considerable effort from both a design and an engineering standpoint.

Customers who had designed their own custom tests would never be able to receive the newly proposed insights and recommendations for those tests.

The existing Power Reports were already notoriously dense and difficult for most trainers to understand, so adding more data would exacerbate that issue.

Current state audit
prot_test_11
prot_test_12

Instead, I proposed a reporting platform upon which we could build further capabilities over time. It would leverage the customizable athletic cohorts and results classification developed previously, but in more powerful ways than the CEO envisioned. This proposal conferred three main benefits:

Benefit #1

Simplify the architecture by consolidating displayed data and removing redundant views

Benefit #2

Allow users to more easily parse the provided data by simplifying the views through progressive disclosure mechanics

Benefit #3

Easily apply the new capabilities and generated insights to potentially any test, regardless of origin

To test affinity for our proposed reporting platform, my junior designer and I designed two directions to evaluate with customers. I guided my designer in writing a research plan and moderation guide aimed at understanding how trainers would prefer to navigate the reporting experience, interpret results, make sense of our new classification system, and apply the recommendations to their clients’ programming. Each direction included a high-level overview of the user’s overall scores before breaking results down into movement categories (umbrella groupings of similar movements activating the same muscles) and individual movements (leveraging the Comparisons work from above). Both directions applied the new classification system, provided recommended training adaptations and sample exercises, and included a mobile-friendly training “pocket guide”.

We conducted five interviews with existing Proteus customers. I led four of them, with my designer observing and taking notes, and made sure he had the opportunity to conduct the final interview after shadowing my sessions. I synthesized the responses, noting trends in understanding and affinity, and assembled them into a topline research report for the greater team.

prot_test_13
FigJam board for collecting respondent sentiment

I leveraged our research findings to merge high-affinity ideas from the two concepts into a single wireframe direction. This design split data readout and analysis into two to four segmented views; depending on the test administered, the Bilateral Balance or Power Report view might not be available. Test Results serves as the default view, providing raw data and cohort-derived percentiles from the overall test down to each tested movement. Insights focuses on analysis, showing which areas are in balance and which have a deficiency in power or acceleration. Below that, the trainer is given guidance on how to apply the results: depending on power and acceleration scores, each movement category is placed into one of four training adaptations supplemented with recommended exercises. Finally, trainers can easily share recommendations with their clients in the form of a simplified, mobile-friendly training “pocket guide”.
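
As a rough sketch of that view structure, the set of segments could be derived per test, with Test Results and Insights always present and the other two conditional. The type and flags below are illustrative, not the actual implementation.

```typescript
type ReportView = "Test Results" | "Insights" | "Power Report" | "Bilateral Balance";

// Derive the 2-4 segmented views available for a given test (hypothetical flags).
function availableViews(test: {
  hasPowerReport: boolean;
  hasBilateralMovements: boolean;
}): ReportView[] {
  const views: ReportView[] = ["Test Results", "Insights"];
  if (test.hasPowerReport) views.push("Power Report");
  if (test.hasBilateralMovements) views.push("Bilateral Balance");
  return views;
}
```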

Test Results view
prot_test_14
Insights view
prot_test_15
“Pocket Guide”
prot_test_16

Following stakeholder approval of the mid-fi wires shown above, we began detailed design of the new experience. While I focused primarily on new states (Test Results, Insights, “Pocket Guide”), my designer tackled the holdover views (Power Report and Bilateral Balance) and progress graphing styles.

Two key elements of the interface proved challenging at this point in the project. The movement category cards did the heavy lifting in communicating key metrics, progress, and training recommendations. Because the testing procedure allows users to skip a movement, add a set, or modify a movement’s default resistance, the cards and their embedded visualizations required many permutations in order to frame data in an apples-to-apples manner. When showing progress, we couldn’t directly compare results where movements were skipped or performed at different resistances.
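
One way to express that comparability constraint is a simple per-movement check: progress is only charted when the movement appears in both sessions at the same resistance. This is a hypothetical sketch of the rule, not the shipped logic.

```typescript
interface MovementResult {
  movement: string;
  resistanceLevel: number | null; // null when the movement was skipped
  peakPowerWatts: number;
}

// Two results are comparable only if the same movement was performed
// (not skipped) at the same resistance in both sessions.
function isComparable(a: MovementResult, b: MovementResult): boolean {
  return (
    a.movement === b.movement &&
    a.resistanceLevel !== null &&
    a.resistanceLevel === b.resistanceLevel
  );
}
```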

prot_test_17
prot_test_18
prot_test_19

To give users a sense of their overall performance and introduce the movement categories performed in the test, I suggested a simple visualization to anchor the Insights view. It was intended to provide visual interest and a snapshot understanding of how the user scored in each movement category before drilling into more granular detail or recommendations. Developing a “hero” visualization is always a challenge: it means balancing stakeholder asks for utility and marketability against user-centric concerns such as immediate understandability and personal relevance. Between us, my designer and I explored over 30 directions, navigating conflicting stakeholder desires while still aiming for something relevant to the end user and feasible for our engineers to implement. We ended up trying a lot of weird approaches before arriving at a solution that appeased all parties.

prot_test_20
prot_test_21
prot_test_22

The detailed designs below represent a more recent version than what originally launched, but the core features and layout remain true to the original release. Notable updates from wireframe to detailed state include the two final anchor visualizations for the Insights view, plus other quality-of-life additions such as sample size ranges, contextual explainer content, and more precise recommendations around training adaptations.

prot_test_23
Overall test results by movement
prot_test_24
Insights and training recommendations
Provide a digital training guide for athletes to reference between tests

The final release in the new reporting experience was the aforementioned “pocket guide”. I proposed a fully responsive but pared-down experience so that athletes could benefit from the insights and recommendations of the full experience in an easily accessible format. As noted earlier, most software views are optimized for the attached touchscreen and receive only a limited responsive treatment (roughly down to a mid-sized tablet); this would be the first view to be fully responsive.

Most customer sites leave Proteus logged in with a trainer’s credentials, negating the need for athletes to log in themselves. Another category of users, designated “supervised athletes” (typically rehab patients or pro athletes), lacks login credentials entirely. This means most athletes will not, or cannot, log in to view their own reports.

With that understanding, I needed to ensure that any athlete, regardless of their user type or ability to access Proteus, would be able to receive their training recommendations. My approach required that the “pocket guide” exist as a web view that anyone with the link could access. To mitigate concerns about data privacy, a long, obfuscated URL makes it difficult to access a user’s recommendations unless the link is explicitly shared with them. We also built in a 6-month expiration date for each shared recommendation to further safeguard against unintended future viewing.
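
A minimal sketch of the sharing scheme described above: a long, unguessable token in the URL plus a six-month expiry. The token length, URL shape, and domain are assumptions for illustration.

```typescript
import { randomBytes } from "crypto";

const SIX_MONTHS_MS = 1000 * 60 * 60 * 24 * 182; // roughly six months

// Create an obfuscated, expiring share link for a test session's pocket guide.
function createPocketGuideLink(testSessionId: string) {
  const token = randomBytes(32).toString("hex"); // 64-character unguessable slug
  return {
    testSessionId,
    url: `https://example.com/pocket-guide/${token}`, // placeholder domain
    expiresAt: new Date(Date.now() + SIX_MONTHS_MS),
  };
}

// Reject access once the link has passed its expiration date.
function isExpired(expiresAt: Date, now: Date = new Date()): boolean {
  return now.getTime() > expiresAt.getTime();
}
```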

prot_test_25

The most common scenario for sharing the “pocket guide” is at the Proteus hardware, right after finishing a test. In some sports performance gym settings, athletes and trainers don’t carry their smartphones while training, so sending recommendations to the athlete’s email address is typically the most convenient method. Otherwise, the athlete can scan the on-screen QR code for immediate access.

The “Pocket Guide” for Training on the Go
prot_pocket_guide_frame_01
prot_test_27
prot_pocket_guide_frame_02

My original intent for the “pocket guide” was a simple, lightweight display of the prescribed training adaptations and exercises based on the athlete’s test results: just the instructions and very little else. Over time it expanded into a miniature version of the reporting experience, including progress charts and performance metrics from the reference test. While this extra information provided little utility for an athlete’s training, it was intended to convey the trainer’s value proposition by showing progress to the athlete and their sponsors.

Impact and customer reception

In the quarter after the release of the “pocket guide”, we saw a distinct uptick in the average number of test sessions per customer site. Furthermore, our bi-annual trainer survey suggested that the share of trainers who reported testing as their most-used mode also increased.

20%

Increase in testing at sites that were frequent testers

24%

Increase in testing across all sites

10%

Increase in trainers who reported Performance Testing as their most frequently used mode

According to the survey, the top reasons cited for frequently using our testing mode included “tracking progress”, “informing programming”, and “data and metrics”. This implied they found value in our new reporting approach.

Some anecdotal post-launch feedback from customers:

“We started selling Proteus performance tests to athletes and in just two back-to-back Saturdays we generated $3,750 of new revenue!”

“The new software gives my trainers superpowers — in minutes they can identify specific areas needing attention and work that into a personalized program.”

“Having worked most of my entire career in golf evaluation programs, I can absolutely say that this is the first time there is actual quantifiable analysis available to help you with your golf game. No opinion. No guesses. Just analysis and results! Game changer! THX Proteus!”

“Most people get performance testing once a month, I make my own programming for clients based on what proteus says they should work on.”

In addition to these impacts on key customer-centric metrics, the work delivered benefits to our own software engineering team. By consolidating data into fewer views and building an extensible platform, I eliminated a considerable amount of future engineering effort around creating new Power Reports and maintaining existing views.

4

Legacy views deprecated

0

Hours spent working on Power Reports in 2023

Nil

Amount of engineering effort required to enable functionality for new tests
Related Work

Proteus Software Experience

Proteus Motion

A collection of software features designed for a one-of-a-kind hardware product.

© 2023 Will Gabrenya