Biometric Fusion Engine
Android Application
A set of three Android applications shown at CES 2017 to demonstrate a newly announced biometric fusion engine by Synaptics. The engine combined fingerprint authentication and facial recognition to increase system security while offering users more flexibility.
The Problem

The SentryPoint Security Suite by Synaptics includes several technologies for securing user biometric data and enhancing the usability of devices; our match-in-sensor technology, for example, is part of the SentryPoint offering. As an additional offering within this package, Synaptics created a fusion engine that combines multiple biometrics into one authentication score to improve both security and user convenience. For example, the fusion engine could take in scores from voice, iris, facial, and fingerprint authentication.
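To make the idea concrete, here is a minimal, purely illustrative sketch of score-level fusion (a weighted sum compared against a threshold); the weights, threshold, and method are assumptions and not Synaptics' actual algorithm.

```java
// Illustrative only: hypothetical weighted-sum fusion of normalized match
// scores (0.0-1.0). The real engine's scoring model is proprietary.
public final class FusionSketch {

    static double fuse(double fingerprintScore, double faceScore) {
        final double fingerprintWeight = 0.6;  // assumed weight
        final double faceWeight = 0.4;         // assumed weight
        return fingerprintWeight * fingerprintScore + faceWeight * faceScore;
    }

    public static void main(String[] args) {
        double fused = fuse(0.92, 0.78);        // example per-modality scores
        boolean authenticated = fused >= 0.80;  // assumed decision threshold
        System.out.printf("fused=%.2f authenticated=%b%n", fused, authenticated);
    }
}
```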

The first implementation of the fusion engine combined fingerprint authentication with facial recognition. It allowed us to augment our fingerprint technology by pairing it with the front-facing camera, which is pervasive on mobile devices. I was tasked with creating three applications to showcase the fusion engine for customer demos at CES 2017: an application locker, a store, and an enhanced KeyGuard.

The goals of the project were to:

  • Brainstorm use-cases where multi-factor authentication would be useful.
  • Develop three demo applications to show our use-cases at CES 2017.
  • Develop a Java library that customers could include in their own applications.
  • Provide the source code of the applications and Java library to our customers.
The home screen for the application locker in its default state (left) and with entries (right).
Design

For the three applications, we wanted to show how users could choose between flexibility and higher security using the fusion engine. For example, users may allow a messaging application to be unlocked with either their fingerprint or their face. This is helpful in situations where they cannot easily use facial recognition because they are in a dark room, or cannot easily use the fingerprint sensor because they are wearing gloves on a cold day. To open their banking application, however, the user may want to require both fingerprint and facial recognition scores to authenticate.

The application locker allowed users to select different authentication policies depending on their needs. The three policies were fingerprint and face together, fingerprint with an optional face (if the fingerprint score was too low), and either fingerprint or face. The store application operated a bit differently: the policy was determined by the cost of the items in the shopping cart. For example, the user could authenticate with either a fingerprint or their face if the total cost was less than $50, but for a purchase over $100 they had to authenticate with both. Lastly, the enhanced KeyGuard let the user select which policy to use for unlocking their device.
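As a rough sketch of the store demo's cost-based behavior (policy names and the middle price tier are assumptions; the text above only specifies the under-$50 and over-$100 cases):

```java
// Hypothetical policy selection for the store demo. The middle tier is an
// assumption; only the under-$50 and over-$100 rules are described above.
enum AuthPolicy { FINGERPRINT_OR_FACE, FINGERPRINT_WITH_OPTIONAL_FACE, FINGERPRINT_AND_FACE }

final class CheckoutPolicy {
    static AuthPolicy forCartTotal(double totalUsd) {
        if (totalUsd < 50.0) {
            return AuthPolicy.FINGERPRINT_OR_FACE;            // either factor is enough
        } else if (totalUsd <= 100.0) {
            return AuthPolicy.FINGERPRINT_WITH_OPTIONAL_FACE; // assumed middle tier
        }
        return AuthPolicy.FINGERPRINT_AND_FACE;               // both factors required
    }
}
```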

The payment dialog (left) and confirmation screen (right) in the store application.
Implementation

All of the applications were developed on a rooted Nexus 6 running an AOSP build of Marshmallow. This gave us the system-level access needed to enforce security policies on other applications through our application locker and to replace the phone's KeyGuard with our modified version that had the enhanced SentryPoint features. The project had teams from around the world contributing, so we used Git for version control throughout the development of the demo applications, system library, and native code.

The Java library was compiled into an AAR that contained the source code for the authentication dialogs, the visual resources, and an API layer that hooked into the native fusion engine code. This allowed application developers to use the dialogs we designed without implementing any of the logic or user interface themselves: the developer would call our library and then wait for a success or failure message to determine whether to proceed in their application. Developers could also implement their own dialogs using their Android themes and use only the API layer for the authentication system calls.
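A hedged sketch of what the developer-facing surface of such a library could look like; the class, method, and callback names here are hypothetical, not the shipped API.

```java
// Hypothetical developer-facing surface of the fusion-engine AAR. A client
// application calls authenticate() and reacts only to success or failure.
public interface FusionAuthenticator {
    enum Policy { FINGERPRINT_OR_FACE, FINGERPRINT_AND_FACE }

    interface Callback {
        void onSuccess();               // proceed in the client application
        void onFailure(int reasonCode); // e.g. show a retry dialog
    }

    /** Shows the bundled authentication dialog and reports the outcome. */
    void authenticate(Policy policy, Callback callback);
}
```

A client would then call authenticate() with its chosen policy and unlock its content from onSuccess(), or fall back to its own themed dialog while still using the same call for the underlying authentication.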

The lock screen showing the fingerprint plus face (left) and fingerprint or face (right) policies.
Results & Reflections

The demo applications and Java library were delivered before CES 2017, and the project was officially announced in a Synaptics press release a few days before the trade show. The demo applications were very well received by customers who visited our booth, and the demos then began traveling with our marketing team to customers worldwide.

After the initial development, our team added persistent authentication, completed a round of rebranding, and implemented anti-spoof features such as requiring the user to tilt their head or blink. The software package was distributed as a patch to an AOSP image that customers could use if they were embedding our fingerprint sensors in their devices. The three applications are still used for customer demos, and our system architecture team has continued to refine the algorithms used for determining the matcher score. In the future, the goal is to collect behavioral data from the user and feed it into the fusion engine to assist in persistent authentication.

My Role

Interaction/Visual Design, Prototyping, and Android Development.

SmartBar Keycap Redesign
Windows Application
A research study to determine whether a keycap redesign could improve the touch performance of the released SmartBar product. The primary goal of the keycap redesign was to lower the error rate related to sensor recognition. The results of the research were published at CHI PLAY 2016.
The Problem

Synaptics acquired Pacinian, a maker of keyboards that used magnets and a small ramp to generate the desired typing feel, in 2012. Two advantages of the resulting ThinTouch product were that it allowed laptop keyboards to be incredibly thin (about the height of two credit cards) and that the typing feel (actuation force) could be adjusted by swapping the magnets for more or less powerful versions.

A few years later, Synaptics used what we had learned refining ThinTouch and launched the SmartBar product. SmartBar was a small capacitive sensing module that could be added to the spacebar of a keyboard to enable gestural input using the thumbs. The sensor allowed users to scroll, zoom, bring up menus, and activate preset macros without moving their hands off the keyboard. However, many of the companies we were partnering with did not have experience with capacitive sensing, so we wanted to work with them to design a spacebar keycap that would be effective.

The goals of the project were to:

  • Design several alternative keycaps based on an existing spacebar design.
  • Retrofit the keycaps into several prototype keyboards so that they could be tested by research study participants.
  • Collect data on how the different keycaps affected user comfort, accuracy, and speed through a macro activation task.
  • Provide customers with the research results for improving future SmartBar products.
Profile view of the five keycap designs that were 3D printed and tested during the research study.
Design

With the help of the Product Designer on our team, we designed five spacebar keycaps that were retrofitted into five of our prototype SmartBar keyboards for user testing. The spacebars were 3D-printed on a Stratasys Objet 260 using VeroBlackPlus. The geometries of the spacebars were designed to match the back, front, and side edge slopes of the original production spacebar in our test apparatus. Although the sensing surface geometry varied across the spacebars, all of them fit within the original spacebar's height, length, and width envelope.

The switch height and sensor design made it necessary to raise the keycaps 2.2mm so that they sat at the same relative height on the keyboard. We also raised the "Alt" keys on both sides of the spacebar by 3.8mm to match the spacebar height, so the spacebar would not feel taller than the surrounding keys.

The keycap designs are shown in the graphic above. Keycap A was a taller version of the original keycap from the test apparatus. Keycap B maintained the overall dimensions of the original but made the surface flat. Keycap C was slightly sloped at -8 degrees and rounded at the front edge. Keycap D was chamfered at -22 degrees and included a flat, inactive area on the surface for resting thumbs. Keycap E sloped across the entire surface at an angle of -22 degrees with no affordance for the inactive area.

One of the 3D-printed keycaps retrofitted into a prototype SmartBar keyboard.
Implementation

Two applications were developed for the performance data collection. An application that prompted users to press specific zones on the SmartBar was written in Java and ran in the foreground during the user study; a preliminary study had shown that participants preferred graphical prompts for the SmartBar zones over text prompts, so graphical prompts were used. A second application, written in C#, collected raw data from the capacitive sensor in the background using the Synaptics SDK. I used this data to get precise information on the contact geometries of the thumbs, which I later provided to our internal teams.

During the user study, the foreground application first prompted the participant to press a randomly selected WASD key. This was to reset the hand location of the participant to a common position typically used when playing first-person games. When the key was registered, the SmartBar zone prompt appeared after a 2000ms timeout. Once the spacebar input was registered, another 2000ms timeout occurred before another WASD key was prompted. This sequence repeated until fifty spacebar inputs were registered. After completing the performance task on a particular keyboard, participants were asked to rate the comfort, accuracy, and speed related to the keycap design. After rating the keycap design, the participant moved to the next keyboard in their sequence.
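A simplified sketch of that trial loop, assuming hypothetical prompt-and-wait hooks (the real study software is the Java foreground application described above):

```java
// Simplified trial loop: alternate a random WASD reset prompt and a SmartBar
// zone prompt, pausing 2000 ms between prompts, until 50 zone inputs are logged.
import java.util.Random;

final class TrialSequence {
    private static final int TARGET_INPUTS = 50;
    private static final long PAUSE_MS = 2000;
    private static final char[] RESET_KEYS = {'W', 'A', 'S', 'D'};

    /** Hypothetical UI hooks; each call blocks until the press is registered. */
    interface StudyUi {
        void promptKeyAndAwaitPress(char key) throws InterruptedException;
        void promptZoneAndAwaitPress() throws InterruptedException;
    }

    void run(StudyUi ui) throws InterruptedException {
        Random random = new Random();
        for (int registered = 0; registered < TARGET_INPUTS; registered++) {
            ui.promptKeyAndAwaitPress(RESET_KEYS[random.nextInt(RESET_KEYS.length)]); // reset hand position
            Thread.sleep(PAUSE_MS);         // 2000 ms before the zone prompt
            ui.promptZoneAndAwaitPress();   // graphical SmartBar zone prompt
            Thread.sleep(PAUSE_MS);         // 2000 ms before the next reset prompt
        }
    }
}
```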

Keycap   Reaction Time   Error Rate (ER)   Recognition ER   Placement ER
A        858ms           14.7%             6.2%             8.5%
B        855ms           6.5%              2.2%             4.2%
C        825ms           6.8%              0.6%             6.2%
D        843ms           13.7%             1.6%             12.0%
E        865ms           15.8%             3.8%             11.9%
A table listing the performance data for the five keycap designs.
Results & Reflections

The objective results from the research study can be seen in the table above. When looking at the subjective results, participants disliked Keycap A and Keycap B because they felt they were hitting the top-front edge of the spacebar more often than on the others. Participants also noted that Keycap D was less preferred because of the small ridge at the top. Some participants stated that the large slope of Keycap E made depressing the key switch more difficult. Keycap C was rated as the overall favorite because participants felt that it was the most familiar to them and also that it worked well.

Overall, the objective and subjective results of the user study showed that Keycap C (rounded with a -8 degree slope) was the most preferred and provided the best recognition error rate, the measurements we considered most important for a final product using our SmartBar module. The results made a case that the physical design of the keycap can notably impact user preference and performance for spacebars with embedded capacitive sensors.

The results of this research were published at CHI PLAY 2016 and were also distributed to customers through a company white paper. The first SmartBar product was released in 2016 and can be purchased from Thermaltake as the POSEIDON Z Touch keyboard.

My Role

Interaction Design, Prototyping, Windows Development, User Testing, and Statistics.

Fingerprint Enrollment
Android Application
An application that shows Synaptics' partners how they should design enrollment user interfaces when including our fingerprint sensors in their devices. The application was initially used for research purposes, but was later used by our marketing teams and then distributed with our SDK.
The Problem

Synaptics acquired Validity Sensors in 2013, adding fingerprint sensors to our portfolio of human interface technologies. Validity had no user experience team, so our team was tasked with researching and designing interfaces for fingerprint enrollment.

Fingerprint enrollment is the process of initially creating a template of the fingerprint so that users can be authenticated as needed. The goal of enrollment is to maximize the captured fingerprint area to account for variation in usage environments and finger positions. Enrollment consists of user interface components and matching algorithms that work together to create the best possible enrollment template.

The goals of the project were to:

  • Design user interfaces that produce the highest quality fingerprints for authentication.
  • Create scalable interfaces for various usage environments and different sensor types.
  • Research the best way to keep users engaged during the enrollment process.
  • Interface with the Synaptics SDK to develop our UI using the same tools as our customers.
The introduction graphic that was created by the product designer on our team.
Design

Fingerprint sensors were common on enterprise laptops, but having a fingerprint sensor on a smartphone required us to rethink how to design the user interface within a mobile context. Our team started by looking at the user experience of enrollment on swipe sensors, but quickly moved to area sensors once the technology became available.

We started with designs for three different types of enrollment experiences: fully guided, partially guided, and unguided. After some initial brainstorming around the different experiences, we settled on a deep dive into fully guided enrollment. We began by iterating on a design geared around a small, rectangular sensor on the front of a mobile device; this shape and location matched the constraints of our next-generation fingerprint sensor, which had been selected by Samsung for the Galaxy S6.

Our initial design included a significant amount of instruction as we expected most Android users would be completing a fingerprint enrollment for the first time. These instructions were placed on an introductory screen and also provided dynamically during the enrollment. Feedback was an important component during the enrollment process as well. We were comfortable with the visual feedback we had designed (a fingerprint changing in color intensity), but tried different combinations of haptic and audio feedback throughout the design process.

One of the early enrollment user interfaces that we tested in our usability studies.
Implementation

The first step was to build a prototype framework that allowed us to iterate on designs quickly and have multiple enrollment user interfaces within one application. We knew that there were going to be other enrollment designs we would have to implement for different sensor shapes and locations. I initially developed low-fidelity prototypes on Android using a PandaBoard and our external development sensors before moving to a custom AOSP build on a Nexus device.

A detailed Configuration screen allowed us to build tailored enrollments by choosing individual sensors and locations, toggling interface elements, and selecting different feedback types. Since each project required a custom SDK, I created an emulator that allowed us to design, prototype, and test interfaces when the software wasn't yet available. Once the SDKs were released, validating the designs was a quick process.
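A minimal sketch of the abstraction that makes an emulator like that possible: the enrollment UI talks to a small interface, with one implementation backed by the real SDK and one that simulates capture events. All names here are hypothetical and not the actual Synaptics SDK.

```java
// Hypothetical sensor abstraction: the enrollment UI depends only on this
// interface, so a simulated implementation can stand in for a missing SDK.
interface EnrollmentSensor {
    interface Listener {
        void onCaptureProgress(int percentComplete);
        void onEnrollmentComplete();
    }
    void startEnrollment(Listener listener);
}

/** Emulator: fakes a fixed number of touches on a background thread. */
final class SimulatedSensor implements EnrollmentSensor {
    private static final int TOUCHES_REQUIRED = 12; // assumed touch count

    @Override
    public void startEnrollment(Listener listener) {
        new Thread(() -> {
            for (int touch = 1; touch <= TOUCHES_REQUIRED; touch++) {
                try { Thread.sleep(500); } catch (InterruptedException e) { return; }
                listener.onCaptureProgress(100 * touch / TOUCHES_REQUIRED);
            }
            listener.onEnrollmentComplete();
        }).start();
    }
}
```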

The enrollment user interfaces were tailored to each of the device mockups that we were provided.
Results & Reflections

We completed five usability studies, each with eight to ten internally recruited participants, which provided us with both quantitative and qualitative feedback. We provided fingerprint images captured by the sensor to the team working on the matching algorithms. We continued to move to higher-fidelity prototypes and used all of the feedback to improve the user interfaces before making a final recommendation prior to the Samsung Galaxy S6 launch.

In general, we decreased the amount of instruction provided in the application based on feedback from users. We also scaled back the text feedback provided to the user and moved those feedback elements into our visual design. We have since tested interfaces that ranged from realistic to abstract with varying amounts of animation. Our design explorations resulted in users describing our UI as easy to follow, helpful, and visually appealing.

The application was very successful internally and has since been used by our marketing teams for product demos. Originally the source code was available only to internal development and marketing teams, but it is now provided in our SDK for customers to reference and to use in their own enrollment interfaces. Our team continues to use the application to prototype new enrollment user interfaces for our next-generation fingerprint sensors. Since the application is now provided to customers, I have implemented many of the features that were originally in our engineering applications, such as fingerprint management, displaying fingerprint bitmaps, and viewing debug information from the sensor.

My Role

Interaction Design, Prototyping, Android Development, and User Testing.

UX Suite
Android and Web Applications
A testing tool for evaluating the touch performance of a mobile device, created for conducting usability evaluations of touch sensors at Synaptics. The work was later published at MobileHCI 2014 and the application became available for public download from the Google Play Store.
The Problem

I wanted to create a testing tool to measure the performance of our touch technology on mobile devices against our competitors. Our team had testing tools to measure touch performance on Windows, but those tools were not compatible with Android. We also thought it would be useful to conduct research on the real-world usability of our devices and get results that reflected user performance across all of the applications on a mobile device.

Touchscreen performance on mobile devices is especially important due to issues like non-linearity across the surface, poor edge performance, industrial design limitations, varied usage environments, and a wide range of form factors. Having an extendable platform to develop different types of tests would allow us to fulfill our research needs and also help our marketing teams in showcasing the performance of our touchscreens.

The goals of the project were to:

  • Design a usability platform for tests evaluating touchscreen performance on mobile devices.
  • Implement features allowing for unmoderated usability studies to be conducted.
  • Add networking features to help expedite and simplify the data collection process.
  • Provide data to internal marketing teams about our touchscreen sensor performance.
This application was my first time working within Google's material design language.
Design

One of the important goals of the application was to simplify the process of conducting user studies. I believed I could design an application that would allow mobile user studies to be run with little direct oversight from a test moderator. This kind of design would let our team complete these competitive studies in the background and focus on more in-depth studies that required a moderator to be present.

After brainstorming different platforms to develop on, we settled on UX Suite as a web application and an Android application that worked together. The Android application would house all of the usability tests and collect the research data, while the web application would allow for remote test configuration and management of results. Configuration values would be sent to the Android application from the web server, and the Android application would upload test results to the web server.

I started with the design of a test for the most common task on a smartphone: selection. The selection test was based on one of the most widely accepted predictive models for the accuracy of target selection, Fitts' law. The web control panel was designed to allow easy access to all of the options included in the test, with the same user interface as the one seen on the Android device. I included the ability to manage results, assign default test values, link custom surveys, and edit the instruction text from the web interface. Lastly, device-specific settings could be sent down to the application, including the test background color, default audio level, default haptic strength, and whether results were shown to the participant at the end of the test.
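For reference, the test's difficulty is usually characterized with the Shannon formulation of Fitts' law; the helper below is a generic illustration of that calculation, not UX Suite's actual code.

```java
// Generic Fitts' law helpers (Shannon formulation); not UX Suite's code.
final class FittsLaw {
    /** Index of difficulty in bits: ID = log2(D / W + 1). */
    static double indexOfDifficulty(double distance, double width) {
        return Math.log(distance / width + 1.0) / Math.log(2.0);
    }

    /** Throughput in bits/s for one movement: ID divided by movement time. */
    static double throughput(double distance, double width, double movementTimeSeconds) {
        return indexOfDifficulty(distance, width) / movementTimeSeconds;
    }

    public static void main(String[] args) {
        // Example: a 9 mm target at a 60 mm distance selected in 0.55 s.
        System.out.printf("ID=%.2f bits, TP=%.2f bits/s%n",
                indexOfDifficulty(60.0, 9.0),   // about 2.94 bits
                throughput(60.0, 9.0, 0.55));   // about 5.3 bits/s
    }
}
```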

The web control panel that allowed for management of the unmoderated usability studies.
Implementation

The web interface was built using HTML, CSS, JavaScript, and PHP, with the jQuery library used to make UI elements interactive and dynamic. The values implemented in the selection test included typical Fitts' test parameters such as widths, distances, angles, and replications, along with settings for selecting orientations and error feedback. The user-facing messages inside the application could also be updated, including the description and instruction cards shown on the main page of the test, as well as optional introduction and survey dialogs. All of these settings could be sent down to all devices or customized on a device-by-device basis.

The Android application targeted devices running OS version 4.0 (Ice Cream Sandwich) or later, which accounted for 84% of active devices at the time of development. Detailed results were saved as CSV files with headers dedicated to session identifiers, device information, test configuration, and the test results; the CSV files allowed for easy importing and analysis in most statistical applications. The application also hooked into Google Drive to handle post-test surveys in order to ease survey creation and analysis. Results were saved both to the internal storage of the device and uploaded to the server for easy management.

One of several boxes that I created containing materials for the unmoderated usability studies.
Results & Reflections

Two internal user studies were conducted on the application to assess its usability and the self-moderated study process. For both studies, participants completed the selection test within UX Suite, and subjective questionnaires were administered after the test through an embedded Google Drive survey within the application. The first study had participants complete the remote usability study using a device that was dropped off at their desk. The second study had participants install the application on their personal devices.

The results showed that participants completed the usability studies much faster when using their own devices. More than half of the participants in the second study completed the test on the same day the instructions were sent out via email. In that study, 83% of participants said the instructions provided inside the application were "very clear," and 89% said that completing the study without a moderator present was "very easy." Every participant said they would complete more self-moderated studies in the future. The results of both user studies were very encouraging and justified continued refinement of the application and the self-moderated usability research process.

The application has continued to be used internally and has also been provided to several of our hardware partners, such as HP and Lenovo. The results of the user studies were published at MobileHCI 2014 and the application was released to the public on the Google Play Store. My colleagues also utilized the application for their research on proximity-based touchscreen interactions that was published at CHI 2015.

My Role

Interaction/Visual Design, Prototyping, Android/Web Development, User Testing, and Statistics.

Image Suite
Windows Application
Our team at Carestream was tasked with redesigning the user interface of the Image Suite product in response to customer feedback. We spent more than six months on an overhaul that included a fully functional prototype, which we put through several rounds of usability testing.
The Problem

Image Suite is an acquisition modality targeted at clinics worldwide that do not need the powerful DirectView system from Carestream. Due to poor feedback from users in the field, our team was tasked with investigating the reported issues and recommending solutions. Some areas of concern were the inefficient workflow, the number of pop-up windows, and outdated design patterns.

Our team spent more than six months working on an overhaul that completely reimagined the user interface of the product. This was an iterative process that included designing and developing interactive wireframes, performing usability tests, conducting management reviews, and having discussions with an offshore development team.

The goals of the project were to:

  • Redesign the acquisition workflow to make technicians more efficient.
  • Add the most requested features, such as switching between patients and search.
  • Bring the product visuals and interactions more in line with the DirectView product.
A screenshot taken from the previous version of Image Suite, which our team was tasked to redesign.
Design

One of the most important aspects of the redesign was making sure that all of the current features would make it into the new revision of the software. This required us to keep every feature discoverable and usable while we reduced the complexity of the interface. The focus of the redesign was reducing the number of steps required by the technician. We wanted to increase their efficiency in many of their common tasks, such as placing image markers on the exposure. We designed an interface that allowed them to place image markers immediately after scanning in an image instead of having to visit a post-processing screen.

The redesign also allowed us to add shortcuts in the workflow, such as starting the exam immediately after creating the order for an X-ray. Lastly, we removed all unnecessary dialog boxes and pop-up windows that even our expert users found confusing. When the redesign was completed, the user interface was reduced to only three screens (Worklist, Order Entry, and Acquisition).

One of the early wireframes that improved the technician workflow and experience.
Implementation

An offshore development team was responsible for the final implementation of our redesign, but our team completed the redesign process using Axure RP. It allowed us to develop an interactive prototype that we could ship to the team developing the software revision, and to export a design specification and helpful documentation directly from the prototype. Axure simplified our team's workflow, and we continued to use it for other projects due to the positive feedback.

A screenshot of Image Suite after the redesign (without the final icons).
Results & Reflections

The major obstacle for our team during the redesign was evaluating our different ideas and prototypes: we did not have consistent access to Image Suite users for frequent usability tests. As a result, we were only able to complete a few usability studies toward the end of the design process, which gave us positive feedback. The qualitative and quantitative feedback we received from those studies reinforced the improvements to the product.

When the redesign shipped in 2013, the marketing materials focused on the more efficient workflow and the new touch-friendly interface.

My Role

Interaction Design, Prototyping, User Testing, and Statistics.

DirectView Configuration
Web Application
A redesign of the System Configuration menu that is included on the Carestream DirectView product. The redesign focused on simplifying the deep menu hierarchy and including highly requested features such as search and user favorites.
The Problem

The System Configuration menu on our DirectView product was the number one source of frustration for our customers and field engineers. Our users had concerns about both the overall navigation method and the confusing content of the individual configuration pages. Technicians stated that the menu was intimidating overall and that menu item locations were unnecessarily confusing and cumbersome.

When our team reviewed the menu, we decided that the biggest issue with the design was the large list of buttons that had to be scrolled through: there were multiple levels of button-only menus that would eventually lead to the configuration page a user was looking for. However, we decided the menu needed a complete redesign, as reworking the menu hierarchy alone would only solve some of the issues.

The goals of the project were to:

  • Redesign the navigation of the menu to solve the issues related to poor discoverability.
  • Add new features that reduce the complexity of the configuration screen.
  • Rework the individual configuration pages with modern design patterns and widgets.
  • Reduce the time needed to configure the product for first-time use.
The original System Configuration menu that I was tasked to redesign.
Design

The first task of the redesign was a card sorting exercise with our internal subject-matter experts. The redesign used the results of that exercise to reduce the menu hierarchy to only two levels. After the new configuration categories were created, I worked on combining similar configuration pages within the interface. For example, the original menu had four separate pages for configuring bar codes on patient wristbands, which I combined into one page.

I chose the accordion design pattern for the new navigation menu to reduce the number of items on screen at once. Another improvement was simply alphabetizing the pages under each category; previously, each new configuration page had just been added to the end of the configuration menu. Other improvements included the ability to search, browser-like navigation buttons, thumbnails, and user favorites.

After the navigation menu was completed, I went into each configuration page to reduce its complexity and updated the contents of each page with modern design patterns and widgets. The last task was to complete a design guide for developers, with the goal of ensuring that developers would stay consistent with the redesigned pages when adding new pages to the menu.

One of the early wireframes that shows the new navigation method and structure.
Implementation

I initially developed low-fidelity prototypes of the new navigation method in Axure RP, then moved on to redesigning the individual configuration pages in Axure as well. When I moved to a high-fidelity prototype, I started developing the user interface on a web server using HTML, CSS, and JavaScript, relying on the jQuery UI library for interactions. At the end of the design process for a specific page, I would run a script I developed that imported the Axure redesign directly into the high-fidelity prototype.

The high-fidelity prototype with the final menu hierarchy that was handed off to the product team.
Results & Reflections

During the redesign, I met frequently with our application engineers, who often traveled to headquarters for training on new features and upcoming products. I made it a priority to meet with each new group that arrived so I could get consistent feedback throughout the redesign.

During the project, I met with three groups of about fifteen users each to show the new user interface and solicit feedback. The technicians were excited by the redesigned navigation and appreciated that our team was working to make their on-site work at our customers easier. Feedback from users during the process was unanimously positive. Due to the time constraints of the visiting application engineers, however, I was never able to put the redesign through any formal usability testing with them.

The high-fidelity prototype was handed off to the DirectView product team for implementation in the next major version update.

My Role

Interaction/Visual Design, Prototyping, and Web Development.