What we have learned from Federal customers in FY20
By Angelo Frigo, Feedback Analytics team
PMA quarterly update
Collecting and interpreting public feedback is a critical first step toward improving agency services. The cross-agency team focused on improving customer experience (CX) recently reached a significant milestone in its effort to solicit feedback from the public and learn from it together: over 2.2 million completed surveys from the first half of FY2020 have been received and analyzed for insights, and performance.gov/cx/ has been updated so this information can be shared more publicly in 2021.
It’s been a journey. We convened this group of CX professionals, reached agreement to survey the public in a consistent manner in accordance with the Paperwork Reduction Act (PRA), and operationalized this in fifteen high-impact agencies. We are now coming together as a collaborative to take a sober look at the results and commit to lasting change. In this post, we’d like to share an update on what we are doing with this data, what we have learned so far, and where we are going next.
What are we doing with this data?
This data is being used by agency CX operations in three ways: 1) Customer satisfaction and trust are useful metrics for evaluating overall program performance and investments in CX; for example, Veteran trust in VA outpatient care reached an all-time high this year (va.gov). 2) Feedback from the public identifies problem areas for deeper inquiry through human-centered design. 3) Collectively, the data is revealing patterns across agencies that are informing capability improvements and budgeting at the highest levels of government.
What have we learned so far?
Overall satisfaction and trust are pretty good. The total overall satisfaction score of 4.33 out of 5 in Q2 was nearly the same as in Q1 (4.30). This is a good score; anything above 4.25 (85%) is generally considered good across industries. In most cases, questions were also asked about Trust, Effectiveness, Ease, Efficiency, Equity, and Employees. See Section 280 of OMB Circular A-11 on managing customer experience and improving service delivery for more information.
It is too early for the regression analyses to be conclusive, but a few hypotheses are worth noting.
- Ease and efficiency are the leading drivers. Ease of completing a service online and Efficiency, the amount of time it takes to do so, both strongly correlate with satisfaction and trust in websites and services completed online. Both had an r value of 0.99 (on a scale of -1 to +1).
- Human interaction matters more in person than by phone. The overall Employee score was the highest among all factors (4.6). Regression analysis suggests that front-line staff influence satisfaction and trust in in-person services (r of 0.99), such as VA outpatient care, more than call center interactions do (r of 0.65), which suggests that contact center interactions are presently more transactional and less personal.
- Digital modernization efforts improve both trust and satisfaction. A few agencies offered contrasting data on different digital channels and on unidentified/anonymous versus authenticated/signed-in users. We’re seeing higher scores from mobile web than desktop and low utilization of native apps. This highlights the importance of responsive web design standards and progressive web applications rather than native applications. Regarding authentication, accessing protected data by logging in resulted in higher satisfaction. Personalized experiences, using login and profile solutions, are common in the private sector and agencies using these technologies appear to be benefiting from them.
- A 4 (agree) on a five-point Likert scale is actually a negative result. Likert scales are generally defined on a five-point scale from strongly disagree to strongly agree. A score of 4 is defined as ‘agree’ or ‘somewhat agree,’ but our data suggests that a 4 indicates a minor problem occurred. We know this because we have similar datasets that ask the same question about the same experience using a two-point good/bad scale, and the sum of responses 1 through 4 on the five-point scale equaled the percentage of ‘bad’ responses. The lesson here is that top-2-box analysis should not be used with five-point Likert scales (see the sketch after this list).
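To make that last point concrete, here is a minimal sketch of the comparison using made-up response counts rather than our actual survey data. It shows how top-2-box (counting 4s and 5s as positive) can overstate satisfaction when only the 5s line up with ‘good’ on the two-point scale.

```python
# Illustrative only: made-up response counts, not actual survey data.
from collections import Counter

# Hypothetical distribution of 1,000 answers to the same question,
# asked once on a 5-point Likert scale and once on a good/bad scale.
likert = Counter({5: 700, 4: 150, 3: 80, 2: 40, 1: 30})
binary = Counter({"good": 700, "bad": 300})

n = sum(likert.values())

top_box = likert[5] / n                              # only "strongly agree"
top_2_box = (likert[5] + likert[4]) / n              # "agree" + "strongly agree"
share_1_to_4 = sum(likert[s] for s in (1, 2, 3, 4)) / n
share_bad = binary["bad"] / sum(binary.values())

print(f"top-box (5 only):      {top_box:.0%}")      # 70%
print(f"top-2-box (4 and 5):   {top_2_box:.0%}")    # 85% -- overstates the positive share
print(f"sum of 1-4 on Likert:  {share_1_to_4:.0%}") # 30%
print(f"'bad' on binary scale: {share_bad:.0%}")    # 30% -- matches the 1-4 share, not 1-3
```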
Where are we going next?
There are so many great things happening in CX across government. As our data changes over time, it will indicate improvement and the degree to which the government is keeping pace with expectations. This data is also guiding our own improvement priorities: we are working with agencies to 1) make feedback more useful internally, 2) broaden the definition of a ‘customer,’ and 3) orient around end-to-end services even when those services cross organizational boundaries.
Interpreting Verbatims
While survey scores are useful indicators of performance, they are less well suited to identifying what to fix. For that, we must interpret the open-text responses. New skills, time, and tools are required to make this feedback more useful to agencies. Toward that end, we are exploring new ways to use AI to interpret, categorize, and distribute unstructured feedback data from users collected through surveys, calls, and publicly on social media.
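As one illustration of what that could look like, here is a minimal sketch of a supervised topic classifier that routes open-text comments into rough buckets for follow-up. The categories, example verbatims, and the scikit-learn approach are all assumptions made for the sketch, not a description of our actual tooling.

```python
# Illustrative sketch only: a simple topic classifier for open-text feedback.
# Categories, example comments, and the modeling approach are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of hand-labeled verbatims (in practice, many thousands would be needed).
train_comments = [
    "The login page kept timing out before I could finish",
    "I waited 45 minutes on hold before anyone answered",
    "The form asked for the same information three times",
    "The representative was patient and walked me through every step",
    "I could not reset my password on my phone",
    "The call center transferred me four times",
]
train_labels = [
    "website/login", "call wait time", "form burden",
    "employee praise", "website/login", "call wait time",
]

# TF-IDF features plus logistic regression: a common, simple baseline.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_comments, train_labels)

new_feedback = ["I sat on hold forever and never got a person"]
print(model.predict(new_feedback))  # likely 'call wait time' for this hypothetical comment
```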
We are also considering new mechanisms to make it easier for the public to give actionable feedback, such as awareness campaigns, a persistent place to give feedback on all services, and the recent changes to the A-11 Section 280 guidance that allow users to identify the best or most problematic parts of their experience.
Broadening the Definition of ‘Customer’
Over 1.1 million complete responses per quarter is a lot, and it suggests that the public is willing to give their time to inform service improvements. However, those responses cover a fraction of the Federal Government and are not evenly distributed: seven services were responsible for over 90% of the total survey responses. And those services only represent the public. Many agencies serve industry or other agencies, and they can also reduce system costs by making use of CX principles and practices. Collaboration across agencies could be facilitated by focusing on meaningful public subgroups such as taxpayers, veterans, retirees, or the unemployed.
Orienting toward End-to-End Service Delivery
The majority of the data we collect is focused on website interactions and contact center support calls. Only a few collections consider the full workflow of a task, such as doing one’s taxes, or the entire journey from learning about and applying for a passport to receiving it. Organizations that deliver great customer experiences optimize not just for their own costs but also for the burden and effort of their users. As stated above, ease and efficiency are the leading drivers of satisfaction, so our hypothesis is that a more holistic measure of the burden placed on the public to complete a service, or a customer effort score, might be a more natural fit with agency operations and budget cycles than a long-term relationship score like trust, which changes more slowly and is less within the control of the development and delivery teams doing the work.
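As a purely hypothetical illustration, the sketch below shows one way a burden measure might roll up across an end-to-end journey. The stages, time estimates, and ease scores are invented for the example; they are not a proposed standard or a description of any agency’s data.

```python
# Hypothetical illustration of an end-to-end burden roll-up.
# Stage names, time estimates, and ease scores are invented for the example.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    minutes_spent: float   # average reported time on task
    ease_score: float      # 1-5 answer to "this step was easy to complete"

passport_journey = [
    Stage("learn about requirements", 20, 4.4),
    Stage("gather documents and photo", 90, 3.8),
    Stage("complete and submit application", 45, 4.1),
    Stage("track status and receive passport", 15, 4.6),
]

total_minutes = sum(s.minutes_spent for s in passport_journey)
# A simple customer-effort-style score: time-weighted average of ease,
# so long, painful stages pull the overall number down the most.
weighted_ease = sum(s.ease_score * s.minutes_spent for s in passport_journey) / total_minutes

print(f"Total time burden: {total_minutes:.0f} minutes")
print(f"Time-weighted ease score: {weighted_ease:.2f} / 5")
```

The time weighting is just one possible choice; the point is that a journey-level measure surfaces the stages where effort concentrates, which is where burden-reduction work would start.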
Please reach out with thoughts and questions. We would love to hear from you and take feedback ourselves!