Using instant feedback from customers to improve COVID-19 content


Maria Leonardis is the DCS Digital Channel’s team Content Manager. Here, she explains how the content team used feedback to respond to customer needs during the height of the pandemic.

User research at the content planning stage only gets you so far. Once the content is live, it’s what we hear from our customer-sentiment checking tool that gives us instant, real-time feedback on what we need to fix. 

How we gather customer feedback

One of our primary sources of customer feedback is currently the Thumbs Up Thumbs Down (TUTD) symbols that appear in the footer of every page on nsw.gov.au. TUTD data feeds into metrics that help us monitor satisfaction and identify specific actions for content development and improvements.

Here’s what’s important to know for content owners on nsw.gov.au:

  • Thumbs Up (TU) button: this registers the action, but it doesn’t allow the user to log a comment.
  • Thumbs Down (TD) button: once clicked, this gives the user an opportunity to enter a comment.

As with any tool, TUTD has advantages and disadvantages in terms of what data it provides and how it provides it.

The good

Since January 2020, we have received 163,000 TUTD clicks with a 67% positive rating (Thumbs Up) across the site.

This data, alongside the use of tools like Google Analytics, Siteimprove and Google Trends, has been of critical importance to the team, particularly during COVID-19.

In March 2020, our site was designated the hub for NSW Government COVID-19 content. We went into crisis communications mode and ensured all daily content updates aligned with public policy announcements.

Moving fast wasn’t an issue for us but we did have to deviate from our usual human-centred design process. There was no time for a discovery and user research phase to support our content solution. We received a brief, we drafted content, it was reviewed, it went through multiple agencies for approval, and it was published.

TUTD was our discovery, user research and user acceptance phase all rolled into one. It became our go-to for how our content was delivering as an information service, and it enabled us to quickly detect any problems.

How we use it

Our new website went live in March 2020 just as the need for COVID-19 content ramped up, so we have been refining our use of the TUTD tool over the past 12 months with live content.

Our current process for weekly reporting is as follows:

  • match the comment data against page URLs
  • categorise feedback into broad subject areas; for example, content updates, user experience, policy commentary
  • compile the categorised TUTD comments into a weekly report
  • distribute the report to product owners and share sitewide insights.
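The matching and categorisation steps above could be sketched roughly as follows. This is an illustrative assumption, not the team's actual tooling: the category keywords, function names and data shapes are all hypothetical.

```python
# Hypothetical sketch of the weekly categorisation step. The broad subject
# areas come from the process described above; the keyword lists and helper
# names are illustrative assumptions only.

CATEGORY_KEYWORDS = {
    "content updates": ["out of date", "outdated", "old information", "update"],
    "user experience": ["can't find", "navigation", "menu", "broken link"],
    "policy commentary": ["government", "policy", "rules", "restrictions"],
}

def categorise_comment(comment: str) -> str:
    """Assign a TUTD comment to the first matching broad subject area."""
    text = comment.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "uncategorised"

def weekly_report(feedback: list[dict]) -> dict:
    """Group comments by page URL, then by subject area, for the weekly report."""
    report: dict = {}
    for entry in feedback:
        url = entry["url"]
        category = categorise_comment(entry["comment"])
        report.setdefault(url, {}).setdefault(category, []).append(entry["comment"])
    return report
```

In practice, as the article notes later, this step needs a nuanced human pass as well: keyword matching alone can't tell topic feedback from navigation feedback left on the same page.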


We share this with the wider team to see if anyone can identify areas for UX improvement, information gaps or any content that needs to be updated.

From a content team perspective, we use TUTD daily for more direct feedback.

  • After we’ve made COVID-19 content updates, we check TD closely for the next 24 to 48 hours to see if we need to clarify language or seek additional information from stakeholders.
  • Any functionality or user experience issues we note are logged as bugs or fixes with our platform team.
  • Broken links and incorrect information (for example, COVID-19 testing clinics) are fixed as identified.

Examples of TUTD improvements

Refining language and search engine optimisation (SEO)

Monitoring TUTD has helped improve our SEO by enabling us to change our content and language to reflect the terms that our customers use in search.

Based on initial research, we used the term ‘novel coronavirus’ in the titles of our COVID-19 content and in the metadata. We tracked comments coming into TUTD and watched the language that people were using. As COVID-19 became more broadly known and referred to, we switched our terms.

We also saw this happening with ‘social distancing’ and an acceptance of ‘physical distancing’ as an interchangeable term.

Information gaps

Face masks generated hundreds of comments. Some of the comments were negative sentiment or policy commentary, but we did receive questions that we could act upon (actionable insights). For example, one user asked how they could wear a mask if they had a beard. We worked with NSW Health to develop more detailed content on wearing a face mask.

Functional improvements

Our testing clinics content started as lists in accordions.

The content was organised by local health districts (LHD) and relied on manual updates.

User feedback indicated people wanted an easier way to search and our content design became unsustainable as more testing clinics opened.

One of our product teams took on the job of providing a better solution and built a clinics finder tool that was connected to Google Maps, allowing users to search by postcode or suburb.

The postcode finder solution resolved the location complaints, but another issue soon snowballed. When clusters developed and pop-up/mobile clinics were set up as a rapid-response measure, the issue of maintaining up-to-date content became a problem.

In response, our product team made enhancements to enable instant updates to the clinics data.
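The core lookup behind a finder like this is straightforward. The sketch below is a hypothetical illustration of the postcode-or-suburb search and the instant on/off switch for pop-up clinics; the record fields and matching logic are assumptions, not the production tool.

```python
# Illustrative clinic finder lookup: the data model and matching rules are
# assumptions for demonstration, not the actual nsw.gov.au implementation.

from dataclasses import dataclass

@dataclass
class Clinic:
    name: str
    suburb: str
    postcode: str
    open_now: bool = True  # lets pop-up/mobile clinics be switched off instantly

def find_clinics(clinics: list[Clinic], query: str) -> list[Clinic]:
    """Match open clinics by exact postcode or case-insensitive suburb name."""
    q = query.strip().lower()
    return [
        c for c in clinics
        if c.open_now and (c.postcode == q or c.suburb.lower() == q)
    ]
```

Keeping an `open_now` style flag on each record, rather than deleting and re-adding entries, is one simple way to support the rapid-response updates the pop-up clinics needed.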

The future of sentiment checking (fixing the not-so-good aspects)

The current process of compiling and categorising feedback is manual and requires a nuanced approach. For example, sometimes feedback provided on a topic page relates to global navigation rather than the topic, so this is categorised as user experience (UX) feedback.

This isn’t ideal as it’s resource intensive, and we are looking at how we can remedy this.

We’ve also identified issues and challenges that affect the quality of the data. The issues are not unique to us but have required us to rethink the feedback tool we will use in the future.

Our current challenges:

  • Negative sentiment bias: Users can only submit comments if they click Thumbs Down.
  • Policy commentary vs content quality: Unhappy users can spam the feedback with repeated negative ratings about government policy rather than our content, skewing the data and adding to our data-cleaning workload.
  • Privacy considerations: The channel isn’t two-way, but users expect it to work like a ‘live chat’ service and sometimes submit personal information. We are currently unable to action either of those.
  • Yes/no is a blunt instrument: Some feedback doesn’t neatly fall into either category.

Thanks to co-writer Jennifer Weiley.

How does your team use feedback? What is your process for reporting and acting on feedback? Please share your thoughts and examples below.
