Recently, I spent the day at a local amusement/theme park with my family. One of the things that immediately struck me, aside from how much fun we had (more on that later), was how little effort had gone into exploratory testing at the park. Now, they probably don't call it exploratory testing at theme parks; they probably phrase it as mystery shopping or something similar, but that isn't quite accurate. You see, there is a big difference between experiencing the theme park as a user and testing the theme park. Much like app and website testing, the nuances are quite clearly delineated. Essentially, it boils down to not knowing what you don't know.
The park actually did a commendable job on the automated testing of their various rides. They knew how long it was taking for rides to move passengers through the queue system, and they knew whether a ride was working or not. Their systems were built to pipe this information to an app that park goers could access. So far, so good, right? At this point, the park operators and managers knew whether their core critical paths were being executed, and they could take action to rectify any failures. In much the same way, automated testing helps us identify repetitive actions, test them for completion, and provide feedback when an error occurs. The problem, however, is just how little context it provides to the human users. Or in this case, the park goers.
The park was clearly covering their known potential issues:
- Whether a ride was functioning
- Whether a ride was not functioning
- Ride capacity
- Approximate wait times for guests
What the park was not able to do, though, was measure their unknowns. And this is the crux of where the park was failing: they were leaning heavily on automated testing of their known potential issues and critical flows while essentially ignoring the park experience itself.
For instance, as a guest on this particular day, I dutifully waited in line for a new ride with a short posted wait time (one of the benefits of excited kids waking up early). Twenty minutes into the expected 25-minute wait, I noticed we had not moved far towards the front. After a short while, a voice came on to say that the ride was delayed. I checked the app, and the wait time was still 25 minutes. After another 10 minutes, the ride closed and we all left. The app updated to indicate that the ride had closed, so the automation was working as intended.
The disappointment of being kicked out of a ride line is not measurable via computer or automated methods. What happens after a ride closes is that all its users migrate to another ride, which impacts that ride's flow. In this instance, the next ride wasn't designed to handle the influx, and wait times there ballooned 3-4x. Later in the day, the first ride reopened to much joy and we returned. The line was much longer by now, but we waited. And waited. And waited. Can you guess what happened next? It failed again. The system was flawless in its execution, updating to advise that the ride had closed, but it was unable to measure the sentiment of 100 people who had queued up, overheated, for 45 minutes.
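The 3-4x ballooning isn't mysterious; a simple queue model predicts it. Here is a sketch in Python with hypothetical numbers (the queue sizes and ride capacities are invented for illustration) showing how a closed ride's displaced guests multiply the wait at a slower neighbouring ride:

```python
# Simple queue model: your wait is the number of people ahead of you
# divided by the ride's throughput.
def wait_minutes(queue_length: int, riders_per_hour: int) -> float:
    return queue_length / riders_per_hour * 60

neighbour_queue = 150     # guests already queued at the neighbouring ride
neighbour_capacity = 400  # riders/hour the neighbour can actually move
displaced_guests = 300    # guests migrating when the first ride closes

before = wait_minutes(neighbour_queue, neighbour_capacity)
after = wait_minutes(neighbour_queue + displaced_guests, neighbour_capacity)

print(f"before: {before:.1f} min, after: {after:.1f} min, "
      f"ratio: {after / before:.1f}x")
```

The status automation sees each ride in isolation; nothing in it models guests flowing between rides, which is exactly where the experience broke down.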
While I could regale you with more stories about this park's failures, the biggest takeaway was their over-reliance on automated systems to measure the health of their park, or product. Ultimately, it wasn't the rides in the park that mattered but the experience we had. The same applies to your products, apps or websites. While it may seem as though the app or website is the product, it is the actual experience your users have within it that counts. In this situation, the park could have used people to uncover the unknowns. I know this was not happening because the staff at the park did not interact or work in a manner that indicated quality was a mindset for their guests.
Ultimately, the park, and you, need a blended approach to testing your products, apps and websites. With the park, the issues were serious enough that we will never return, and the same holds true for your users. In a greater context, your users are only going to give you one or two chances before they uninstall or refuse to use your service, and it doesn't matter how well you've automated your unit tests or critical flows.