So...I'm a business analyst and I'm getting ready to start UAT for a tool that has been developed for my team to capture data from a number of end users. UAT starts next week, and I need to write UAT scripts. I've done technical writing before, and I'm the person who captured and communicated all the requirements for the tool, so I'm familiar enough with what needs to be tested by my end-user UAT participants. But I could use a little advice and guidance on format and structure, if anyone can help.
I have googled this so I have some googlish info but I thought I'd come see if anyone here has any advice.
If you meant the former, then you need to pull the users in and help them. As the BA, you should have a point of contact with your internal business customer, and generally that person can and will help you get a UAT script set out. Only they know the business processes well enough to actually do this, though you can certainly streamline it for them.
No, not automated scripts. "UAT scripts" is the phrase that was used, and that's basically what was dumped in my lap. What I think it means is a step-by-step guide that walks an end user through beginning and completing a fully realized test scenario, such as recording a sale in a sales recording system: "1. Click 'Add New'. 2. Enter your first and last name with no spaces in the Sale Name field." and so on.
That's what I believe UAT scripts are. But let's say your UAT testers are the actual sales reps who will be recording sales in the system. While a script will certainly help us ensure bugs don't exist, it won't really test the user experience, because the user experience is tightly controlled by a list of steps.
So either I'm wrong about what a UAT script is, or management is using the wrong term in describing what I should be designing for the end users in this particular UAT session.
Thoughts?
That may be slightly jaded.
A UAT script should be reflective of a use case. The script (in my opinion) should describe the following:
1. The use case being tested - its official use case name
2. The success criteria - broad description of what the test is supposed to achieve and how that fits into the grand scheme
3. Pre-requisites - steps or procedures that the user must have completed before executing the test (e.g. any standard login functions or anything they have to do to prepare the test environment)
4. Any known behaviours which may affect the user's ability to complete the script (e.g. any intermittent bugs or undefined behaviour)
5. A step-by-step script, in tabulated form, with instructions on how to execute the test, with the following columns:
(1) Step No.
(2) Step Description
(3) Requirements mapping (if applicable - put the actual requirement that maps to this step, not all steps will map to a requirement)
(4) Comments - a place for the UAT tester to note any pertinent comments (e.g. "I could not find that option" / "I could not click that button")
(5) Pass / Fail - the result that the user got when trying to carry out that line of the script
The criteria for passing UAT are then obviously a completed set of test scripts, all with passes, or, where steps failed, some sign-off that the customer accepts the system anyway. Every use case (the exceptions as well as the known good situations) should have a script; there's a rough sketch of the tabulated layout below.
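Just to make the layout concrete, here's a sketch of one script in that tabulated form, using the sale-recording example from earlier in the thread. The step wording and requirement IDs are invented for illustration, not taken from any real project:

Use case: UC-04 Record a sale
Success criteria: a sales rep can record a new sale and see it in the sales list
Pre-requisites: tester is logged in to the test environment with a sales rep account

Step | Step Description                                             | Requirement | Comments | Pass/Fail
1    | Click "Add New"                                              | REQ-12      |          |
2    | Enter your first and last name (no spaces) in Sale Name     | REQ-13      |          |
3    | Click "Save" and confirm the sale appears in the sales list | REQ-14      |          |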
Does this help?
My understanding is that User Acceptance Testing is exactly what it sounds like - you put the end user in front of the software and they are literally playing around with it in some direct fashion. How detailed or directive this work is for the end user is a product of the development process that has been implemented.
If you are not the end user, then you should not be performing UAT. It sounds like you're being asked to write the directions (a.k.a. "the manual") for users who will be doing the testing.
It's a bit like teaching someone how to drive. The user should have described in the requirements that she wants to be able to turn left. "Turn Left" should be a behavior that has been spelled out in the software in some way. The steps the user must go through in order to turn left ("Put your hands on the steering wheel. Tilt the steering wheel along the perpendicular axis towards the angle at which you wish the car to travel.") are the UAT scripts.
Something I am experiencing first-hand is that it is important to get the end user involved in this sooner rather than later. They don't necessarily need to be writing out the scripts (because they have no idea how the system works yet). But you should have a well fleshed-out series of scenarios which adequately explain the depth and breadth of what the typical user expects to encounter within the system. To use the car-driving context, sometimes the user will be turning left in the rain, or over rocky ground, or at various speeds. Each of those requires elicitation and assessment in order to determine whether some sort of alternate pathing is needed within the scripts themselves. In most cases it won't make a difference, but in some it will have a huge effect on what the user (and hence the business) will find acceptable (hence the term "acceptance testing").
1. Do you have use cases or requirements?
1A - if use cases, test every flow combination available (including exception flows) to ensure it works as expected. Include a few exception cases where you put in bad data or weird crap to make sure it doesn't screw stuff up.
1B - if requirements, test every requirement and edit the requirements if they aren't in a testable form
2. Look/feel and usability - test to ensure functions are in the right places, the application is responsive with a number of users on it, and nothing is boneheaded (technical term, honest).
If you want more info, feel free to PM more or whatnot, as I've been a biz/sys analyst in various forms for the past 8 years and have a lot of release and testing experience.
UAT scripts CAN make excellent automated scripts in a secondary role - I have built smoke tests out of UAT scripts before, to run after every night's build. The idea is, when you come in in the morning, if the automated scripts built from the UAT didn't pass, you have obviously broken something.
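For what it's worth, here's a minimal sketch of how one UAT script could become a nightly smoke test. It assumes the tool is a web app and that Selenium and pytest are available; the URL and element IDs below are made up for illustration, none of that comes from the thread:

import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

APP_URL = "https://sales-tool.example.internal"  # hypothetical test environment URL

@pytest.fixture
def driver():
    # One browser session per test, pointed at the test environment.
    d = webdriver.Chrome()
    d.get(APP_URL)
    yield d
    d.quit()

def test_record_sale_smoke(driver):
    # Step 1 of the UAT script: click "Add New".
    driver.find_element(By.ID, "add-new").click()
    # Step 2: enter a name (no spaces) in the Sale Name field.
    driver.find_element(By.ID, "sale-name").send_keys("JaneDoe")
    # Step 3: save, then check the success criterion from the script.
    driver.find_element(By.ID, "save").click()
    assert "JaneDoe" in driver.find_element(By.ID, "sales-list").text

Run that after the nightly build (e.g. from the CI job) and a red result in the morning tells you that something the users already walked through has broken.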
User Acceptance Testing, in my experience, is for two functions:
1) Testing in the wild
2) User Acceptance.
Presumably, other testing has been completed, i.e. making sure the functionality is there. So UAT tends to be a slightly more relaxed setting, where users are brought in, shown how to perform certain functions, and then asked to ensure that it works for them and performs as expected (note: as expected from a user's perspective, not a developer's).
The second point is often overlooked, and it drives me nuts.
The outcome you're looking for is for the identified individuals to formally assess the system and indicate that it performs the functions as required. It can be thought of as a "final approval".
So, with that in mind - go back to the user stories/functional requirements, and work your way back from there, creating a series of tasks for the users to complete that would prove that the system developed meets the stated needs of the business.
So, for the scripts, I'd list out the steps, and put a "tick box" by each, then the user can sign at the bottom of the script to indicate that they are satisfied they can complete (or not) each function as shown.
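As a rough illustration (the steps are just the made-up sale-recording example from earlier, not from any real script), the signable version might look something like:

Use case: Record a sale

[ ] 1. Click "Add New"
[ ] 2. Enter the sale name in the Sale Name field
[ ] 3. Click "Save" and confirm the sale appears in the sales list

I am satisfied that I can (or cannot) complete each function as shown.
Tester signature: ________________   Date: __________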
One warning sign: scheduling UAT in the days just prior to go-live. It indicates that there's no real commitment to addressing any issues uncovered; that's UAT in name only. Testing and training often get squeezed.