An otherwise perfectly estimated project can be undone by optimistic estimates for the testing or QA phase.
In my project management days, working in a busy digital agency, we would frequently get towards the end of a project and have to really rush the testing in order to find and fix as many bugs as possible before handing over to the client and subsequently launching the site.
If the development was running late then this already tight schedule got squeezed even more and I would be browser testing late at night with one hand over my face whilst the other clicked the mouse, as I didn't want to see how broken the site was in IE7 (or, as we're going back a few years, IE6). I needed to get it done in order to prepare a prioritised list of bugs for the developers to work through in the morning.
We put more testing time into the next set of project estimates and also tried to limit the amount of functionality we were building, to reduce the development time and therefore the testing time too. This was better, but there was still invariably a rush towards the end of the project, which meant that we tested the priority areas and had to quickly run through the rest of the testing.
As you can imagine, when testing is rushed, quality can suffer. The project is launched in a state of 'good enough' quality rather than great quality.
To start with, the approach is similar to estimating the development phase of the project. Take the specification, wireframes/designs and business requirements and start to create a list of the items that need to be tested.
For the business requirements, you may have statements such as 'needs to load quickly' or, hopefully, something more like 'page load needs to be within an acceptable amount of time, which is xx seconds'.
If that is the case then put performance testing on your list.
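As a rough illustration, here is a minimal sketch of how an 'xx seconds' requirement can become a concrete pass/fail check on the test plan, assuming Python with the requests library and a hypothetical 3-second threshold. Server response time is only a crude proxy for full page load, so real performance testing would use browser-based tooling, but it shows the idea.

```python
import requests

# Hypothetical example values - the real URL and threshold come from
# the business requirement ("page load within xx seconds").
URL = "https://www.example.com/"
THRESHOLD_SECONDS = 3.0

response = requests.get(URL, timeout=THRESHOLD_SECONDS * 2)
elapsed = response.elapsed.total_seconds()  # server response time, a crude proxy for page load

print(f"Fetched {URL} in {elapsed:.2f}s (threshold {THRESHOLD_SECONDS}s)")
print("PASS" if response.ok and elapsed <= THRESHOLD_SECONDS else "FAIL")
```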
You may also have 'must be easy to navigate and find information'. Therefore, some usability testing needs to be carried out in order to verify this statement.
The specification will set out the functionality requirements for the project, so these can be used to work out each item of functionality that needs to be tested. If you list out all the main bits of functionality then this can also start to form your test plan.
There will be other areas, such as which web browsers need to be supported, which mobile devices the site should work on (whether that is the main site, a responsive design or a dedicated mobile site), what web standards are being followed and what accessibility level you are aiming for.
All these items can go onto your list.
You will also have items that every web project should have, which are probably agreed across your organisation: items such as a 404 page (with the agreed content on it), a terms of use page, a privacy policy page, a cookie notification panel, a copyright statement on each page, contact details on each page, a returns policy, etc.
Whatever those standard items are, they should go onto the list.
You may also be responsible for the SEO of the website or need to hand over part of the project to an SEO or inbound marketing person: things like making sure Google Analytics is installed, that there is an XML sitemap in place and that title tags and meta description tags are populated.
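To make that part of the handover less manual, a short script can sweep the obvious items. Here is a minimal sketch, assuming Python with requests and BeautifulSoup (from the beautifulsoup4 package) and a hypothetical site URL; a real check would cover every page and whichever analytics snippet your project actually uses.

```python
import requests
from bs4 import BeautifulSoup

SITE = "https://www.example.com"  # hypothetical site URL

page = requests.get(SITE, timeout=10)
soup = BeautifulSoup(page.text, "html.parser")
meta_desc = soup.find("meta", attrs={"name": "description"})

checks = {
    "title tag populated": bool(soup.title and soup.title.get_text(strip=True)),
    "meta description populated": bool(meta_desc and meta_desc.get("content", "").strip()),
    # Crude check for a Google Analytics / Tag Manager snippet in the page source.
    "analytics snippet present": "google-analytics.com" in page.text or "googletagmanager.com" in page.text,
    # The XML sitemap should exist and respond with HTTP 200.
    "XML sitemap in place": requests.get(f"{SITE}/sitemap.xml", timeout=10).status_code == 200,
}

for item, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}: {item}")
```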
You should have a fairly long list now and it is best to organise it a bit before you start putting estimates against each area.
I tend to group all the functionality items together, then browser compatibility, mobile device compatibility, web standards and accessibility, performance (if required), load testing (if required) and SEO/Analytics.
You should then be able to put time estimates against each group. You can go into more detail if you need to: perhaps the bigger functionality items get time estimates of their own, and I tend to allow more time for testing in older web browsers, as you usually find more bugs there. Then, using your normal project management estimation skills and factoring in contingency time and risk, you should arrive at a total amount of time needed.
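As an illustration of that roll-up, here is a minimal sketch with made-up hours and a hypothetical 20% contingency; the groups mirror the list above and the numbers are purely placeholders for your own estimates.

```python
# Hypothetical per-group estimates in hours - substitute your own figures.
estimates = {
    "functionality": 24,
    "browser compatibility": 12,   # weighted towards older browsers
    "mobile device compatibility": 8,
    "web standards & accessibility": 6,
    "performance": 4,
    "SEO / analytics": 3,
}

CONTINGENCY = 0.20  # hypothetical 20% for risk and the unexpected

base_total = sum(estimates.values())
total_with_contingency = base_total * (1 + CONTINGENCY)

for group, hours in estimates.items():
    print(f"{group:32s} {hours:5.1f}h")
print(f"{'Base total':32s} {base_total:5.1f}h")
print(f"{'Total incl. contingency':32s} {total_with_contingency:5.1f}h")
```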
Unfortunately, we're not finished yet. Testing is often carried out in cycles: you carry out your test plan once, hopefully all the way to the end (there might be some blockers that prevent you finishing the testing), whilst raising a number of bugs.
So the estimated figure you arrived at above is for the first cycle all the way through.
You will then need to estimate some time for repeated cycles, to finish any testing that couldn't be done first time around and to retest fixed items.
This estimate is tricky because it is based on how many bugs you found in the first cycle and how much testing you couldn't complete due to blockers. Generally, whether you can estimate a relatively small and precise amount of time or need to set aside a larger amount will come down to your confidence in the project and in the team you have around you, that the goalposts aren't going to move, that there will be no scope creep, all those things.
And obviously you need to allow time for the developers to fix the bugs as well as time to retest the bug fixes.
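To make that concrete, here is a minimal sketch of the arithmetic, using entirely hypothetical numbers for bugs found, fix time and retest time; the point is simply that the second cycle's size follows from the first cycle's results and your confidence in them.

```python
# Hypothetical first-cycle results and per-bug times - adjust to your project.
bugs_found = 40
blocked_testing_hours = 6          # testing that could not be completed first time around
avg_fix_hours_per_bug = 0.75       # developer time per bug
avg_retest_hours_per_bug = 0.25    # tester time to verify each fix
confidence_buffer = 1.3            # >1.0 when confidence is low (scope creep, moving goalposts)

dev_fix_time = bugs_found * avg_fix_hours_per_bug
retest_time = bugs_found * avg_retest_hours_per_bug + blocked_testing_hours

print(f"Developer fix time:      {dev_fix_time:.1f}h")
print(f"Second-cycle test time:  {retest_time * confidence_buffer:.1f}h")
```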
Finally, there is regression testing. That is, testing the website or application again once all the bug fixes have been done to make sure those bug fixes have not broken anything else.
This could follow the original estimate, as you are testing the whole project again. However, invariably there won't be enough time for that and so a regression cycle is often a subset of the original test plan.
Again, it comes down to confidence and the type of project. If you are building a mission critical application then you will do a full regression cycle, probably more than one. For smaller projects you will not need to carry out a full regression cycle.
So now you probably have a much larger amount of time required to test the project than you expected. And the budget required is possibly much larger than expected too.
There is always more testing that test analysts would like to do but that there isn't time for.
As with any project, you need to work out what the priorities are and what is less important. So do you really need to test in IE7, or can you drop that browser now that it has less than 1% market share? That might reduce some development time too.
Can you get some of the testing done at an earlier stage? If the developers are building functionality in stages then can the functionality testing follow those stages with a testing phase at the end to carry out all the browser testing and site-wide testing after development has finished?
Or, if you are building a proof of concept or prototype then can some of the business requirements testing (must be easy to navigate, etc.) be carried out at this point? This gives the opportunity to raise any concerns earlier instead of waiting for all the testing to be done near the end.
Is it possible to have multiple testers working on the project at the same time? One tester could carry out browser testing whilst another tests on mobile devices, for instance.
If the budget for the testing required simply isn't there, then determine what budget you do have and work out the priorities, using your estimates to calculate how much testing you can get done for that budget.
You will know that you can't test the entire project to the level of detail it perhaps needs, but this risk can then be documented and flagged to the project stakeholders.
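One way to picture that trade-off is to walk the prioritised estimate list and keep adding testing until the budget runs out. Here is a minimal sketch with hypothetical day rates and estimates; whatever falls below the cut line becomes the documented risk mentioned next.

```python
# Hypothetical figures: a testing budget, a day rate and estimates in days,
# listed in priority order.
budget = 4000.0
day_rate = 350.0
prioritised_estimates = [
    ("functionality (priority areas)", 5.0),
    ("browser compatibility", 3.0),
    ("mobile device compatibility", 2.0),
    ("regression cycle", 2.5),
    ("performance", 1.0),
]

days_affordable = budget / day_rate
funded, unfunded = [], []
for item, days in prioritised_estimates:
    # Strict priority order: once the budget runs out, everything below the line is unfunded.
    if not unfunded and days <= days_affordable:
        funded.append(item)
        days_affordable -= days
    else:
        unfunded.append(item)

print("Testing within budget:", ", ".join(funded))
print("Documented as risk:   ", ", ".join(unfunded))
```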
As with estimating web projects in general, you will probably not get it right first time but will learn from each project. I certainly learned a lot from managing each web project and built those learnings into my next project. The difficulties I faced became the inspiration and ambition to start WebDepend just over 3 years ago.
Good luck, and if this approach helps you, or perhaps doesn't float your boat, or you have any stories to tell, then please let me know in the comments.