testing - Encouraging management to scrap manual tests and do things the proper way -


I am working on a project that is quite complex in terms of size (it is to become a web app). The first problem is that no one is interested in tools that could solve the problems surrounding the project (lack of time, no adjustments to the timetable in response to ever-changing requirements). Keep in mind that such tools are not expensive (we are not talking millions of dollars) and do not require a lot of configuration, even though this project could really use them, e.g. build-automation tools. Anyway, this means that testing is done manually, because documentation is a deliverable: the actual test design, execution, and sign-off are all written up by hand, whether by us developers or by whoever documents the test questions that come to mind. The site is large and complex (not Facebook-scale, but still), yet management, despite my warnings, insists that scripted manual testing is good enough; I do not believe a high-quality product can come out of that.

How can I explain this to the relevant people (in terms they will understand) to encourage automated testing? I know that on Windows the screen resolution can be changed from the command line with a third-party tool, so even that could be part of an automated build. Instead, I currently have to work through browsers, screen resolutions, and window sizes manually, in all their permutations. Apart from this, where do recorded tests fall down? Do they break when windows are resized? The bigger problem is that I am doing the testing under supervision while the PC sits idle, when doing all the work is what the PC is for. And given the lack of resources, nothing prevents using one "god box": it is used for development during the day and then for testing by me. When the box is free overnight, it would be better to let it run the automated tests.
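The browser-by-resolution matrix described above can be sketched as a nightly job plan. This is a minimal sketch: `nircmd setdisplay` is a real third-party Windows command-line utility for changing resolution, but the browser list, resolution list, and the `run_ui_tests.py` runner script are hypothetical placeholders for your own suite.

```python
import itertools

# Hypothetical coverage matrix; substitute your own targets.
BROWSERS = ["firefox", "chrome", "edge"]
RESOLUTIONS = [(1280, 720), (1920, 1080), (2560, 1440)]

def build_nightly_plan(browsers, resolutions):
    """Return the list of shell commands a nightly job would execute.

    For each resolution, switch the display with the third-party
    NirCmd tool, then run the (hypothetical) UI test suite once per
    browser at that resolution.
    """
    plan = []
    for (width, height), browser in itertools.product(resolutions, browsers):
        plan.append(f"nircmd setdisplay {width} {height} 32")
        plan.append(f"python run_ui_tests.py --browser {browser}")
    return plan

plan = build_nightly_plan(BROWSERS, RESOLUTIONS)
print(len(plan))  # 3 resolutions x 3 browsers x 2 commands each
```

A scheduled task (Windows Task Scheduler, or cron on other systems) could then execute this plan on the shared box overnight, when it is not needed for development.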

Thanks

Talking about money usually gets management's attention. The best way to do this is to come with some concrete proposals:

  • Estimate how long your current manual test cycle takes.
  • Gather a list of important bugs that customers have hit, ideally with an estimate of their impact cost (a bug is always more expensive to fix after release than before), but even just one or two especially bad bugs will do. Your manual testing did not catch these, and customers were disrupted, so this is a good way to show that manual testing alone is insufficient.
  • Propose a pilot project in which you estimate the cost of automating the tests for one specific area of the product. A pilot has the advantage of limited scope, which makes it easy to estimate. Then compare the ongoing cost of running the automation against testing that area manually on every release; after a few releases, the automation should break even against the tool cost plus the test-development cost. Choose the area to automate with care: try to avoid things like a complex UI that can change significantly between releases and would therefore require a lot of time spent updating the automated tests.
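The break-even argument in the last point can be put into a simple formula to show management. This is a minimal sketch; all the cost figures are hypothetical placeholders, to be replaced with your own estimates (in hours or dollars, as long as the units are consistent).

```python
def breakeven_releases(manual_cost_per_release,
                       automation_build_cost,
                       automation_run_cost_per_release):
    """Smallest number of releases after which automation is no more
    expensive than manual testing, or None if it never pays off."""
    saving_per_release = manual_cost_per_release - automation_run_cost_per_release
    if saving_per_release <= 0:
        return None  # automation costs as much per release as manual work
    # Smallest n with n * saving >= build cost (ceiling division).
    return -(-automation_build_cost // saving_per_release)

# Example figures: 40 h of manual testing per release, 120 h to build
# the pilot automation, 4 h per release to run and maintain it.
print(breakeven_releases(40, 120, 4))  # pays off after 4 releases
```

The function returning `None` captures the caveat about volatile areas like a complex UI: if maintaining the automated tests eats the whole saving, that area is the wrong choice for the pilot.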
