Recently I wrote about the QaTraq test management tool, and hinted that it could be used in a lightweight manner suitable for agile software development environments with very short release schedules.
The approach is to use only the features of QaTraq that are essential in a rapid release environment, and to drop several assumptions the tool makes by default.
One of the assumptions made in QaTraq is that a new test plan will be created for each release. For a live web site with releases every day, this is not a good assumption. In the time it would take to copy and customize a test plan for each daily release, there would be little time to test before the release went out the door.
An alternative is to use the "test plan" of QaTraq as a living repository of all the test cases appropriate to a particular application or feature area. When test cases need to be rerun, they are run in the same test plan.
In this approach, new test results overwrite old results, so the results database becomes a snapshot of only the most recent result for each test case rather than all results. In my experience, though, test results from past releases are almost never used, even in more traditional software development environments.
Another assumption in QaTraq is that Products are the major categories used for reporting purposes. But if test plans are used as living repositories of test cases rather than being associated with a specific release, Products become almost unnecessary.
Products cannot be completely ignored in QaTraq because test cases and test scripts in QaTraq must be associated with a Product, but there is nothing that requires that more than one Product be used. By defining only a single Product with a single Version in QaTraq, all new test cases will be automatically assigned to that Product and Version with no extra work required.
The built-in reports of QaTraq are based around Product, so it will be necessary to create your own reports to track test results by test plan, design, and script. This is straightforward using PHP, which must be installed to run QaTraq anyway. To build queries for your custom reports, phpMyAdmin is a very useful interface to MySQL. I recommend installing it on any server running QaTraq.
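As a sketch of what such a custom report might look like, here is the shape of a pass/fail query grouped by plan and script. The `test_results` table and its columns are hypothetical stand-ins, not the real QaTraq schema (inspect that with phpMyAdmin first), and SQLite stands in for MySQL so the example is self-contained:

```python
import sqlite3

# Hypothetical stand-in for the QaTraq results tables -- the real
# schema may differ, so verify table and column names before adapting.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test_results (
        plan_name   TEXT,
        script_name TEXT,
        status      TEXT   -- 'pass' or 'fail'
    );
    INSERT INTO test_results VALUES
        ('Checkout', 'Login',   'pass'),
        ('Checkout', 'Login',   'fail'),
        ('Checkout', 'Payment', 'pass');
""")

def results_by_script(conn):
    """Count passes and fails per plan/script -- the report shape
    that QaTraq's built-in Product reports don't provide."""
    return conn.execute("""
        SELECT plan_name, script_name,
               SUM(status = 'pass') AS passed,
               SUM(status = 'fail') AS failed
        FROM test_results
        GROUP BY plan_name, script_name
        ORDER BY plan_name, script_name
    """).fetchall()

for row in results_by_script(conn):
    print(row)
```

The same GROUP BY shape works for rollups by test plan or test design; only the grouping columns change.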
The above is the essence of what I call the "QaTraq Lite" approach. The key points are:
* Use Test Plans as living repositories of test cases and results rather than for a particular release. Add tests when you need them, rerun tests when you need to.
* Create a single Product with a single Version. All test scripts and cases will be associated with this Product automatically.
* Report test results by test plan, test design, and test script instead of by Product. A test case shows up in reports wherever you put it. No need to maintain a separate physical hierarchy and reporting hierarchy. They are one and the same.
This approach reduces much of the management overhead of using QaTraq to track your tests, even in an agile environment with daily releases.
Open Testing is my software engineering blog. It is also a concept of testing software in an open and public manner.
Saturday, November 24, 2007
Saturday, November 10, 2007
Mac migration
Today I started converting and migrating files from our iMac G4 to my new PC. Our Mac has been collecting dust for the past couple of weeks since I made the PC my primary home computer.
The first step was to get the PC and Mac to network together. I have a wireless hub with Ethernet, so establishing a basic network connection was straightforward; the two devices could ping each other right away. I tried enabling Windows Sharing and FTP Sharing on the iMac, and neither worked. I put the devices on the same workgroup; also nothing. I found a note suggesting that resetting the user passwords might help, but no go. Finally I enabled Remote Login on the iMac and used WinSCP to connect via SSH. That worked.
Once I was able to move files, I started the process of converting them. Many of the files were in AppleWorks format and had to be converted to a format readable by OpenOffice before I could move them over. I converted all the text files to RTF and spreadsheet files to Excel format. I also had several old AppleWorks database files that I wasn't able to convert. I didn't find a free tool to solve that problem.
Next I had to get my music on the PC. I have a Mac-formatted iPod so tools like iPodCopy and iPod2Computer couldn't recognize the device. I had to copy the music files from the Mac over the network via WinSCP. It took a while.
Once I got my music files on the PC, I installed iTunes and authorized the PC at the iTunes Music Store. Next I had to restore the iPod and reformat it for the Windows version of iTunes.
The reformat wiped my Contacts from the iPod, but a while back I had imported them into the Thunderbird address book on the PC. But how to get them from Thunderbird to the iPod? I found MozPod, a Thunderbird extension that solves this problem. When I first downloaded MozPod, it tried to install itself as a Firefox extension, which failed since it isn't designed for Firefox. Once I got it to install into Thunderbird, it worked perfectly.
Overall the migration took about 14 hours, most of that time spent cleaning up, converting, and copying the last ten years of my family's digital life. I'll find out in the next few days how much of the data actually made it over successfully.
Thursday, November 8, 2007
QaTraq lessons learned
One of the essential tools for a software quality team is a test management tool. Test management is simply the process of storing test cases in a database, and organizing them in a way that makes it easy to plan, coordinate, and measure the testing activity.
I've worked with many test management tools in my career in software quality, from home-grown tools to commercial ones. But one of the most useful I've worked with is QaTraq, which I used at my last employer, Haydrian Corporation.
QaTraq was a good fit at Haydrian for several reasons. We had a small team of 3-4 developers, and we needed to coordinate and measure our testing. We didn't want to spend a lot of money on commercial test tools, so we preferred an open source product with support behind it. Our product was an appliance with releases several times a year, and smaller patch releases more frequently.
QaTraq is built around the concept of test plans. Test plans contain test designs, which in turn contain test scripts. Each level is required. This hierarchy is deep enough to accommodate most needs, but the fact that you have to fill in every level of the hierarchy creates extra work. For projects that have named release versions (e.g. version 2.2.2) and which release no more often than once a week, the overhead of copying or creating a test plan for each release is manageable. For projects that release every day, the overhead may be too difficult to manage.
One practice I can recommend is using descriptive and distinct names for test plans and other entities. When your test cases get into the hundreds or thousands, test case names like INSTALL_00001 aren't going to be that meaningful. Names should also be short so they will fit into the QaTraq drop down menus.
Another consideration with QaTraq is the fact that once you execute a test case, you can't delete it because it is tied to a test result. You can only remove it from a script. This makes it necessary to have a strategy for marking test cases as deprecated so they don't get added to future test plans.
One approach is to maintain a "master" test plan that is a superset of all known good test cases. When deprecating a test case, remove it from the test script in both the currently executing test plan and the master test plan. The Templates and Sets feature in QaTraq Pro provides this capability in a more built-in way.
The reports built into QaTraq are useful but are tightly tied to the concept of a Product and its Component, so it is important to think about this hierarchy as well and not make it too complicated. It may be necessary to write custom queries and reports external to QaTraq to get just the information you want.
Automated test cases cannot be run from QaTraq, but manual instructions for running an automated test case can be stored in QaTraq and the results can be entered manually. If the automated tests are constantly changing, it can take quite a lot of effort to keep QaTraq synchronized, but marking automated tests as pass or fail manually takes very little time.
Overall, I would recommend QaTraq for a software development organization that has defined releases no more than once a week most of the time. For products that do not have defined releases, or which release daily, as many Internet companies do, it is possible to use QaTraq in a "light process" that doesn't use some of the built-in features. More on that in a later article.
Monday, November 5, 2007
Long silence
I have been silent for a while. The reason is that I was laid off at the end of September. I was sick or interviewing or both for most of October.
Now that I'm working again I've been thinking about what to do with this blog. I started it as a way of getting my name out there, and it seems to have done that. I made a point to mention this blog on my resume. It seems not to have hurt me.
I'm not sure how much time I'll be able to put into promoting the concept of open testing or actually practicing it, but I'm still excited by this idea, so I'm going to try to keep this blog active.
Wednesday, September 26, 2007
Open source resources
I haven't had much time to write lately, but I have been doing a lot of thinking about the Open Testing concept and where I want to take it. More on that soon.
I've added links to open source projects that are actively seeking testers. I'll continue to add links to this area as I run across them.
I've also added links for several Open Source directories. One of the most interesting I found is Ohloh.
Ohloh is both a user community and a source code crawler and metrics tool. Ohloh users submit open source projects and list which ones they use, and Ohloh collects code and developer metrics automatically. For example, here is the listing for the Linux kernel and Linus Torvalds's contributions to it. Wow.
The combination of community and code crawler that Ohloh offers is a powerful one that would also be useful for creating a community around the idea of open testing. I can easily envision Ohloh or a similar tool being used to track tester contributions to test code and bug reports on open source projects, for example.
I joined Ohloh and added a listing for Watij, an open source testing tool I use. I'll add or contribute to listings for other open source testing tools I use over the next few days.
Sunday, August 26, 2007
Address Book Incompatible
My first tiptoes into open testing began this weekend when I brought home a used desktop PC and began setting it up. It came with Windows XP pre-installed so I didn't have to deal with setting up the OS, but everything else I'm doing myself.
My goal is to have all the software on the box other than Windows and the occasional game to be free and open source. I considered installing Ubuntu but since this box will be doing double duty as a family PC, I decided against that.
One of the first applications I installed was Mozilla Thunderbird, the email client. I gave up using email clients several years ago and have been using webmail clients since then because of incompatibility issues, especially for address books. Migrating from one email client to another was always a major hassle. I was curious how Thunderbird would approach this issue.
Setting up Thunderbird to access my GMail account was straightforward. Importing my address book into Thunderbird was another matter.
Thunderbird appears to support importing from a variety of address book formats, including LDIF. Unfortunately, my old address book was stored in Palm Desktop 4.0.1, which only supports exporting to CSV or text files and a custom (and therefore useless) format called Address Archive.
I exported to CSV format, but since the data was not self-describing, I had to work with Thunderbird's import wizard to tell it which data belonged to which fields. After a lot of work, I got close, but it was clear that much of the data just wasn't going to map to the right fields. I'm going to have to do a lot of manual editing to clean it up.
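One way to ease that field mapping is to make the exported CSV self-describing by prepending a header row that names each column. A minimal sketch, assuming a particular Palm Desktop column order (verify the order against your own export before trusting it):

```python
import csv
import io

# Example column order for a Palm Desktop CSV export. This is an
# assumption -- check your own export and adjust before using.
PALM_COLUMNS = ["Last Name", "First Name", "Work Phone",
                "Home Phone", "E-mail"]

def add_header(raw_csv, columns=PALM_COLUMNS):
    """Prepend a header row naming each field, so an import wizard
    (or a later script) no longer has to guess column meanings."""
    rows = list(csv.reader(io.StringIO(raw_csv)))
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(columns)   # the self-describing header row
    writer.writerows(rows)
    return out.getvalue()

sample = "Doe,Jane,555-0100,555-0199,jane@example.com\r\n"
print(add_header(sample))
```

With a header in place, the mapping only has to be worked out once instead of re-guessed on every import attempt.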
I don't blame Thunderbird; the real issue is the lack of an open, universal, human-readable, self-describing format for address books and contacts. LDIF may be an open standard, but like anything based on LDAP, it isn't exactly human-readable.
There's been quite a bit of buzz lately about the need for an open standard for relationships in applications like MySpace and Facebook. But the software industry still hasn't solved the more basic problem of how to get software to describe people in a way that every other program can understand. It should be possible for any email client, address book software, or webmail service to import or export entries from any other by at least one direct method. Until we can do that, we shouldn't be talking about open standards for relationships.
Tuesday, August 7, 2007
The ferry to nowhere
While planning a vacation recently, I ran across this interesting Google Maps bug.
The search string "seattle bremerton ferry" selects a point midway between the Washington State Ferry terminals in Seattle and Bremerton. This spot happens to be on Bainbridge Island, which is interesting since the Seattle to Bremerton ferry doesn't actually stop there.
Update: Apparently Google has now fixed this issue. The search now asks you to choose either the Seattle or Bremerton ferry terminals.
An Open Testing Manifesto
Open Testing is a concept I began thinking about seriously after attending the CAST 2007 software testing conference earlier this summer.
I've been working in software quality for almost fifteen years, in testing and development for large and small companies, on embedded software, desktop applications, and web applications, using both agile and traditional methods.
By now I've left behind a vast body of work in software quality: test plans, test cases, test scripts, test results, and defect reports. Unfortunately most of this work is not shareable. Most software test engineers are in the same position.
The reason is simple: most of us have spent our careers working for companies or other entities that consider the testing activity to be proprietary information.
This fact shapes the level of sharing and collaboration possible between software testing professionals. What tends to get shared is information that isn't proprietary.
We share success stories and horror stories. We collect these into articles, presentations, blogs, books, even distinct software testing schools. We debate among ourselves which testing practices, tools, and certifications are best or whether any of them are any good at all.
What is largely missing is the collective body of testing work that could be used to evaluate some of the competing claims, as well as to promote greater collaboration and innovation in software testing.
One possible response to this situation is a practice I call Open Testing.
Open Testing is the practice of testing software in an open and public manner. In Open Testing, test plans, test cases, test metrics, and defect reports are publicly available so that the quality of the software under test and the quality of the testing activity can be assessed openly.
Some of the potential benefits of Open Testing include: providing a collective body of testing work that can be analyzed; providing a way for testers to "get your work out there"; providing better information to users about the quality of software releases and products.
Open Testing is definitely not about sharing your employer’s test cases, test results, or other intellectual property without their permission. If your employer decides to open up their quality to the world, good for them. But many employers aren't going to do that.
Open Testing can still be practiced by testing professionals who can't share the testing work from their day jobs. Simply download a publicly available software release onto your home computer, test it for a few hours a week on your own time, and report any bugs you find. Then blog about it.
Open Source Software is a good choice to start Open Testing. Many open source projects actively encourage testing, and have public online bug databases.
In this blog, I will write about examples of Open Testing in industry, open source projects that are requesting testers, tools that support and enable open testing, testing professionals who are practicing Open Testing, and my own journey into Open Testing.
If you are already doing Open Testing, let me know about it and I would be happy to link to your page or article here.