Sep 20, 2013
 

While developing some generic C++ code in a TDD manner in our project, I stumbled over the problem that I wanted to forbid the use of integral, floating point and pointer types as a certain template argument. Only types with a default constructor made sense in that context. My very first attempt was to use a static_assert like the following:
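The original snippet is not preserved here, but a minimal sketch of the idea could look like this (the class name Foo and the exact traits are illustrative, not the original code):

    #include <string>
    #include <type_traits>

    template <typename T>
    class Foo
    {
        // Reject integral, floating point and pointer types at compile time.
        static_assert(!std::is_arithmetic<T>::value && !std::is_pointer<T>::value,
                      "Foo<T>: integral, floating point and pointer types are not allowed");
    public:
        Foo() {}
    };

    int main()
    {
        Foo<std::string> ok;  // compiles
        // Foo<int> bad;      // would trigger the static_assert
        (void)ok;
        return 0;
    }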

In my unit test file I checked that the instantiation of Foo<int>() actually led to a compile time error. So far so good. I wrapped this sample failure code in #if 0 … #endif and went on.

Later a colleague of mine reviewed the code and brought up two points: a) this is not a real unit test, and b) whoever changes the file containing the definition of Foo<> has no knowledge of the commented-out code in the unit test file. He was right! But how can one ensure on the one hand that something does not compile, and on the other hand guarantee that the unit test fails if the tested code is changed in the future, e.g. during a refactoring? It seems to be a paradox. One could move this failure detection from compile time to runtime, but that would be worse than the static_assert, because the advantage of early feedback to the developer would be gone. So I kept the compile time check.

Continue reading »

Dec 20, 2011
 

What had the deepest impact on me was studying the following books during the last year: Continuous Integration by Paul M. Duvall, Steve Matyas and Andrew Glover, and Continuous Delivery by Jez Humble and David Farley. These books and the presentations at OOP 2011 in Munich pushed me in the direction that I try to describe below.

Before I go into the details of how we use Cucumber, I want to describe our environment a little: In our company we are developing a medical application that can run as a standalone application or in a client/server setup. From the user's perspective, each configuration consists of three UI processes and about fifteen background processes or services. Data persistence is realized with a PostgreSQL database, which is handled through one of the service processes. It is a radiological reviewing workstation that is highly optimized for high volume throughput. The image data for each patient can easily be about 2.5 GB, and the technical challenge is to display any of the next possible radiological images, each of which can easily be 26-50 MB, in less than a second, because our customers are used to diagnosing about 60-120 cases per hour. Depending on the current workflow step, the user has to work with a different UI process.

The application has about 2,000,000 lines of C++ code and currently about 10,000 single requirements, and most of the functional tests are manual, either paper based or steered by our test management system SpiraTest. So it is obvious that we have plenty of work to do. Besides the fact that we have far too many requirements, or more precisely that the granularity of the requirements is too fine, certain parts of the code are hardly testable in an automated way. Our transition from the V-model to Scrum two years ago made it dramatically clear that test automation is an important step for us to stay productive and ensure quality.

Continue reading »

Mar 06, 2011
 

Introduction

At the very beginning I should start with some background information: I work in a small software company in a team of 13 developers. We continuously develop software for the medical domain with Qt as the UI framework and Visual C++. Our platform is MS Windows. Until autumn 2009 we were developing according to the V-model.

As the requirements and priorities of our customers changed many times, we decided to switch from the V-model to Scrum. At the same time we changed from developing all features on the trunk of our SVN repository to feature branching.

As recommended in the literature, we updated all feature branches every day and merged the features back after a story was done. And here the pain started. SVN is great SCM repository software, but it is not made for frequent merging. So we started to look for alternatives. We had a look at Git, Hg, BitKeeper and Perforce. With four colleagues we made a list of requirements and use cases and started an evaluation phase of several weeks. In the end, Git and Hg were the most preferred ones. During this time I searched the net for pros and cons of both of them. From all the blogs, articles and our own experience I came to the conclusion that Git and Hg offer nearly the same functionality. In the end we decided to go for Hg because of the better support through TortoiseHg, the Windows Explorer integration and the less confusing interface.

In the next chapter I try to describe how we converted the SVN repository.

SVN Preparation

Our source repository at that time had about 90,000 revisions, its size was about 7 GB, and the source code comprised about 500,000 lines of code.

The Mercurial book recommends starting the migration from a repository clone. I can strongly recommend following that advice. As I am more used to Windows, I started to create the mirror under Windows XP. But Subversion does not work stably under Windows XP: the socket layer crashed several times during the cloning process and I had to reboot the system. So I tried to perform the process on a Windows Server 2003 installation. The creation of the mirror now went well, but the SVN dump needed later did not go through. So I installed Ubuntu in a virtual machine and redid the complete process.

So, if Subversion is not already available in the Linux installation, install it with:
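On a Debian based distribution this would be something like (the exact command from the original setup is not preserved):

    sudo apt-get install subversion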

Now create an empty repository:
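For example, with the mirror location as a placeholder path:

    svnadmin create /path/to/svn-mirror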

Now initialize the mirror. Be aware that if one does not restrict the initialization to a certain repository path, one gets the complete repository. In our case the complete repository had a size of about 60 GB, so that would need considerable time and space.
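A sketch of the initialization, with placeholder URLs and paths, not the originals. Note that svnsync requires a pre-revprop-change hook on the mirror that allows revision property changes:

    # allow svnsync to write revision properties into the mirror
    echo '#!/bin/sh' > /path/to/svn-mirror/hooks/pre-revprop-change
    chmod +x /path/to/svn-mirror/hooks/pre-revprop-change

    # initialize the mirror, restricted to the path of our project
    svnsync init file:///path/to/svn-mirror https://svn.example.com/repos/our-project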

The first time, one is asked to accept the certificate, and of course one has to enter one's password. Use the correct apostrophe: not a ', but a `!

Now we initiate the sync.
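Again with the placeholder path for the mirror:

    svnsync sync file:///path/to/svn-mirror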

This command can be repeated as often as one likes. Each run checks the source repository for updates and appends them; the history and its revision numbers are preserved. It is not known how copy operations from or to paths outside the given path are handled, if any are used in the repository.

Migration from Subversion to Mercurial

Over the years we had stored many binary files in the repository, which caused it to grow to a size of about 7 GB. So I decided to transfer only the complete file history of the last three major releases into Hg and to filter, with svndump, all binaries that are not needed any more from the repository. That way we would still be able to build all versions since the last major release.

The following assumptions are made:

  • The Subversion mirror repository (referred to below by the placeholder file:///path/to/svn-mirror)
  • The location for the new Mercurial repository (referred to below as /path/to/hg-repo)

Patching the Hg conversion module

The Subversion conversion tool must be patched because of deficiencies in the code. Be aware that as soon as one updates the Mercurial packages, one has to re-apply the changes! The file in question is the Subversion module of Mercurial's convert extension.

Line 353
This line must contain all branches that shall be converted. All others are excluded.
Lines 354-363
These lines must be indented by 3 spaces.
Lines 366-368
These lines must be commented out, otherwise the conversion will abort.

Execute the actual conversion command in a shell:
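The exact command is not reproduced here; with the files described below it could look roughly like this (paths and the start revision are placeholders; older Mercurial versions call the author mapping option --authors instead of --authormap):

    hg convert \
        --filemap svn2hg.txt \
        --branchmap branchmap.txt \
        --splicemap splicemap.txt \
        --authormap user.txt \
        --config convert.svn.startrev={start revision} \
        file:///path/to/svn-mirror /path/to/hg-repo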

Here is a description of the parts of the command in detail:

svn2hg.txt
This file contains all paths that shall be excluded during the conversion because they contain binary files, obsolete branches, and other content that is not needed any more.
branchmap.txt
This file helps to rename branches to different, better names.
splicemap.txt
This file contains a description of additional parent and child connections. (Currently this is not working correctly, or I was not able to get it to work.)
{start revision}
This is the first revision that is taken from the Subversion repository.
user.txt
A file that contains a mapping from the Subversion user names to real names that are used in Hg.
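For illustration, the formats of these files are roughly the following (all contents are hypothetical examples, not our real data):

    # svn2hg.txt (a convert filemap): one directive per line
    exclude old-releases
    exclude tools/big-binaries

    # branchmap.txt: original branch name, then the new name
    releases/3.0 release-3.0

    # user.txt: Subversion user name = real name used in Hg
    jdoe = John Doe <john.doe@example.com>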

Finalizing

Now it is strongly recommended to do a binary comparison of the head of the Mercurial repository against the latest revision of the Subversion repository to ensure that no conversion errors have happened. Be aware that the DVCS does not support empty directories and does not support file property macros (keyword expansion) as Subversion does.
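One possible way to do this comparison, as a sketch with placeholder paths: export a clean copy of each head and diff them recursively.

    # clean copy of the Subversion head (trunk) and of the Hg default branch
    svn export file:///path/to/svn-mirror/trunk /tmp/svn-head
    hg archive -R /path/to/hg-repo -r default /tmp/hg-head

    # differences should only be the expected ones: empty directories,
    # expanded keywords, and the .hg_archival.txt metadata file
    diff -r /tmp/svn-head /tmp/hg-head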

Experience

Before we used Hg in production, we held training over several sessions for all users about the differences between SVN and Hg. Then we made a clean cut from SVN and started using Hg between two Scrum sprints, and no major issues came up after that. Only once did we have a problem where a user used the merge functionality the way it works in SVN: he just wanted to pick a single changeset and transfer it to a different branch. So in the end it was a training issue. Today everybody is really satisfied with Hg.

In the next article I will describe why we use Hudson / Jenkins as our build system and how we got there.

Finally, I want to apologize: English is not my first language and this is my very first blog post. So any kind of criticism or advice for improvement is welcome.