77 unrequited votes
When there were no other tasks, I gathered request statistics on GridBagLayout class changes. I considered both defects and RFEs.
By the way, it seems we have to abandon that division and treat each defect as an RFE now.
I was surprised by the fact that only 14 CRs request one change or another in the GridBagLayout class behavior. Believe me, we have more defects in this class than are reflected in our defect database. Wow, it seems we should not take the joke about changing the name of that class by replacing the first "a" with a "u" seriously!
Some known defects
have already gone through a standard procedure on a
We have already accepted the fix for 4222758 and targeted it for Dolphin. Thanks, LeoUser! :)
But the result of our collaboration with developers still cannot be considered very successful.
*A couple of words about compatibility*
The only reason why a patch fixing a critical issue cannot be integrated into the JDK master workspace is broken compatibility.
As usual, a patch developed by an engineer passes reviews, approvals, and comprehensive testing. This process happens in several stages and is carried out by different teams.
All these efforts serve only one goal: (surprise, surprise!) programs that worked with previous versions of the JDK must keep working with each new version.
And we are really trying to follow that rule :)
So, when a user submits a patch via the
Ideally, we should find 100% of possible issues at the testing stage and not let them slip into a release. In the end, this is what we expect when expanding the testsuite with a new testcase for each bug fix.
Thus, I have mentioned the key term of this article: a testsuite.
The more regression tests there are in our testsuite, the stronger the chance that we will not miss an issue just before integration.
A test is meant to verify programmatically that the documentation is in accordance with the implementation. Sometimes it just verifies reasonable behavior, since the documentation for some aspect is absent.
Usually tests are written based on existing documentation assertions, in particular from the JavaDoc.
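For instance, one documented assertion in the GridBagLayout JavaDoc is that getConstraints() returns a copy of the stored GridBagConstraints object. A minimal sketch of a test for it might look like this (the class name is mine; a lightweight java.awt.Container stands in for a component so no display is needed, and I'm assuming, as current implementations do, that setConstraints() also stores a private copy):

```java
import java.awt.Container;
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;

public class GetConstraintsCopyTest {
    static boolean runTest() {
        GridBagLayout layout = new GridBagLayout();
        Container comp = new Container(); // lightweight: no display needed

        GridBagConstraints gbc = new GridBagConstraints();
        gbc.gridx = 1;
        layout.setConstraints(comp, gbc);

        // JavaDoc assertion: getConstraints() returns a copy of the actual
        // GridBagConstraints object, so mutating the original afterwards
        // must not leak into the layout.
        gbc.gridx = 5;
        GridBagConstraints stored = layout.getConstraints(comp);
        return stored != gbc && stored.gridx == 1;
    }

    public static void main(String[] args) {
        System.out.println(runTest() ? "Test passed" : "Test failed");
    }
}
```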
But in some cases the documentation may not be considered good or complete. The GridBagLayout class is such a case. For historical reasons, the implementation includes many more nuances than the documentation describes.
Some details may simply be missing.
For now, an ordinary developer has to rely on the current JDK behavior and design a GUI according to his/her current experience.
Therefore, migrating to the next (or just to another) JDK version may cause some unpleasant effects.
A user falls into confusion and asks us to fix it. Or just fixes the problem himself in his particular JDK. :)
Thus, we have a poorly documented (let's call it so :) class and, as a consequence, a lack of test coverage.
For example, a recent attempt to integrate a fix for
led us to "unpatching" that fix after it was integrated into the master workspace, because the result really impacted other applications.
We haven't found a solution that satisfies both sides.
That defect is still open and awaiting a contribution! :)
We know that many people use this layout manager. As a library developer, I can't spend much time writing industrial applications that use the GridBagLayout class.
Perhaps you develop such applications. That is why I came to the following idea.
If we could create a reliable testsuite for the GridBagLayout class, this testsuite might compensate for the lack of documentation.
Perhaps this testsuite will greatly help us make changes to GridBagLayout in the future. It will also noticeably decrease the risk of breaking something with a fix. We will verify the patch before the users (you) can do the same with a released JDK. :)
I don't know yet how we will interact while creating that GridBagLayout testsuite. Most likely I'll post a new defect to the peabody forum (try this one: 6450943). Every interested person will then be able to post a test. We also need some rules for how to design them.
I already have the following rules in mind:
- the test must be as short as possible
- it should be written with AWT, not Swing. I'm almost sure that any problem with laying out Swing components can also be reproduced with AWT ones. Actually, I love Swing, but it's too hard to look into both AWT and Swing code :)
But if you feel that the problem is Swing-specific, feel free to post that test anyway.
- it must print something like "Test passed" or "Test failed" when it finishes.
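A hypothetical test following these rules might be shaped like this sketch: short, pure AWT, printing "Test passed"/"Test failed" at the end. It checks the documented behavior that a single cell with non-zero weights and fill=BOTH occupies the whole container (the class name is mine; a lightweight java.awt.Container is used so the test runs without a display):

```java
import java.awt.Container;
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import java.awt.Rectangle;

public class FillBothTest {
    static boolean runTest() {
        Container parent = new Container(); // lightweight: no display needed
        parent.setLayout(new GridBagLayout());

        GridBagConstraints gbc = new GridBagConstraints();
        gbc.fill = GridBagConstraints.BOTH; // grow in both directions
        gbc.weightx = 1.0;                  // take all extra horizontal space
        gbc.weighty = 1.0;                  // take all extra vertical space

        Container child = new Container();
        parent.add(child, gbc);

        parent.setSize(300, 200);
        parent.doLayout(); // runs GridBagLayout.layoutContainer

        // The single weighted, filled cell should cover the whole parent
        // (a plain Container has empty insets).
        return child.getBounds().equals(new Rectangle(0, 0, 300, 200));
    }

    public static void main(String[] args) {
        System.out.println(runTest() ? "Test passed" : "Test failed");
    }
}
```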
What's your interest in this venture?
A working test included in the testsuite guarantees that no future fix will break the test's behavior, which in turn reflects your application! Well, I just realized there is actually no 100% guarantee: to be honest, we may change something and intentionally break some regression test, although this happens rarely. Nevertheless, life is an unfair thing in itself, and
I should warn you about it.
For our part, we are interested in all applications continuing to work as planned.
Moreover, I like being able to review peabody fixes faster. :)
At the very beginning I said that "the result of our collaboration with developers cannot be considered very successful."
Should I call the result a collapse?
I think those attempts, even if they were not very successful, gave us invaluable experience, and we can apply it right now.