Thoughts about Java, Ruby, agile and other everyday developer stuff.

Friday, July 27, 2007


Suppose you run into memory problems. IMHO there are two JVM options which can help you. One of them is -XX:+HeapDumpOnOutOfMemoryError.

This option dumps the heap state when an OutOfMemoryError occurs.

The second option is -XX:+HeapDumpOnCtrlBreak. This option dumps the heap when you hit Ctrl+Break.
Also, many tools can produce a dump on demand.
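As a sketch of the on-demand variant, the HotSpot JVM also exposes a diagnostic MBean that can write a dump programmatically. The class name and dump path below are my own inventions for illustration; dumpHeap refuses to overwrite an existing file, so a fresh name is generated each run.

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

// Minimal sketch: trigger a heap dump on demand from inside the JVM.
public class HeapDumpOnDemand {

    // Writes a dump to the given path and returns its size in bytes,
    // or -1 if the dump could not be written.
    public static long dumpTo(String path) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    server, "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            bean.dumpHeap(path, true); // true = live objects only (forces a full GC first)
            return new File(path).length();
        } catch (Exception e) {
            return -1L;
        }
    }

    public static void main(String[] args) {
        String path = System.getProperty("java.io.tmpdir") + File.separator
                + "heap-" + System.currentTimeMillis() + ".hprof";
        System.out.println("dump written, bytes: " + dumpTo(path));
    }
}
```

Run the same application with -XX:+HeapDumpOnOutOfMemoryError instead and you get the dump automatically at the moment of the OutOfMemoryError.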

At production level the heap has about 100,000,000 live objects, so you must take your time to look through the dump file. It's not easy, but it's worth it.

In a dump file you can find information about classes, fields, primitive values and references. The most important thing is the information about objects that the JVM considers reachable: a heap dump contains a snapshot of the objects that are alive at dump time. You must also know that a full GC is triggered before the heap dump is written.

When searching for problems you should focus on:
  • Inefficient data structures
  • Caches
  • PermGen (class loader leaks)
  • Model/Proxy-driven class generation
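For the "Caches" point, here is a minimal sketch of the classic leak shape (the class and method names are made up): a static, unbounded map keeps every entry strongly reachable, so the collector can never reclaim them and the live-object count in your dump just keeps growing.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical example of an unbounded cache: nothing is ever evicted,
// so every cached value stays strongly reachable from the static map.
public class LeakyCache {
    private static final Map<Integer, byte[]> CACHE = new HashMap<Integer, byte[]>();

    public static byte[] lookup(int key) {
        byte[] value = CACHE.get(key);
        if (value == null) {
            value = new byte[1024];  // pretend this is an expensive result
            CACHE.put(key, value);   // cached forever: this is the leak
        }
        return value;
    }

    public static int size() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10000; i++) {
            lookup(i); // every distinct key adds an entry that never goes away
        }
        System.out.println("cached entries: " + size());
    }
}
```

In a heap dump such a cache shows up as one HashMap dominating the retained size; bounding it (LRU eviction, soft/weak references) fixes the leak.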


Thursday, July 12, 2007


Yep, today I was thinking about estimation. What is an estimate, how do we know that a given estimate is OK, and of course how do we estimate?

Estimation is, in my opinion, believing in something, so naturally it strongly depends on personal feelings, knowledge and many other attributes. So why do we estimate? The answer is “because we have to”. Estimation makes managers happy because they know something, and we must somehow point at a date in the calendar. The questions are: when will we finish, how much effort is required, and what is the total cost?

The easy part is total cost: it consists of hardware cost (such as computers, servers, power, etc.), training cost (trainings, travel, etc.) and the hard part, effort cost. The key point is to estimate “effort”.

How can we do it? The first thing to consider is the term productivity. There are people who do a task in an hour, while another man does the same task in a day. Also, the same man who does a task in an hour in one team does that task in two days in another team. Don’t focus on people, focus on teams: measure the productivity of your team, not of its members. The simplest method is to count units of work and divide them by person-hours. However, as always, there are many different solutions to this problem, each with different attributes.

How big is the software we just built? Answer:
1. Size: Count lines of code, kilobytes, etc
2. Function: amount of functionality (function points, object points)

The first one has some drawbacks; for example, 5 lines of Ruby are much more powerful than 5 lines of Java. In my opinion, using function points is the best way to measure. Of course, functions differ from each other, so we have to introduce some complexity weights (user interaction, external interfaces, input, output, entity use). One point to mention here is that the difficulty of a functionality depends on developer skills (there is a solution for this, known as miracle estimation).
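As a sketch of the weighting idea, size becomes a weighted sum of counted functions. The class name and every weight below are invented for illustration; real function point analysis uses standardized weight tables.

```java
// Hypothetical function point calculation: counts of each function kind
// multiplied by made-up complexity weights. Real FP analysis uses
// standardized weight tables; these numbers are for illustration only.
public class FunctionPoints {

    public static int count(int inputs, int outputs, int userInteractions,
                            int externalInterfaces, int entities) {
        return inputs * 4             // assumed weight for inputs
             + outputs * 5            // assumed weight for outputs
             + userInteractions * 4   // assumed weight for user interaction
             + externalInterfaces * 7 // assumed weight for external interfaces
             + entities * 10;         // assumed weight for entity use
    }

    public static void main(String[] args) {
        // e.g. 10 inputs, 8 outputs, 12 interactions, 2 interfaces, 5 entities
        System.out.println("function points: " + count(10, 8, 12, 2, 5));
    }
}
```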

And the last one, “object points”, is somewhere between function points and size counting. We count objects such as user screens, user actions, database tables, reports, modules and so on. Every object kind has an effort (e.g. user screen -> 2 man-days). Object points, as far as I know, are used by the COCOMO II estimation model.

The advantage of object points over function points is that they are easier to estimate from a high-level software specification.
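Following the “user screen -> 2 man-days” idea above, a minimal sketch just multiplies each object count by its assumed effort. The screen figure comes from the example above; the other efforts and the class name are invented, not COCOMO II’s actual values.

```java
// Hypothetical object point estimate: each counted object kind carries an
// assumed effort in man-days (only the screen figure comes from the text).
public class ObjectPointEstimate {

    public static double manDays(int screens, int reports, int tables) {
        return screens * 2.0   // user screen -> 2 man-days (from the example above)
             + reports * 3.0   // assumed: report -> 3 man-days
             + tables  * 1.5;  // assumed: database table -> 1.5 man-days
    }

    public static void main(String[] args) {
        // e.g. 12 screens, 4 reports, 10 tables
        System.out.println("estimated effort: " + manDays(12, 4, 10) + " man-days");
    }
}
```

Multiply the result by your team’s measured productivity factor and you have a first-cut schedule to discuss.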

We’re close to knowing our productivity, but there are many factors which change productivity. Some of them:
• Project size: a larger project needs more communication and involves more people
• Domain experience: people who write web applications won’t be as productive writing standalone applications, people writing financial software won’t be as productive writing medical software, etc.
• Environment: morale, quiet vs. loud, private vs. public area, etc.

OK, we have the productivity of our team. Now we must somehow estimate the effort. There is no single simple way to estimate. Here are some of them:
• Analogy – if we have built something similar, we can assume that the cost is similar
• Algorithm – we have some algorithm (similar to measuring productivity)
• Expert judgment – my favorite: each expert estimates, and after that there is a discussion panel about the estimates. The estimation process iterates until the experts agree
• And the last one is “don’t estimate, just say a price” – it’s good when the effort depends on the client’s budget.

There are two approaches to these methodologies: top-down and bottom-up. Top-down starts at the system level, examining overall functions; by contrast, bottom-up starts at the component level. The advantage of one method is the disadvantage of the other, and vice versa.

You should look at the COCOMO model, which is well documented and in the public domain.

So you have estimated your effort; let me show you a picture.

So in the design phase you can be wrong by 800%, nice, heh?

Now you’re sure that you have estimated the effort, aren’t you? No, it’s not finished; now you should consider some other things. Maybe you should lower your price because of:
• Market opportunity – learning a new technology, code reuse in other projects, a “marriage” with a client that brings future profits
• Position – financial difficulty (better to win the contract), being new to the market (nobody wants to talk with you), future reference (yep, we wrote this software for Google Inc ;) )

Or maybe you should raise the price because of:
• Uncertainty – maybe the estimate was too optimistic, or the requirements may change


Lately I’ve been involved in too many things and my blog comes last. I’ve just passed the Sun SCWCD exam. The strange thing about it is that I use this technology every day, so when I decided to take the exam I thought it would be very easy. Then I started preparing and realized that I didn’t know many features of the Servlet 2.4 and JSP 2.0 technologies.

I think that nowadays it’s rare to use pure JSP; we have so many template languages, helpers from web frameworks and so on. Other features are also rarely used because of frameworks. For example, does everybody know the XML syntax for directives? So it was very refreshing to pass this exam.

