Wednesday, October 25, 2006

YouTube acquired....

I've heard about the YouTube acquisition by Google.
What's noticeable in it?
The fact that Google is starting to spend its cash reserves to shop for internet businesses. This means that the "Google, the innovator" era is likely coming to an end.
There's nothing to complain about: it's a sign of the coming company maturity stage. In this phase the company reaches full maturity and becomes a serious competitor (for MS and IBM).

Thursday, October 19, 2006

1001 Books You Must Read Before You Die

I stumbled upon the now famous list 1001 Books You Must Read Before You Die.
Two issues arose immediately.

1) Oh, gosh, they are all fiction, literature; no essays or technical papers or scientific ones!

2) It's amazing to observe how the Internet can encompass each and every style. Enumeration, in fact, was a strong point of Christian Scholastic philosophy in the Middle Ages.

Internet Explorer 7 is here.

At last, it's here. I downloaded the betas within minutes of their availability, because Microsoft's new browsers are events.
Now that it's officially here, I say "at last!".

At last we can restart a browser war!

In recent years, I've seen Firefox adepts rejoice while IE kept losing market share. Even Opera's passionate users enjoyed needling ordinary IE users.

It looks like browsers are a sort of religious choice. Is there any need to underline how absurd this is?

P.S. I do not like Firefox, but I used Opera a lot before the IE7 beta. This IE version seems really better than its competitors, albeit resource-hungry as usual.

Wednesday, October 18, 2006

ISVs need sales!

Today I suggest you pay a visit to Hal Carrol's site.

It is a valuable resource for techies who start up a mISV. In my experience, I've never seen such an easy and straightforward lesson on how software sales work. Download the PDF book and read it; there is much to learn in it.

Saturday, October 14, 2006

Painless Scheduling? With Poldina you can!

Scheduling is one of the most common activities that take place within a software company. Each of us, whether a developer, a consultant or a manager, has faced it.
Poldina, too, studied the subject deeply and managed to craft a small theory that can be of help.

When the need arises to manage more than a few people and more than very simple projects, the Perfectly Educated Manager bores a number of holes in everyone's patience and comes out with something like this:

Now we have complex activities broken down into simpler ones. For each activity, a duration and an owner are established. Precedence among activities is accurately defined. A number of milestones are set according to contracts.
Now everything can be kept under control, and the Perfectly Educated Manager thinks that his job is done.

Cool, but useless, harmful and plainly wrong!

It is useless because each detailed Gantt is doomed to be subverted from below by the actual activities, and probably only the final milestone will be respected. Given this, it was probably worth defining only the milestones.
It is useless because it does not help the manager's daily job: developers do not update the Gantt as tasks are completed (the worst case is when a group of people assigned to a task must assess the task's overall percentage of completion…).

It is harmful because solid, rigid, implicit task schedules work perfectly only as long as they are respected to the minute. Each lead or lag too easily ends up in a witch hunt for the person who spoiled the harmony of the schedule. No need to describe the effects on people's morale.
Maybe the Perfectly Educated Manager has resolved not to be so stiff, but the very fact of seeing each task turned into a solid colored bar on the screen or on paper stiffens his perception of the importance of schedules.
People can work themselves to death to meet an important deadline, and nobody takes lightly the decision to postpone one. But people can't do this every few days to meet all the implicit intermediate milestones.

It is plainly wrong because, as every one of us knows very well, tasks do not occupy a solid period of time, and precedence may not be so rigid. Moreover, there is often no time for the iterations required to fine-tune a complex system or to cope with specification changes.

Paraphrasing a German leader of the beginning of the 20th century, “Gantts are only a piece of paper.”

So, Poldina had the occasion to carve out a quick guide to project planning and scheduling from her past experience. Let’s start from the egg.

First of all, be clear about the goals and the final milestones. Often these are fixed by contracts. Then write down the skill set you'll likely need. Try to find people who are as interchangeable as possible. I know they're more expensive than a bunch of chickens who can do only one thing (i.e. one programming tool, one database, etc.), but the right people, when glued together, can be incredibly productive and, sometimes, reduce your project's duration to the point where there's no need for a true schedule (this would solve the problem in such an elegant way… ah!).
Also make sure that specifications are written in a way that suits your people. Specs will never be foolproof, nor will they contain all the information that the people working on the project will need. So they must contain the key information required to do the job or to figure it out, and this depends on the history and the culture of your people.

Now let’s face the duty of breaking down a large job in small tasks, this must be done because that’s the way the human brain analyzes problems.
Ok, a task does not occupy a solid period of time (if it is of any complexity of course, cooking hard-boiled eggs definitely do occupy a solid period of time…). The task related workload, in real life, is something like this:

It starts with a nose, usually when the task is informally discussed over a cup of coffee during breaks and lunches, or some research is made from a general perspective. Then it rises quickly to full or almost-full occupation, then it slows down into a tail, and may have some late bumps. While the bumps may represent late interventions to correct a bug or the like, the tail is the most important part of the job. It is the period when the bulk of the effort is already behind you. It is at this point that the i's are dotted and the t's are crossed, turning an adequate job into a great job to be proud of.
The area under the curve represents the total amount of work done. The height of the curve at a given point in time represents the person's workload at that moment. No need to say that the solid block of time is the worst possible approximation of reality.
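To make the point concrete, here is a rough sketch (my own hypothetical numbers, not a real measurement) of the nose/peak/tail curve as a piecewise function. The sum over days is the area under the curve, i.e. the total work, and you can see how badly a solid Gantt bar overestimates it:

```python
# Hypothetical piecewise model of a task's daily workload:
# a low "nose", a near-full peak, and a long trailing "tail".
def workload(day, nose_end=3, peak_end=10, tail_end=20):
    """Fraction of a person's day spent on the task."""
    if day < nose_end:      # informal discussion, early research
        return 0.2
    if day < peak_end:      # full (or almost full) occupation
        return 0.9
    if day < tail_end:      # the tail: dotting i's, crossing t's
        return 0.3
    return 0.0

# Area under the curve = total amount of work actually done.
total_work = sum(workload(d) for d in range(20))
# The Gantt's solid bar assumes peak occupation for the whole span.
solid_block = 0.9 * 20

print(f"actual work: {total_work:.1f} person-days")       # 9.9
print(f"solid-block estimate: {solid_block:.1f} person-days")  # 18.0
```

With these made-up numbers the solid bar nearly doubles the real effort, which is exactly why it is "the worst approximation possible of reality".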

Given this, what is the graph that helps you the most to control the project and manage the workforce? In Poldina's experience and mine, it is the following.

This graph has people on the y axis and time on the x axis, albeit the orientation is not the usual one. Unlike the Gantt, it focuses on people, who are the critical asset to manage. For each project member, there's a bar that spans all the time the person is assigned to the project.
In this bar there are darker and lighter areas. Each dark area represents a period when a task almost fills up the time; each light area represents the overlap of one task's trailing tail and the next one's beginning nose, with no fixed milestone between them. The individual tasks are written as labels beside the line.
Precedence among tasks can still be drawn with an arrow from one light zone to another.

It is clear that everyone can point a finger at a spot on his/her bar and say, "I'm about here." When everyone is done, just connect the dots and draw a line on the current date and, voilà, the project progress assessment is served.
In the perfect situation the two lines coincide; otherwise, every point to the right of today means a lead, and every point to the left means a lag.
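The connect-the-dots assessment can be sketched in a few lines of code. This is only an illustrative toy (the names and dates are invented): each person reports the date position of their "I'm about here" point, and comparing it with today classifies a lead or a lag:

```python
from datetime import date

# Hypothetical "I'm about here" points reported by each project member.
reported = {
    "Anna": date(2006, 10, 20),
    "Bruno": date(2006, 10, 10),
    "Carla": date(2006, 10, 14),
}
today = date(2006, 10, 14)

def status(point, today):
    """Classify a reported point against the current-date line."""
    if point > today:
        return "lead"          # point to the right of today
    if point < today:
        return "lag"           # point to the left of today
    return "on schedule"       # the two lines coincide here

assessment = {name: status(d, today) for name, d in reported.items()}
print(assessment)
```

Note that each person supplies a position on the timeline, not a weird "completion percentage".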

In this graph, operations like assigning a new task to a person, or cutting one, simply mean inserting or removing a segment of the line, with everything shifting accordingly.
In the same way, the task completion level is never out of sight, since the tasks are explicitly shown anyway.
Specific questions like "What's Joe doing now?" or "Is this task done?" can be easily answered. Individuals can update their own situation by simply moving their point of advancement, rather than entering weird information like a "completion percentage".
It is also useful for individual members to have the situation available at a glance, to manage their dependencies.

While the Perfectly Educated Manager is spitting in my face, I can assure you that this kind of representation has been very useful in the past. So, let me know what you think about it.

P.S. To tell the truth I’m a Perfectly Educated Manager myself…

Friday, October 13, 2006

Let's start again...

I moved my blog to Blogspot.
I did this because, with the older platform, managing user comments was terribly difficult.
I moved all the long articles here.
The old blog can still be found here.
Thank you for your attention.

Thursday, October 12, 2006

I had two managers



I had a number of bosses in my working life. I reported to various people, some good and some bad, so I can consider myself quite an expert at being managed.
Today, I want to talk about two of them, the two best bosses I’ve ever had.

The first one was also my first boss. As a young developer, I worked at a global consulting firm. My team's coordinator was the most productive developer I've ever known. He was able to write pages and pages of code without a single bug. I've heard of Mozart writing music without ever making corrections; he was the Visual Basic equivalent of Mozart. He was also bold, good-looking and very, very clever.
I implemented forms and procedures interlocked with the backbone he had created in previous projects. More often than not he would write down a bit of pseudocode and let me connect the dots. In a few months, I was able to do a lot by myself (usually I learn quickly) without hints. I also learnt the large system we were developing for an even larger firm very well. Given the high turnover, which would take its toll on me later, I quickly became the foremost expert on the system. Despite that, he used to discuss even the subtlest implementation details with me, improving my ideas. He had an insight that I was only barely starting to match a year later.

The second one was the CIO of the Italian subsidiary of a large French multinational firm. By that time I already had several years of experience, but I had not yet made project leader. I had considerable expertise in Business Intelligence and planning on Microsoft technologies. For most tasks, I was independent.
My boss at that time used to give me a target and a budget, and then let me go my own way. Sometimes we would sync up about the projects, usually at the coffee machine. When I needed help or had something significant to tell, we interacted; otherwise there was no need. I enjoyed the greatest freedom ever. That was the time when I learnt the most, improved my skills at the fastest pace and reached the highest responsibilities available in the company. In fact, I experienced a string of positive outcomes.

They were both good bosses, but they could not have been more different. The first micromanaged me; he did what is usually considered bad practice. The second, on the contrary, was the prototype of the good manager (maybe it was because of the management courses…).

How come I loved both? How come I loved the first one?

Because the first was a person I could admire and learn from. He was the single person who contributed the most to my professional education. This means that micromanagement and detailed resource coordination are not always wrong. People with little or no experience actually need micromanagement. Micromanagement is what can take them out of their blank-slate condition. There are a number of bloggers who support my second boss' principles, and there is general agreement around them. I myself agree (after more than ten years of experience, I too hate to be micromanaged), but let's not forget that young people become effective and experienced through managers' constant care. Note that I say care as opposed to scare, which, in turn, is the most common method of managing young people, forcing a Darwinian selection among the youngest.
So, managers: let your stars go ahead alone, and help the young take their first steps.

The Worst Mistake



I recently suffered the worst mistake that can be made in a business intelligence project. Of the large number of pitfalls, there is one that must be avoided at all costs. It spoils developers' and consultants' lives, it makes the client terribly unhappy and helpless, and it automatically makes the project run late. So never, never, never, never, never, never, never…
…build a business intelligence system (or a part of it) before the source system is fully deployed.
It is such obvious evidence that it should not be necessary to explain further, but I keep seeing projects where the main transactional system goes live together with the BI system. Along with this, I keep seeing stressed consultants and disappointed customers. So what are, once and for all, the reasons that prevent a BI system from going live right together with its transactional source?
If there’s no data to analyze there’s no way to analyze them.It seems too obvious but a system with only a handful of test data is not representative at all of the entire cruising situation.
Migrated/imported data are not suited for development. Maybe the new system is filled with migrated data. This is not suited for BI development either, because such data often follows patterns that are not the patterns produced by the operational system itself. That is, some test data fields may be filled while, in new transactions after go-live, they'll be empty or will contain a different value. Some lookups may work only before, on migrated data, and not after. Some phenomena may actually take place only after the go-live. To build a consistent ETL process, to cleanse and rearrange data, all the data patterns and all the phenomena must be in place. Each early try is doomed to failure, as unnecessary transformations will be put in place and necessary ones will be left out.
Business Intelligence is about bulk data analysis. Business Intelligence queries normally sweep large amounts of data to extract aggregations, averages, metrics of all sorts, etc. This means that testing those operations on fake data is by no means a sensible way to develop. True values can be figured out only from true data.
Many business rules are implicit. Not all the business rules that are meaningful to the analysis are explicitly stated. Many are embedded in the workings of the transactional system. Often they are taken for granted by users and ignored in the requirements collection step. The database builder and the report builder will be called to detailed work to carefully analyze those rules and translate them into the expected reporting/dashboard behaviour. Usually these rules do not apply in general but in particular cases. Test data cannot cover these cases.
So, I hope you’ll think to this short article the next time you’ll find yourself in an “everything now” situation.

Interfaces and communication. - part 1


Joel Spolsky recently wrote an interesting essay on the development abstraction layer. His point, in short, is: shield your developers from the complexity of a company, because their talent is too important to be wasted on anything other than development.
True, but Poldina says it is only part of the story. Let's start from the beginning.
Keeping developers focused allows them to do what they do best and what they've been hired for: create software. Keeping salesmen focused likewise allows them to operate at their best. Maybe the same goes for Marketing, and accounting, and customer service too. Every function works at its best when it is shielded from the details of the other functions. There is no deep insight in this; it is pretty intuitive.
We can easily understand that what allows every function to work at its best is a CLEAR interface with the others. This concept is familiar to everyone who's involved in IT. In other words, each function, each office, each unit in the company must follow a proper PROTOCOL to communicate with the others.
All the bureaucracy ruling within large companies often starts as a genuine effort to rationalize communication. At a lower level, in a small environment, the protocol will appear much simpler and more informal, but it will exist. Maybe it is simply a whiteboard on the wall, but it exists.
To fit the protocol, each communication must be formalized as much as possible. Each message must contain all the information required by the receiving end to react accordingly. This means setting up procedures for each conceivable incoming message. The more a task is automated, the more efficient it will be. To maximize efficiency, incoming messages should be queued and processed one by one, sequentially, by each unit in the company.
If you are still here, you're probably thinking that I have reinvented Taylorism and am suggesting applying it to every business, even the small ones.
In a certain sense, this is true.
I can hear you shouting: what happened to creativity, intuition and flexibility, which are claimed to be the key assets of a modern company? They are still here with us, and this will be explained in the next episode.
Stay tuned.

Interfaces and communication. - part 2



We ended up with a question: what happens to creativity, intuition, flexibility when there is a strict protocol governing the interfaces?
Uh… nothing. What does a communication protocol have to do with the actions it triggers? Almost nothing. Being creative in problem-solving, for example, means doing something different (and expectedly with an advantage) in reply to the usual input. This choice is left intact, but it is critical that all the required information flows correctly.
A great effort should be made to ensure that intra-company communication is formalized and complete. Everything but informal coffee-machine talk should be normalized. A classical view of the company focuses on the processes taking place. Information and goods flow through the company in order to achieve the ultimate goal: having cash flow in in the proper way. This view has the merit of focusing on operations, which are what keep a company alive (which operations to carry out is decided by strategy, but that is another tale). Keeping processes working keeps cash flowing in.
What is the single most important factor for efficient communication? The single point rule. Information should be conveyed into a single entrance queue to be processed. Receiving mails, phone calls, faxes, words from the hallway: it is all unavoidable. What should not be done is keeping the information in its original format. All the information should be put in a single place, to be picked up and acted upon. It may be a specific piece of software or a physical in/out basket, but a single point is invaluable for organizing the activity. There are a number of excellent books that teach this.
The process of sorting incoming information is time consuming, so whoever needs to communicate had better know the best format for the receiving end. This, in a controlled environment like a company, can be achieved.
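The single point rule above can be sketched in code. This is a purely illustrative toy (the `Request` shape and the channels are my invention): whatever channel a message arrives on, it is normalized into one format and placed in a single queue, then picked up and acted upon sequentially:

```python
from dataclasses import dataclass
from collections import deque

# One normalized format for every incoming message, whatever the channel.
@dataclass
class Request:
    source: str   # "mail", "phone", "fax", "hallway"...
    sender: str
    body: str

inbox = deque()   # the single entrance queue

def receive(source, sender, body):
    """One entrance point: every channel funnels into the same queue."""
    inbox.append(Request(source, sender, body))

receive("mail", "alice@example.com", "Report X comes out empty")
receive("phone", "Bob", "Cannot log in since this morning")

# Processed one by one, sequentially, in arrival order.
processed = []
while inbox:
    processed.append(inbox.popleft())

print([r.source for r in processed])
```

The point is not the code itself but the shape: many channels in, one format, one place to look.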
So, we are back to the initial point.
In the next essay, we'll see a real-world example of how a help desk's efficiency greatly improved through the adoption of a few rules.
Stay tuned.

Interface & communication - part 3 "the helpdesk"



Now I'm going to tell a help-desk story that shows in detail what happens when clear communication occurs among company segments and, in this case, the customer. The company shall remain nameless and a few key points have been changed to make it unidentifiable.

The company

There was a software house. It made a couple of successful consultingware applications for media agencies and media resellers. In its niche, it was the dominant company. It achieved this position because of the lack of competition and because of the huge expertise it could count on. In a few years it really ate up the market in an important European country.

The problem

Unluckily, the founders were all programmers, who lacked, at that time, the experience required to manage a real company. As their unparalleled products made their way in the market, management requirements grew as well. So did the need for user assistance. Being a bit naïve, the products lacked the administrative and auditing features, and the ease of configuration, required to let users manage them by themselves. After all, a real programmer can check logs, execute SQL statements, test connections, debug code, etc. If a user signalled an anomalous behaviour, a programmer could log on remotely to the customer's system and sort the problem out. This is fine, but what if 5000 almost untrained users phone in each time they start the application? It was soon evident that no real work could be done with all those calls and e-mails coming in. Programmers were overwhelmed by the amount of support required.

Solution #1

The growing company hired a help desk assistant. A young girl joined the company and was appointed to answer incoming calls. She was trained on the applications, as is normal, and sent to the front line. The idea behind the hiring was that developers had to focus only on technical issues and she could answer all the other questions. Unluckily, they were wrong.
She was able to reply to the simpler questions about product usage, but the bulk of the enquiries were about non-obvious behaviours of the software that she could not investigate because she was not technical enough. Worse, she became an annoying voice in the developers' ears, always chasing them to get tickets closed. Ah, yes, there was no trouble-ticketing software for her; it was too expensive. She was a cost, and neither service nor workload improved.

Solution #2

Teach her how to execute SQL, parse logs, etc., and hire someone else to help her. The help desk crew grew to two and three and four, and one of the founders was appointed as the help desk supervisor. Maybe the activity was a bit more efficient, but the issues stayed. Slowly, as the number of customers grew, the amount of work generated by help desk requests grew as well, to an alarming level. There was no room left for new jobs. There was no shortage of new orders, there was the need to improve the products and, most of all, there was the need to complement the products with those auditing and debugging features that could relieve the help desk workload. All of these tasks went deathly slow because of the distraction caused by assistance. Worse, being naïve as already said, the founders had neglected to sign consistent support contracts. Help desk costs were killing the company.

Solution #3: the almost-good try

Slowly, the awareness grew that a modern process had to be set up, and the founders had the humility to set aside their previous experiences and begin to think like managers. The first improvement was to provide the help desk operators with a home-made, agent-less monitoring tool that allowed fast parsing of customers' log files and execution of pre-built, parameterized queries against the databases. This step alone led to a great improvement. The operators were able to diagnose the bulk of the issues reported by users by themselves.
Even if they could not intervene directly, developers were provided with all the preliminary information required to fix the problem, thus saving time. This may look like a techie solution but, if you think about it, it also focuses on the quality of communication between operators and programmers. If the responsibility for diagnosis had to be moved to the help desk, it was also crucial that developers receive complete information. The software was designed to produce a test log, which describes all the tests made, and that log is what developers receive: always the same format, attached or pasted into a mail or printed on paper. The second improvement was really a process improvement. A set of escalation rules was established. Company employees numbered in the thirties, split among developers, consultants and help desk operators. Two consultants were appointed as the first escalation level. All the requests that the operators could not close by themselves were reported to those two, who, in turn, tried to close the requests or escalate them again to developers. At the beginning there was a bit of confusion when planning developers' time, which was reduced by blocking out in advance the time available for service requests. These two improvements greatly reduced the total workload required to assist customers, but two headaches were still hanging around. First, all communication inside the company relied on e-mail and printed documents. People kept long Excel lists to monitor the flow of requests, wasting time keeping them updated. There was no one-stop point where the status of a request could be assessed by everyone who needed it. Some requests got lost in the mail/paper clutter that plagues every company. Second, collecting precise statistics was extremely hard, since there was no central database for them.

Solution #4: the good one

At last, a help desk solution was chosen.
It was a commercial web-based package, whose name I'm not authorized to disclose, that allowed fully integrated management of requests. The single feature that made the difference was its ability to collect all the data in a single central point. The software was able to collect service requests from mail, direct input and the web, and to analyze, organize and prioritize them in whatever way seemed most convenient. All the communication between the requester and the operator was managed within the package: the requester could check a web page with the status of the request, and each communication could be mailed directly from within the package. There was a single repository for all the knowledge required by the help desk process, which was kept under strict control.

Lessons learned

Going back to our initial argument, we can see that the real key to each improvement is the quality of communication. A clear and standardized information flow streamlined the process to the point where it could flow with the least effort. Standardization of diagnosis saved inquiry time. Standardization of escalation allowed better schedules. Standardization of the communication interface shrank the time required for request management. In no way did the standardized interface limit people's creativity or peculiar abilities. Definitely, there's a huge advantage in standardizing communication, but you probably already knew that.

Fog of War



Many of you will know the name of Von Clausewitz. In the first half of the 19th century, the Prussian general Carl von Clausewitz wrote an encompassing treatise on the art of war. It is still a fundamental book on the subject, a textbook in all military academies.
Among other things, a fundamental point Clausewitz made is about the Fog of War. He states that war is the kingdom of uncertainty. The details of the conduct of war tend to blur, as if in fog. This means that commanders must inherently take decisions based upon incomplete information.
If you have ever led an IT project of any complexity, most likely you have faced the fog of war. The analogy is not complete, as it properly applies to a competitive environment, where both parties actively fight each other. Nonetheless, the continuous bargaining between the client and the consultant may well be regarded as a clash. Each of us hopes to work in a fully cooperative environment, but, often, this is not the case.
What is the source of the fog? There are many, on different layers.
The first layer is incomplete/inaccurate requirements collection. As IT people, too often we tend to automate our reply to a user need. If a customer asks us about, let's say, better customer knowledge, we reply automatically with the acronym CRM. We are ready to provide a contact database full of glamorous fields like "contact's first son's birth date", a bunch of insightful reports which tell whether the client is likely to churn or not, etc. Maybe the client only wanted to know which clients have been visited by salespeople in a quarter. (This is not a made-up example; it happened to me, and the number of covered clients was by far the most important indicator of business health in that case.)
The second layer derives from the customer's lack of IT experience. They often divert you to interface details, cluttering the horizon of the process that is going to be automated. This does not mean that the user interface is not important; more often than not the customer experience is the key to success. The point is that polishing the user interface is the last task to be accomplished. This is very hard to achieve, as an ugly-looking system will often be considered just ugly in spite of any other consideration.
The third layer is ours. A project involving more than one developer is inherently prone to insidious details. If there are no clear rules on coding and naming conventions, it is certain that you will end up with collisions that will cause an unnecessary waste of time and effort.
On top of all this there is the project manager. He or she will inevitably lose an accurate view of what's going on and risk taking uninformed decisions. This happens when, like me, you are managing 3 or 4 projects at a time.
What is the solution? The obvious one is to be careful, or paranoid, and pay the utmost attention to these aspects. Probably I do not have anything to teach you on this subject. The second solution is: trust your instinct. Experience and thorough knowledge of the basic issues involved will often point you in the right direction. I listen to my instinct more often than I'd care to admit, and it is often right.
Is this not scientific? Yes, but what do you expect from a hen-master?

A Morning at Work



This morning, at work, I've found bad news.
A client of mine made a bad mistake.
Every three months they extract two files with vendors' commissions from the accounting system and load them into their corporate data warehouse. It's an old, Informix-based system. We made it, but I, personally, inherited it. I cannot blame my predecessor for what happened: he worked like everyone else used to work at that time.
My client, as I said before, made a mistake. They picked 4 months to extract instead of 3. This meant that one month of data was loaded twice. Luckily, I had made a one-shot backup of the fact table, from which some other fact tables are rebuilt from scratch for reporting. After a short analysis it was clear that the only viable solution was cleaning up the table and reloading the data.
At this point
· I made a copy of the backup file
· I counted the records in the table and the files to be sure that nothing strange had occurred after the backup
· I extracted a short selection of records
· I connected to our local test system to test the load
· Ouch, the test system table did not have the correct structure
· I connected to the production system
· I prepared to load the small extract of data
· Wait a bit, is that the correct table?
· I checked the sql script which loads data to be sure it was the correct table
· I prepared again to load the extract
· Am I targeting the right database?
· I checked the connection and reconnected
· I loaded the extract and it was ok
· I prepared to delete
· A coworker asked me for urgent advice
· What was I doing?
· I prepared to delete
· Am I targeting the right db and table?
· I rechecked the scripts
· I answered the phone
· I rechecked the scripts again
· I prepared to delete
· I checked the SQL
· Is it the right table and the right db? Maybe the script does something strange
· Script rechecked
· Prepared the SQL
· Stared at it for ten minutes
· Ran the delete (while a cold drop of sweat ran down my spine)
· Done, prepared to load the backup file
· Is that the correct file?
· Check the file
· Prepared to load the file
· Stared at SQL loading statement for ten minutes
· Loading... loaded.
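The paranoia above boils down to one invariant: the row count in the table must match the backup file, before and after the reload. That check can be automated. Here is a minimal sketch in Python with SQLite, using hypothetical table, column, and file names (the original system was Informix, so this is an illustration of the idea, not the actual procedure):

```python
import csv
import os
import sqlite3
import tempfile

def reload_fact_table(conn, backup_csv):
    """Cautiously replace the fact table contents with the backup file,
    verifying the row count at every step (hypothetical schema)."""
    with open(backup_csv, newline="") as f:
        rows = list(csv.reader(f))
    # Safety check: refuse to proceed if the backup looks empty.
    if not rows:
        raise RuntimeError("backup file is empty, aborting")
    cur = conn.cursor()
    cur.execute("DELETE FROM fact_commissions")
    cur.executemany("INSERT INTO fact_commissions VALUES (?, ?, ?)", rows)
    # Verify: the table must now hold exactly the backup's rows.
    cur.execute("SELECT COUNT(*) FROM fact_commissions")
    loaded = cur.fetchone()[0]
    if loaded != len(rows):
        conn.rollback()
        raise RuntimeError("loaded %d rows, expected %d" % (loaded, len(rows)))
    conn.commit()
    return loaded

# Demo with an in-memory database and a throwaway backup file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_commissions (vendor_id, month, amount)")
conn.execute("INSERT INTO fact_commissions VALUES (1, '2006-07', 99)")  # doubled data
with tempfile.NamedTemporaryFile("w", suffix=".csv",
                                 delete=False, newline="") as f:
    csv.writer(f).writerows([(1, "2006-07", 100.0), (2, "2006-07", 250.0)])
    backup = f.name
restored = reload_fact_table(conn, backup)
os.unlink(backup)
```

A script like this will not stop you from pointing it at the wrong database, but it turns the ten minutes of staring at the SQL into a check the machine performs for you.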
An entire morning spent on this, while the other tasks accumulated. If I had needed the tape backups restored, the mess would have exploded.
What does this teach me?
1) Never create an incremental data mart without a mechanism to roll back a load. I had no way to correctly identify the extra rows, so I had to reload everything. From this point of view, SAP BW is perfect: it lets you get rid of a wrong data load in one click. The data load itself is an object which can be activated, deactivated, compressed, archived and so on.
In a less structured environment, adding a load identifier is invaluable. Simply add a field to the fact table and fill it with a unique identifier, the same for the entire load. Probably the best way is to generate it from the date and time the load starts, so it is human-readable. Tagged records can then be easily and selectively deleted in case of mistakes.
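A minimal sketch of this load-identifier technique, again in Python with SQLite and hypothetical table and column names:

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical fact table: the extra load_id column tags every row
# with the load that produced it.
cur.execute("""CREATE TABLE fact_commissions
               (vendor_id, month, amount, load_id)""")

def load_rows(rows, load_id=None):
    # A human-readable identifier generated from the load start time,
    # shared by every record of the same load.
    if load_id is None:
        load_id = datetime.now().strftime("%Y%m%d-%H%M%S")
    cur.executemany("INSERT INTO fact_commissions VALUES (?, ?, ?, ?)",
                    [(v, m, a, load_id) for (v, m, a) in rows])
    return load_id

def rollback_load(load_id):
    # A mistaken load can be deleted selectively, leaving the rest
    # of the fact table untouched.
    cur.execute("DELETE FROM fact_commissions WHERE load_id = ?", (load_id,))

load_rows([(1, "2006-04", 100.0), (2, "2006-04", 250.0)], "20060701-0800")
bad = load_rows([(1, "2006-05", 100.0)], "20061001-0800")  # the extra month
rollback_load(bad)
```

Had my client's table carried such a column, cleaning up the doubled month would have been a single DELETE instead of a full reload.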
2) Write down a procedure for this, test it, and keep a copy in the safe. That way you won't be forced to reconstruct the entire story from scratch next time.
3) There is no way to cut the human being out of the loop, so be prepared to correct even the most trivial mistakes. Better yet, let the users correct everything themselves.
Even in a minimally complex data warehouse, you'll probably have to provide a side application to manage the data not handled by the transactional systems that feed it. You have to sit down and code it anyway; at that point, the extra overhead of managing the loads as well is not so heavy that it can't be faced.
It will spare you a lot of time, while you keep charging the same 20% of the license fee for maintenance ;-).

Suburbia And Downtown


Oh boy, how cold these evenings are. Today has been one of those days when the temperature drops below zero (Celsius, of course) at five pm. Coming home in the cold enhances the feeling of tiredness.

At home, Poldina noticed my state of mind and asked me why. The answer was simple. That day I had planned to go on with a project of mine, but I couldn't, because we received a lot of support requests from our customers. Many of them were urgent, and I had to drop everything to sort out the calls.
As usual, Poldina dropped a pearl of wisdom:
“Do not complain about that, it’s only your fault.”
“Why?” I replied.
“Because you, as a consultant, cannot take charge of your customers’ daily job.”
“Maybe you’re right, but in this way they feel that we provide a good service.”
“They would be happier if they didn’t have to call you at all.”


Let’s start from the beginning.
Shrinkwrap software development is a difficult exercise in all respects, but consultingware opens up an entire new class of problems.
As usual, a customer asks you to build a system (not a micro one, of course). You diligently go through the analysis, write down detailed specs, and carefully plan all your development and testing. Let us suppose you do your homework and start developing.
Of course, despite your best efforts, in this phase you will have to face unexpected issues and specs refinement or changes. This is normal.
Experienced project managers all agree with the immutable law that software is like wine: it requires a certain amount of time before it is ready to be delivered.
A late shrinkwrap developer can choose to drop some features from the next release, and often has the luxury of announcing the release date only when there is a reasonable guarantee of meeting it.
The consultant developing a custom system must meet the deadlines: the risk is not being paid, and losing money while struggling to complete the project.
An experienced project manager does not fall into the trap of coding like crazy, throwing away the schedule, or adding people to a late project. Instead, he scratches his head, browses the specs and the contract, and picks the tasks which can be left for last and delivered a bit late without spoiling the whole.
What are the candidates for such a choice?
Often, among others, they are the tasks pertaining to the management of slowly changing data: user security settings, data-quality constants, or perhaps some translation tables left unmanaged. After all, it is always possible to change them with a database utility or a file editor, if required.
The final result is a system delivered on time, and this is good, but it is not under the customer’s complete control, and this is very bad.
Worst of all, this is bad for you, the consultant, not for the customer. The customer will simply call in and will ask you to do the job.
Often, once people move on to the next project, these gaps are never filled, and the customer calls remain an extra maintenance burden for a long time. Worst of all, again, this activity often goes unpaid, because what was left out should have been included in the system.
The net result is days like today: days of delay on your following projects. Even a few applications like this may spoil your capacity to complete more projects.

Then, what should be done to avoid this pitfall?
Well, the first answer is obvious: plan carefully enough not to risk being late. This is easier said than done, of course, but you should try anyway, shouldn't you?
A second option is to accomplish these tasks first. Often these parts pertain to interfaces and security, and setting them up early can be considered part of your overall risk-reduction strategy. Your boss, too, will perceive them as preliminaries and will not press you to produce “measurable progress”, since the true work has not begun yet. And if you run late, you will not deliver a lame system, because the core gets completed anyway.

We may generalize this approach. “Top-down” and “bottom-up” approaches are often discussed, but the “suburbs-downtown” approach is hardly mentioned in development and project management texts.
“The kernel is ready; we only have to trim it and add a bunch of interfaces.”
“The core works fine; I only have to throw in a few functions and the system will be ready.”
I’ve heard this phraseology more times than I can count. It also sounds reassuring to inexperienced managers, on both the customer’s and the consultant’s side. Well, it by no means implies that the job is over the hill.
In a well-organized town with an acceptable quality of life, you’ll find a downtown where productive life is concentrated (the town hall, public and private offices, etc.) and suburbs where people live quietly and confidently, with the services they require located near them.
So is any reasonably large system.
Do not expect customers to be understanding about rough edges in system management and interfaces; they will expect everything to run smoothly most of the time. Sometimes, when in a hurry, they’ll say they are ready to accept some roughness, but be sure they will start complaining shortly after deployment.
A not-so-happy customer, of course, is unlikely to ask you for more consulting.

So, as usual, Poldina was right. It was up to me and my coworkers to reduce the constant flow of incoming calls. I’ll keep that in mind.



Once upon a time, a young programmer was hired by a subsidiary of a large multinational company.
The environment was new and stimulating, the projects were interesting, the coworkers and the boss were nice and polite and, last but not least, there were gorgeous girls everywhere.
The young programmer did his best to get through the trial period and built a number of fairly interesting things. The first project he worked on was amazing. The international headquarters released an application to monitor customers' sell-out; that is, to monitor how much of the goods sold to retailers was actually sold on to end users. He hacked the international db structure a bit and created a more efficient data-load procedure.
He proudly released the application to a few end users, and news of the new system's adoption made its way up to headquarters. On the other hand, the rollout required a huge amount of T-SQL coding and was hard to test and maintain, but who cared? It worked very well.
Time passed, and the young programmer moved from project to project, always quite successfully, and was rewarded with more responsibility. Eventually he became a project manager; he coded less but still dealt with all the small systems he had created over the years, having disseminated throughout the company a number of stand-alone applications that addressed various business needs.
He did not know that catastrophe was around the corner.
A time came when the foreign headquarters rolled out a new data warehouse. It was really powerful and a real improvement compared with the old, semi-amateurish one in place at the time. Unluckily, the old data warehouse was the source for all the internal stovepipe applications built by the young programmer.
He spent countless hours getting them back to work, rewriting the interfaces to read directly from the central system.

Probably you heard this war story many times in your career. Probably you've heard a lot about breaking down the walls among applications.

I learned the hard way that this is a central issue within a business environment. Many of us, as developers, often forget this lesson.
In my experience, developing a specific solution for a specific business need must be done keeping an eye on the environment where the application will live. Almost every application requires a data feed from other systems and may need to feed data elsewhere.

The best integration level, normally, is achieved through a consistent choice of the platform.
The Microsoft environment is designed to integrate natively. For example, it's very easy to connect to data sources in Excel, use VBA to manipulate them, publish the data in HTML format to a SharePoint site, and so on.
SAP has a tight level of integration and, with the coming Mendocino, Office integration will improve further.
But the platform matters on a different level. Simply pick one and stick to it. Microsoft, Oracle, SAP, whatever: each has its pros and cons, but in the medium-to-long run, sticking to one will spare you money and headaches.

2005's buzzword was SOA (Service Oriented Architecture). Some sold it as the panacea for all integration issues. It is not. It's a sensible way to improve integration between heterogeneous systems, but it's quite hard to develop and maintain. It is a good choice when real-time integration is required, but it does not solve everything.

Think of this the next time someone asks you to develop the next siloed application.

What's this?

Good morning to everyone and welcome to my place on the Internet.

My name is Robert Sevenoaks. I’m a former developer and a Business Intelligence and IT consultant. I’ve been around for 10 years, working for many companies and on my own, in disparate business areas and on different platforms, although mainly Microsoft’s.
I author this blog only because Poldina urged me to talk about my experience and to complement it with her vast knowledge of the IT world.

Oh, who’s Poldina, you may be asking.

She’s a hen I met many years ago. Since then, she has lived with me and helped with my work. Despite her nature, she has a surprising knowledge of IT trends and, sometimes, she drops pearls of wisdom. She has many friends who share her ability, and you’ll get to know them if you follow this site.

Enjoy your reading.

Robert Sevenoaks

P.S. By the way, since in Italy it is not legal to write anonymously, even on the Internet, my true name is Augusto Aldeghi.