Friday 23 December 2011

Thoughts about The Occupy Movement (1/2)

Part 1: Few Observations about the Movement's Organisational Structure

The Occupy Movement has been spreading around the world, rallying people against economic and social inequality, and perhaps rightly so. For example, a little while ago the OECD published a study, Divided We Stand: Why Inequality Keeps Rising, which shows that the gap between rich and poor has been growing wider for decades in most countries (and in the few countries where the gap narrowed, it had been ridiculously wide to begin with).

What I have found especially interesting is how the Movement is self-organising and self-directing: there seems to be no central leadership that is telling people what to do, where to go or how to handle matters big and small. The people as a whole do all that for themselves and quite effectively, too; everybody is allowed and even encouraged to voice their views and suggestions on various common issues as well as take part in getting things done. This model of horizontal organisation challenges the classic vertical organisation model that we all know well from the military, politics and corporate life.

So this got me thinking and wondering: could the horizontal model be applied to corporate business? Well, I'll cover those thoughts in the second part of this entry, but first let's have a quick look at horizontal organisation models.

Modelling Horizontal Organisations

Now let's be clear about one thing first: I have not personally attended any of the Occupy Movement camps, nor have I had the opportunity to talk with people who have, so my understanding is based on second- and third-hand information gained by reading various articles on the topic. So please do let me know if I've gotten something wrong; all other comments are welcome, too.

As I understand it, many Occupy Movement camps tend to be organised around a general assembly where each and every person has the right to be heard as well as a vote. This basic form of direct democracy works well up to a point (as the citizens of ancient Athens probably would agree), but begins to falter as the number of people increases: soon there are so many people who want to be heard, and so many competing opinions and proposals, that proceedings take too long for practical and timely decision-making.

To mitigate this the general assembly can form working groups with clearly specified tasks and goals. Any and all persons are free to join a working group they feel is important. Working groups are not only responsible for handling day-to-day operations but can also function as specialist groups that present their findings, results and proposals to the general assembly, which then has a focused discussion followed, if necessary, by a vote between clearly defined alternatives.

This kind of organisational model can be described as a network topology, and when it comes to horizontal organisations there are several topologies that can be applied. The first and most obvious one is the Star topology, where the general assembly is the central hub that connects with 0..n working groups. In this model there is no interaction between individual working groups, and working group management and coordination is relatively simple.


Another common model is the Partially Connected Mesh topology. Basically this might be the case when an individual working group splits itself into one or more sub-groups that may interact with each other, while the results are presented to the general assembly through the parent group.


The third common model is the Fully Connected Mesh topology, which is at the same time the purest form of horizontal organisation and, in my opinion, the least likely option to be utilised in practice. It is the purest form because there is no central hub nor any parent groups: all groups are completely equal. There is no hierarchy to limit communication and cooperation, and this is what a general assembly essentially is: all individuals are equal and connected with each other, free to interact and cooperate with any and all other individuals.


However, as stated before, cooperation becomes increasingly inefficient, difficult and time-consuming as the number of connections grows, i.e. as the number of people increases. Eventually it becomes necessary to limit the connections, which happens by forming one or more sub-groups with a well-defined purpose and scope, which takes us back to the Star topology. As the organisation grows and people wish to keep operations fluid, the Star is likely to evolve into a Partially Connected Mesh of one form or another.

A Bit About Group Dialogue and Decision Making

When there are tens or even hundreds of people who want to address the general assembly, how does one ensure that everyone is given an equal opportunity to do so? In the spirit of total equality the group might be tempted to try shouting over each other until the many voices merge into a single voice (consensus) or just a few voices (interest groups) that can have a vote. More likely it will soon become apparent that the discussions need to be directed by a facilitator whose primary duty is to assign speaking turns and impose time limits. To further improve the proceedings the facilitator might also divide the discussion into separate topics so that one matter can be handled before moving on to another.

Would-be speakers are placed on "the stack", which despite its name works as a FIFO queue (First In, First Out): those who raise their hands first get to speak first.

While one person speaks, the rest of the people can react and respond with a set of hand signals. This is an excellent method for the crowd to provide direct, real-time feedback to the speaker without causing disruptive noise. People can express e.g. their agreement or disagreement, ask questions and request missing details, as well as urge the speaker to speak louder, keep to the topic or wrap up.

Some groups aspire to become more equal than others, so they may adopt a progressive stack, which prioritises some people over others in the name of equality. The idea is that representatives of minorities and marginalised groups are allowed to speak before representatives of majorities and dominant groups. This can be a good thing if minority and majority are determined within the context of the discussion rather than by social status (remember: people are supposed to be equal, so social status should make no difference) based on more or less fixed attributes such as skin colour, sexual orientation, profession, gender or even age. However, to my mind, assuming the position of primus inter pares while still speaking of equality is a lie.


Well, that's the first part of my blog entry. The next part is a thought experiment on how a limited company might work if it were organised horizontally instead of vertically.

Update: Thoughts about the Occupy Movement, Part 2 has been published.

Sunday 14 August 2011

Better protection against GPU brute force attack with bcrypt (and common sense)

Summary
For those of you who prefer to get to the point right away instead of being led to it, I have two points to make.

First, when it comes to password security it is better for common users to use longer (>8 characters) and less cryptic passwords than shorter yet more cryptic ones. By cryptic I mean passwords combining lower and upper case characters with numbers and special characters, such as '#fK1~2'.

Why? Because common users are most likely to be made vulnerable when their favourite site gets hacked and the user database is stolen along with the usernames, passwords and the rest. At that point there will not be a single person trying to guess your password but a tireless GPU-driven program doing a brute force attack, systematically going through all possible character combinations at speeds exceeding many hundreds of millions of combinations per second: your sneaky 6-character cryptic password would be cracked within a few seconds.

That is not to say that passwords should not have any special characters, numbers and the rest: they should, as those make dictionary-based attacks less likely to succeed. The point is that while having more characters makes a password stronger, overly cryptic strings are harder to remember, so if you have to choose between length and crypticness it is better to make the password a little less cryptic and that much longer. The XKCD comic says it oh so well, as usual :)


The second point is for software architects and developers, who can significantly improve the security of their hashed passwords by adopting the bcrypt hash function instead of the common SHA-2 variants (and surely none of you are still using the MD5 or SHA-1 hash functions to encode passwords these days, right? Right?)

Why? Because bcrypt can be made very expensive to use, so that it takes milliseconds (or even seconds, if you have paranoid tendencies) to encode a password instead of the microseconds achieved by most common hash functions. It is trivial to spend about 0.6 seconds encoding a user's password during registration and login, as these operations happen reasonably rarely. However, those 600 milliseconds per single encoding become anything but trivial when a hacker attempts to brute force through all the passwords in your user database. As last lines of defence go, bcrypt should be preferred over other hash functions, including the SHA-2 variants.

Want to know more? Keep on reading.

The Problem
It has long been a common practice to store user passwords in a hashed form instead of the clear, human-readable form. For years hash algorithms such as MD5 and SHA-1 were the preferred methods, but these days neither should be used for any security-related purpose as they have well-known vulnerabilities. Instead many recommend that SHA-2 should be used, as it has no known exploitable vulnerabilities (apart from inept and lazy people using passwords that are too short and simple to stand against an educated guess).

For those with a less technical background: a hash function is a one-way cryptographic algorithm that takes a variable-length input and calculates a value that is, for practical purposes, unique for the specific set of data (e.g. a file or a string). It is not possible to reverse a given hash value to reveal what the original value was. Applied to passwords, verification works like this: the given password is hashed with the same algorithm, and if the resulting hash value is identical to the hashed password in the database, the right password was given and the user may log in.

For example, if the password is 'secret' the hash value (SHA-256) is
'2bb80d537b1da3e38bd30361aa855686bde0eacd7162fef6a25fe97bf527a25b'
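To make the verification flow concrete, here is a minimal Java sketch (my own illustration, not from any particular library or service) that hashes a given password with SHA-256 and compares it against the stored value above:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashCheckDemo {

    // Hash the input with SHA-256 and render the digest as lowercase hex.
    static String sha256Hex(String input) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // The stored hash of 'secret' from the example above.
        String stored = "2bb80d537b1da3e38bd30361aa855686bde0eacd7162fef6a25fe97bf527a25b";

        // Hash the password given at login and compare: identical hashes mean the right password.
        System.out.println(sha256Hex("secret").equals(stored));   // true
        System.out.println(sha256Hex("guess123").equals(stored)); // false
    }
}
```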

Why should passwords be stored in their hashed form? Obviously hashing does not protect against somebody trying to log in to a service using another user's username while attempting to guess the password (to protect against this the service should temporarily lock the user's account after n failed login attempts). Instead, hashing the passwords is the service's last line of defence when a hacker gains unauthorised access to the user database.
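As an aside, the temporary account locking mentioned above can be as simple as a failure counter per username. A minimal in-memory sketch (the class and method names are my own, hypothetical ones; a real service would persist the counters and expire the locks after a while):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LoginThrottle {

    private static final int MAX_FAILURES = 5; // the 'n' in "n failed login attempts"
    private final Map<String, Integer> failures = new ConcurrentHashMap<>();

    // Check this before even looking at the password.
    public boolean isLocked(String username) {
        return failures.getOrDefault(username, 0) >= MAX_FAILURES;
    }

    public void recordFailure(String username) {
        failures.merge(username, 1, Integer::sum);
    }

    // A successful login resets the counter.
    public void recordSuccess(String username) {
        failures.remove(username);
    }
}
```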

User databases get hacked all too often. For example, in 2011 Sega lost 1.3 million user passwords and Sony's PlayStation Network got hacked at least twice, after which about a million usernames and passwords were made public by the hackers, not to mention many other known cases over the past few years (and not nearly all cases become known to the public).

Hackers have been quite busy indeed, but the designers and developers of those services should also take a good look at the closest mirror: after all, hackers merely exploit the existing holes in various libraries and services, many of which have been publicly documented.

The Cause of the Problem
While hashing is a good way to protect a password against the eyes of unauthorised mortals, hashed passwords can be vulnerable to brute force attacks. Modern computer components are simply so fast and efficient that brute force attacks (where each and every possible combination is tried until the right combination of characters is found) have become quite a reasonable option for attackers.

A modern brute force attack utilises the GPU instead of the more traditional CPU. Consider this: while a CPU-based password recovery tool might take about a year to crack an eight-character password, a similar GPU-based tool could do the same trick in less than a day. In other words, a 13-year-old with a gamer's desktop computer and a simple software tool could crack a list of typical hashed passwords within hours or days, if not minutes. Now consider for a moment what well-resourced and determined professionals would do to the passwords in the user database of your favourite web site should they gain access to it.

For example, a dirt-cheap ATI Radeon HD 5450 can handle about 52 million SHA-1 or 126 million MD5 hash computations per second. Upgrade to an ATI Radeon HD 5970 and you would be doing about 2 320 million SHA-1 or 5 631 million MD5 calculations per second, and that is with just a single GPU. Most desktops can take two linked GPUs, including the one under my desk that I have dedicated to gaming and LAN parties.

To put things into perspective, an ATI Radeon 5770 can crack a five-character password in under one second, while a typical CPU might do the same in about 24 seconds or so. A six-character password would take the 5770 about four seconds to crack, and a seven-character password would be sorted in about 17 minutes. The respective times for a typical CPU would be around 90 minutes and four days, or so.

The Solution?
Obviously there are no guarantees but there are ways to frustrate most attackers to a point when the reward just isn't worth the trouble.

For a common user the best protection is not to use an overly cryptic and hard-to-remember password with caps, numbers and special characters (e.g. #fK1~2) but simply to use a longer password. For example, if the system uses ASCII with its 95 printable characters, each new character in the password multiplies the number of possible combinations by 95. On the other hand, if your password is just a common word then you are wide open to dictionary-based attacks and the guesses of people who know you. Go for the middle ground.
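To see what that multiplication by 95 means in practice, here is a rough back-of-the-envelope sketch in Java (my own, assuming the 5 631 million MD5 guesses per second quoted above; the numbers are only indicative):

```java
import java.math.BigInteger;

public class CrackTimeEstimate {
    public static void main(String[] args) {
        BigInteger alphabet = BigInteger.valueOf(95);                     // printable ASCII characters
        BigInteger guessesPerSecond = BigInteger.valueOf(5_631_000_000L); // HD 5970 MD5 rate quoted earlier

        for (int length = 6; length <= 12; length += 2) {
            // Each added character multiplies the search space by 95.
            BigInteger combinations = alphabet.pow(length);
            BigInteger seconds = combinations.divide(guessesPerSecond);
            System.out.printf("%2d characters: ~%s seconds to exhaust%n", length, seconds);
        }
    }
}
```

The output goes from roughly two minutes for 6 characters to millions of years for 12, which is the whole point: every added character costs the attacker a factor of 95.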

At the same time software architects and developers should ditch ye olde SHA-2 variants and similar algorithms and move to bcrypt.

Bcrypt is a Blowfish-based hash function with one very important aspect that sets it apart from most other hash algorithms: it can be made very expensive to use, in a world where cheap equals bad. Most hash algorithms have been optimised to calculate a hash value for large sets of data as fast as possible (for example, an AMD64 CPU can calculate the MD5 hash of 335 MB of data in one second), which is great when one needs to find out whether two large data sets are identical, but bad when dealing with common <10 character passwords, as shown earlier.

Bcrypt, on the other hand, is designed to be slow rather than fast when calculating the hash, which makes it easy to increase the cost of a brute force attack: a single hash calculation takes milliseconds (or even seconds) instead of microseconds. Combined with properly long, unobvious passwords, bcrypt can seriously frustrate GPU-based brute force attacks, while attackers relying on a CPU should not even bother.

It is important to put this into proper context: it is perfectly fine for password hashing to take about a second during registration and login, as these happen fairly rarely: a typical user registers only once and might log in to the service a few times a day, whereas a hacker needs to go through as many individual passwords in as short a time as possible. Given a certain amount of time the hacker will have cracked some of the bcrypt-encoded passwords, but not nearly as many as if the passwords had been e.g. SHA-256 encoded, which in turn would be fewer than when dealing with *gasp* MD5-encoded passwords.

Another very nice thing about bcrypt is that it can be adapted to match Moore's Law: it has a work factor that can be freely increased as computers become faster, and that can be used to tweak the balance between performance and security. Bcrypt is available for most programming languages, and for example the Grails Spring Security Core plugin by Burt Beckwith makes using bcrypt practically trivial.
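In plain Java the same idea might look like the sketch below, using Spring Security's BCryptPasswordEncoder (the work factor of 12 is just an example; measure the timings on your own hardware and tune accordingly):

```java
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

public class BcryptDemo {
    public static void main(String[] args) {
        // Work factor 12 means 2^12 key expansion rounds; raise it as hardware gets faster.
        BCryptPasswordEncoder encoder = new BCryptPasswordEncoder(12);

        long start = System.nanoTime();
        String hash = encoder.encode("correct horse battery staple"); // a nod to that XKCD comic
        long millis = (System.nanoTime() - start) / 1_000_000;

        System.out.println(hash);                        // e.g. $2a$12$... with the salt embedded in the hash
        System.out.println(millis + " ms per encoding"); // this is the cost the attacker pays per guess

        // Verification re-hashes the candidate with the salt stored in the hash itself.
        System.out.println(encoder.matches("correct horse battery staple", hash)); // true
    }
}
```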


Hopefully in the future I will not be seeing web sites that limit user passwords to a maximum of 8 characters while enforcing ridiculous character-inclusion rules, and at the same time I do hope to see frustrated hackers pissing their lives away trying to brute force through bcrypt-encoded passwords. It would be a nice start, but only a start as far as improving software security is concerned.

Monday 13 June 2011

Practical difference between knowledge and know-how

About a year ago I was part of a group scuba diving around the Daedalus Reef, smack in the middle of the Red Sea, when something went wrong during a dive. We were down at 31 meters on top of the reef plateau, close to the long pier, when my buddy's regulator malfunctioned and started blowing. As we were the last ones in our group, no-one else saw what was happening.

We did everything by the book. I noticed my buddy's problem right away and was able to help her, as we had kept close to each other instead of wandering off in separate directions as some other diving pairs had done. As we were at the bottom of the plateau we settled down on it to see if the situation could be sorted out. There was no panic, no sense of urgency or rushing - we had time.

I gave her my spare regulator while we tried the few tricks we knew that might stop the regulator from blowing, but none of them worked. The regulator was stuck open and was venting out the tank. As I took my buddy's pressure gauge to see how much air she had left, I saw that in those few moments her tank had gone from about 200 bars to well below 100 bars, and the gauge needle was steadily moving towards the red zone. There was nothing more to do but to signal ascent and prepare to surface.

Then I fucked up.

As we were still at the bottom of the plateau at 31 meters, we grabbed each other's BC (buoyancy control) vests and I fed air into my BC to begin our ascent. Her BC was almost empty and she was hanging heavy, so I fed some more air into my BC while keeping my eye on my diving computer's depth and rate-of-ascent indicators. We were doing alright, rising steadily but slowly enough - until her still-blowing regulator ended up between us and all I could see and hear were bubbles. Lots and lots of bubbles.

As the torrent of air was rushing and bubbling around my head I could no longer see my diving computer nor hear its frantic alarms telling me that our ascent had become uncontrolled. Within seconds our slow, controlled ascent had turned into a classic two-person cluster-fuck as the air within my vest expanded and accelerated our ascent. Once the bubbles blocked my visibility and hearing I lost my situational awareness: I did not know my depth, I did not know how fast we were ascending, and indeed the next thing I noticed was my vest suddenly expanding to its fullest, triggering the emergency valve on my shoulder to vent the excess air just as we popped to the surface.

It all took just seconds, and afterwards I checked my diving computer's log: we had ascended slowly from 31 meters to about 20 meters, which was when I was blinded by the bubbles. The last logged depth was 17.1 meters, and less than five seconds later we surfaced. As I signalled the boat crew that we had a problem, I knew I had screwed up badly, I just wasn't sure how badly. My mistake had not only put me in danger but, more importantly, might have done serious harm to my buddy. True, we had come up from 31 meters without a safety stop, but on the other hand the dive had only lasted some 6-7 minutes with about 5 minutes of bottom time. Was that long enough for our tissues to collect enough nitrogen to give us the bends?

What did I do wrong? The simplest of things: I forgot to close my buddy's tank valve. She didn't need it any more: she was breathing from my tank and I was the one controlling our ascent. If her tank had been closed I would not have been blinded by the blowing regulator and could have controlled the ascent. This was something I had been told during my dive training and had even practised in the pool, but when theory became practice I forgot it: it was something I had knowledge of, but did not really know. That is the practical difference between theoretical knowledge and experience-based know-how.

The same holds true for so many other aspects of life, and for things I have done. Sure, I studied business and IT at the polytechnic for four and a half years, and while most of it was interesting and some of it even useful, none of it really prepared me for the realities of practical working life. Everything I know about IT I have learned by doing it, making mistakes and learning from them. Some of the lessons learned have matured into deep understanding that will help me make the right decisions then and there when the moment so demands - or so I hope. The same could be said about other aspects of life, such as personal relationships (some of the things my wife appreciates today I learned from my first girlfriend, usually some time after making a mess of things), and it holds true for my partnership in Envivia as well. Although in all honesty we are still busy learning from mistakes, hopefully that, too, will one day mature into success.

What about that diving incident? Well, in the end we were lucky not to show any symptoms of decompression sickness, although the after-treatment didn't exactly go by the book either. After an uncontrolled ascent the first thing to do is to breathe 100% oxygen (or the richest possible oxygen mix if 100% is not available). As we returned to the boat we informed the crew that we needed to start the oxygen treatment right away, but although there were several O2 tanks on board and even clear instructions by the cabin door, the Egyptian crew could not read English (most of them barely understood spoken English, if at all) nor did they know what to do in this situation. All the diving guides were down in the blue with the rest of our group.

About 15 minutes later one of the local guides surfaced, having noticed that we were missing. Once he came on board we explained what had happened and he gave the crew instructions to prepare the oxygen treatment. The only problem was that there was just one mask available between the two of us. I chose to wait until my buddy had finished her oxygen treatment: I had used nitrox during our dive while her tank only had regular air, and I was the one who had made the mistake that put her in danger. The responsibility was mine, so she should be treated first.

Afterwards we went over the incident with the guides and decided that since the dive had been so short it should be enough to skip one dive and to go no deeper than 8 meters during our next dive. All went well and I got to enjoy a very good week of Red Sea diving.

However, since that trip there has not been a week that I haven't thought about what happened and the mistake I made. Rest assured that the next time my buddy has a regulator malfunction I will remember to close the tank and exercise proper ascent control - and I will know how to prepare the oxygen mask in case of emergency, with or without the help of the boat crew.

Wednesday 25 May 2011

Of Information Architects and Business Architectures

Information Architects are somewhat of a rare breed: there are not too many of us around, at least compared to the number of other types of IT architects.

The term Information Architect was first coined by Richard Saul Wurman (as quoted in Wikipedia's article Information Architecture):
Wurman sees architecture as "used in the words architect of foreign policy. I mean architect as in the creating of systemic, structural, and orderly principles to make something work - the thoughtful making of either artifact, or idea, or policy that informs because it is clear".
It is safe to say that the meaning of the term has evolved since then, and today the way it is understood seems to vary depending on which branch of IT it is applied to. That being said, this is how I see and understand my role and position as an Information Architect.

On a very basic level IT architects can be divided into three high-level categories: Business Architects, Information Architects and Technology Architects. Although some might argue that Business Architects don't have all that much to do with IT, these three groups depend on each other: Business Architects begin by laying the foundations for sound business operations by defining the Business Architecture, which provides a stepping stone for the work of Information Architects, who in turn produce a high-level solution design to be implemented by various types of Technology Architects.

Briefly about Business Architecture
Object Management Group's Business Architecture Special Interest Group (BASIG) defines Business Architecture as follows:
A blueprint of the enterprise that provides a common understanding of the organisation and is used to align strategic objectives and tactical demands.
 BASIG further explains it in the Business Architecture Overview:
Business Architecture defines the structure of the enterprise in terms of its governance structure, business processes, and business information. In defining the structure of the enterprise, business architecture considers customers, finances, and the ever-changing market to align strategic goals and objectives with decisions regarding products and services; partners and suppliers; organization; capabilities; and key initiatives. 
Business Architecture primarily focuses on the business motivations, business operations and business analysis frameworks and related networks that link these aspects of the enterprise together. 
The key views of the business architecture are:

  1. Business Strategy view: the tactical and strategic goals the organisation strives to realise.
  2. Business Capabilities view: the primary business functions that define what exactly the organisation can do and how.
  3. Value Stream view: the set of end-to-end activities that delivers value to all stakeholders.
  4. Business Knowledge view: describes the shared semantics within an organisation and how they are related to each other.
  5. Organisational view: relationships among roles, capabilities and business units.

Obviously there is more to all of this, but suffice to say that a well-defined Business Architecture forms one cornerstone of a potentially successful business. It is also an important foundation for the Information Architect's work.

The role and position of Information Architect
Information Architects stand between Business Architects and Technology Architects, yet the division is not entirely exclusive: Information Architects benefit from having an understanding across all three domains. For example, Information Architects need to be able to understand and define business processes, at least as far as those processes need to be supported by IT services. On the technology side, Information Architects should have practical experience of software development and of various IT systems when coming up with possible (high-level) solutions for an organisation's needs.

So it can be said that Information Architecture begins with business processes and rules and leads to practical technical implementation, but between these two there is much that falls within the domain of information architecture. Most notable are the requirements specifications, which are mainly mapped out through discussions with various stakeholders (once they have been identified):

  • Business Requirements: these requirements describe how the system must benefit and support the organisation. In most cases business requirements include relevant business processes that the system must support, and various quantifiable (if at all possible) goals such as increase in income, decrease in expenses, more new customers, being able to perform certain tasks in less time than before and so on.

    The investment in time, money and resources are justified once the business requirements are fulfilled in production.

  • Functional Requirements: these requirements define what users must be able to do (or not do) and accomplish when using the system. Most commonly functional requirements are described by writing Use Cases along with actor descriptions. In addition functional requirements may include descriptions of algorithms, methods and models for data processing, and other essential details that relate to the way users are going to use the system.

  • Non-functional Requirements: these requirements define the quantifiable, qualitative metrics that are used to measure the system's performance. The possible metrics range from "x operations in y units of time" to high-availability demands (e.g. 99.95% uptime) to various compatibility, testability, robustness and many other requirements - just remember that each metric needs to be monitored, so too many is almost as bad as not enough. The key is to identify the metrics that have the most impact on how the business requirements are being fulfilled.

Requirements specifications are just one part of the Information Architect's domain. Everything an Information Architect does serves to increase knowledge and understanding of what the organisation's needs are and what is required from a system that is supposed to meet those needs. So in addition, Information Architects often involve themselves in activities such as risk analysis (I personally prefer to use Failure Mode and Effects Analysis), workshops and background research.

Finally, after the information has been gathered and analysed and the problem domain is properly understood, the Information Architect can put it all together and come up with a solution proposal, which often includes a high-level logical architecture modelled in UML. Initially there might be several alternative solution models, but if the information gathering and analysis have been done properly and the underlying business architecture is well understood, eventually one solution model will rise above the rest.

What is usually left outside the Information Architect's domain are the details of technical implementation such as the programming language, server environments, tools and the finer points of coding. That is not to say that the same person can't also handle the role of a Technical Architect, such as a Software Architect, Database Architect, System Architect or some other specialty.

In addition to everything else, an Information Architect must have good writing skills and be able to produce clear, systematic and concise documentation; get along with and understand the views of both business people and technical people; and balance the conflicting needs of various stakeholders, or at the very least be able to reach reasonable compromises if conflicts cannot be resolved altogether.

Oh, and have lots of patience and a good sense of humour. It definitely helps.

Saturday 30 April 2011

How to become an entrepreneur - The abbreviated guide

Step Three: Quit your old job with a steady pay, reasonable hours and a company health care.

Most experts would recommend not being obnoxious about handing in your notice, however tempting it sometimes might be. It is considered bad manners, which isn't good for business, and perhaps one day you will appreciate the same courtesy should one of your own employees decide to become an entrepreneur, too. Once the decision has been made and you are out of your old job, enjoy the freedom and happiness while it lasts; the real hard work is just about to begin.

If at all possible try completing Step Two before Step Three: Make sure you will have a sufficient cash flow to support your entrepreneurial aspirations.

For example, secure a work contract or get support from an investor. Life will be so much easier when you can purchase the necessary tools of the trade and perhaps even pay a little something to yourself and keep the family well fed, which can do wonders for any entrepreneur's morale.

Although many would-be entrepreneurs tend to skip Step One, it doesn't mean that you should, too: Come up with a good business idea, find one or more reliable partners you get along with well, and secure the support of your spouse (should you have one).

All entrepreneurs have strengths and weaknesses, so having partners can help to balance the negative while enhancing the positive. You should try to do the same for them, too. Another good thing to remember is that nobody can do everything alone, so a good core team of partners will be a cornerstone of success. Yes, and a good, realistic business idea is also a cornerstone - a solid house usually benefits from having several solid cornerstones.

Step Four: Be prepared to work hard, shoulder your responsibilities and never, ever give up without giving everything you've got and then some.

If Step Four sounds like too much, please reconsider whether or not you should go through with Step Three. There are many good reasons why most people never complete Step Three.


Coming up next: How to become a successful entrepreneur (still working on it...).

Thursday 28 April 2011

The Customer, the Team and the Company

I was once told that for a consultant the company must always come first, no matter what. I disagreed.

When I'm working on a customer project my priorities are clear: first comes the customer, next my team and then the company. These three are not mutually exclusive; in fact it is just the opposite: these three can, and indeed must, be inclusive. It's a triangle of mutual interaction.


Please stand by for the obvious.

The customer must come first because without the customer there is no project. The customer is the one who controls the project funding and, most importantly, the project's core purpose is to benefit the customer. The consultant must look after and protect the customer's interests: the consultant must earn the customer's trust and respect, since simply being a cheap (or expensive) expert in the field is not enough to establish a good working relationship.

Always do right by your customer; the "used-car salesman" attitude ~ i.e. bleed the customer dry by making them pay a premium for everything whenever possible while maximising company profits by using the cheapest available people and resources ~ should in my opinion be avoided, but then again I'm not a business/sales manager. Idealistic thinking or not, I believe that a mutual win-win arrangement over a longer period of time tends to benefit all parties more than the instant gratification of killing the goose that lays the golden eggs.

Take care of the team, because their practical work directly determines whether or not the project is ultimately a successful one. If the team members are overworked, burned out, pissed off, unmotivated, untrained and/or under-appreciated, it will show in many ways: their overall morale and attitude towards the customer, each other and other members of the company; the quality and pace of their work; their ability to deal with unexpected change and hardship; their ability to solve problems; their willingness to support team mates; and, generally speaking, their ability to take pride in the results of their work, just to mention a few.

On the other hand, a well-motivated team with good morale and up-to-date training - one that knows the value of its work, that isn't crumbling under an unreasonable workload, that is in control of its own work and that is all in all happy to come to work most mornings - can really get the job done and do it well. Personally, given the choice I would take a small team of motivated professionals over a larger team of tired techies any given day.

What about the company, then? A simple if not simplistic logic suggests that a happy customer and a happy team make a happy company: the team has work, the company gets paid and the customer gets their money's worth, which tends to place the company and the team high on the customer's short list the next time the customer needs a job done. This is not to say that the company's role ~ including all the other people directly or indirectly involved with the various phases of the project ~ as a mediator and an enabler isn't vitally important for the outcome of the project.


The point of all this? I don't know... that a business could benefit from a touch of ethics and humanity over selfish shareholder greed? That mutually beneficial long-term business relationships are preferable to short-term rip-offs, low-quality work and a use-and-toss-away approach to employee management? Or perhaps it is that in the end we are all just people trying to work together instead of resources waiting to be exploited? It might be all of the above, or it could just be the wistful rambling of an idealist.

The flip side? Of course it has to be said that not everything depends on how the team and the company perform during a customer project, as the customer has a key role to play too. While the company would like to maximise its profits, the customer would like to minimise its expenses, which is all perfectly understandable and part of good business. However, this often leads to a situation where the budget is unreasonably tight, allowing not enough time, people or resources to get the job done right. Sometimes customers hire consultants because they think they need their expert knowledge and then choose to ignore the expert advice given, with predictable consequences. It is a thankless moment to be the consultant saying to the customer "I told you so".

No project is a one-way street; a project's success depends on the co-operation and mutual respect of all involved parties (a healthy dose of common sense and a willingness to compromise often helps, too), among other things.

Tuesday 26 April 2011

Pointing out the obvious: motivation improves IQ test results and this applies to business... how?

The BBC published a news article yesterday about how motivation affects IQ test results. The article is short but brings up an interesting aspect of intelligence and intelligent people:
Getting a high score in an IQ test requires both high intelligence and competitive tendencies to motivate the test-taker to perform to the best of their ability.
http://www.bbc.co.uk/news/health-13156817

How does this reflect on everyday life, and on business in particular? Well, consider a situation seen all too often in working life: companies like to hire experienced, well-educated and intelligent people thinking that this leads to better project results, yet after a while the results aren't exactly great. Hiring good people simply isn't enough if the company can't keep those people motivated.

Well-motivated people drive themselves to go beyond good work and do excellent work. It takes motivation to keep pushing through problems and hardship until the job is done right, when others no longer feel like doing more. Most people with a healthy dose of common sense know this, so why are there companies and public organisations with poorly motivated staff?

It does not matter how experienced and intelligent a person is if that person lacks motivation. Without motivation, easy routine work can become soul-rotting forced labour, and problems that would otherwise be considered mere challenges to be conquered are rejected as impossible or at the very least unreasonable ordeals. In short, without proper motivation people who normally could and would do begin to give up, because there is no point and they don't care.

Motivating people does not need to be difficult or anything special, nor should it require expensive leadership training for the company management on how to be a better superior and leader (although that probably wouldn't be a bad thing). One can get far - in my opinion - simply by being human and treating others as such, too.

One might begin by not thinking of employees as resources but seeing them as people: persons with their individual hopes, dislikes and ambitions. One could try talking with them - not to them and certainly not at them - not from high up in the ivory tower but peer to peer, learning what they personally value in their work and in life in general, what values and principles guide their actions and how they would like to improve themselves professionally.

When they do good work, one could thank them and let them know their efforts are noted and appreciated. And when they occasionally fuck up, one might help them understand where and why the mistakes were made instead of aggravating the situation further by attacking them verbally and piling blame on top of guilt. Most people are genuinely sorry after they fail at something, and they do appreciate it when their colleagues and superiors offer support at their moment of self-doubt and need. They will remember this long after the difficult times are past them, and the lessons they learned may prove invaluable later in life.

Now, one might add to this the more common motivational tools in the form of a good salary (the business way of saying "we really appreciate you and your work, please don't go and work for the competitor"), bonuses (the business way of saying "thank you for helping us earn more money and become prosperous and respected") along with various perks of the job (the business way of saying "you take care of us and we take care of you"), and I think there might be some noticeable changes in people's attitudes.

However, this will have the desired effect only if done right. For example, if a person has no real personal control over a bonus, the bonus loses its ability to motivate: it is disheartening when your bonus depends on how other people have done their work, no matter how well you have done yours. Also, when company management decides which perks to offer employees, it might help if people are asked what they would find interesting instead of management simply deciding on their behalf what they should be doing.

But the most important thing, in my opinion, is to see employees as people and treat them as such. It takes so little, and yet it goes so far: mutual respect has a much stronger motivational effect than strict corporate hierarchies and the implied feudal "the lords shouldn't socialise with the serfs" attitude. A word of encouragement from a CEO who knows every employee at least by name has power far beyond the actions of a CEO who never talks with people below middle management.

And while money is important - hey, life is expensive and we all work for a living - I think it is even more important that people working in projects are encouraged to take ownership of their own work and to nurture their professional pride in work well done, and that we challenge*** them and acknowledge them in their successes or support them if they falter.

(*** please note: a death march project is not a challenge; it is a poorly managed financial sinkhole and a waste of collective life that merely motivates people to change jobs)

So, if a company has hired intelligent, experienced people and yet their performance and results are mediocre at best, it might be a good time to re-evaluate the management's motivational approach and how employees feel about coming to work every morning.

Monday 25 April 2011

A little bit about Nokia's fall and more about Clayton Christensen's The Innovator's Dilemma

Clayton M. Christensen wrote a book over ten years ago, The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. I first read it in 2004 as part of my studies at Tampere Polytechnic. Following the news about Nokia's demise I felt like reading it again.

The question Clayton Christensen presented is at the very core of any business: how do companies that were at the very top of their game eventually fail? These are companies that dominated their markets and were hailed for their good management, and yet, somehow, eventually dropped the ball in a very profound way. This very much reminds me of Nokia these days: Nokia used to be the leading innovator in mobile phones, utterly dominated the market, made billions, and yet is now in a desperate struggle against Apple and Google and will probably sack thousands following the co-operation deal with a former enemy, Microsoft.

A story I've heard a few times (I don't know how true it is) tells that Nokia had a working touch screen mobile phone some seven or eight years ago, but the management of the time did not think there was any call for it in the market. Oops. Around 2004 - about three years before the first iPhone - Nokia released the 7710 multimedia phone with a colour LCD touch screen, but because it did not become an instant commercial success it was quickly discontinued, and all the time and resources spent on developing the technology were essentially scrapped ... until Apple came along and proved Nokia wrong.

Consider reading the following article by Mikko-Pekka Heikkinen: http://www.howardforums.com/showthread.php/1679550-Knock-Knock-Nokia-s-Heavy-Fall...

But back to Christensen's Innovator's Dilemma.

Why do market-leading companies almost invariably fail when markets change? How come a company that at one time had its finger so well placed on the market's commercial pulse so easily flatlines in the next-generation market? Christensen writes that
Precisely because these firms listened to their customers, invested aggressively in new technologies that would provide their customers more and better products of the sort they wanted, and because they carefully studied market trends and systematically allocated investment capital to innovations that promised the best returns, they lost their positions of leadership.
Now that should give most managers pause. They were doing things exactly right and then, somehow, it all turned out to be completely wrong? Well, yes and no. No, because they did in fact do the right things in the given time and situation, and yes, because they failed to understand that their status quo would not be permanent. The decisions they made while everything was going so well prevented them from coming up with the next good thing.

Christensen talks about sustaining technologies and disruptive technologies. The former is essentially about the improved performance of established products that the current mainstream customers value. For example, ever faster CPUs or 3.5" hard drives with more and more storage capacity: the underlying technology is the same, it just gets more efficient and often eventually exceeds what the customers actually want and need.

The latter, disruptive technologies, typically offer worse product performance when they emerge compared with products in the mainstream market - so why invest the time and resources their development requires? Because of this: disruptive technologies are not based on the same value proposition as the mainstream technology but instead bring forth a completely different one. Initially disruptive technologies tend to be valued by a few fringe customers, and products based on them are often cheaper, simpler, smaller and, more importantly, more convenient to use, as Christensen puts it.

However, what is often overlooked is the fact that once a technology is established, the iterations of sustaining technologies quickly catch up with and then exceed what the markets and customers really need. Therefore a disruptive technology that underperforms today compared to the mainstream technology is likely to be performance-competitive in the same market tomorrow. For example, consider disk-based hard drives and solid-state drives a few years ago and again today.

Take a moment to think about the following quote from Christensen:
[The] conclusion by established companies that investing aggressively in disruptive technologies is not a rational financial decision for them to make, has three bases. First, disruptive products are simpler and cheaper; they generally promise lower margins, not greater profits. Second, disruptive technologies typically are first commercialized in emerging or insignificant markets. And third, leading firms' most profitable customers generally don't want, and indeed initially can't use, products based on disruptive technologies. By and large, a disruptive technology is initially embraced by the least profitable customers in the market. Hence, most companies with a practiced discipline of listening to their best customers and identifying new products that promise greater profitability and growth are rarely able to build a case for investing in disruptive technologies until it is too late.
In his book Christensen presents five laws or principles of disruptive technology, stating that "if managers can understand and harness these forces, rather than fight them, they can in fact succeed spectacularly when confronted with disruptive technological change". There are no simple answers, mind you; rather, the important thing is to attempt to understand the underlying point.

Principle #1: Companies Depend on Customers and Investors for Resources
In short, managers in successful companies tend to think that they control the flow of resources in their organisations, but in doing so they often seem to forget that those resources come from their customers and investors, and if the company does not produce what the customers and investors want, they will take their money elsewhere. Christensen says it well:
The highest-performing companies, in fact, are those that are best at this, that is, they have well-developed systems for killing ideas that their customers don't want. As a result, these companies find it very difficult to invest adequate resources in disruptive technologies - lower-margin opportunities that their customers don't want - until their customers want them. And by then it is too late.
Christensen points out that this makes a certain sense, as companies whose cost structures have been tailored for competing in high-end markets cannot be profitable in low-end markets as well. One possible solution? The creation of an independent organisation that can be profitable with the lower margins of the disruptive technology's emergent market. This way the company can establish a beachhead in the new market while still reaping rewards from the mainstream market.

Principle #2: Small Markets Don't Solve the Growth Needs of Large Companies
Disruptive technologies usually enable new markets to emerge instead of offering better solutions for current markets. It takes time for the new markets to mature, and once they have matured they tend to provide products and services that better suit the needs of the customers in the old market. At that point it is already too late for the big companies of the old market to transition to the new market and expect to maintain their market shares. In fact, they are lucky to even survive.

Why then do big companies fail to catch up with a new technology wave? After all, many of them initially started as small companies looking for their fortune in a new market. Much of this follows from the way companies measure success through growth: while a $40 million company can achieve 20% annual growth by coming up with $8 million worth of revenue growth, a $4 billion company requires $800 million to achieve the same 20% annual growth. Because no new market can provide this, it seems to follow that the bigger the company, the more difficult it becomes to see new growth potential in emerging markets.

Personally, I think big companies should see new markets as long-term investment opportunities, but this comes with risks that some managers just don't want to take. After all, waiting and seeing is so much safer. I also think that Google has figured this out, as it is constantly finding new growth in new markets; however, nothing lasts forever, so it is likely that Google's continuing growth already contains the seeds of its eventual downfall.

Principle #3: Markets that Don't Exist Can't Be Analyzed
Probably all business schools teach that proper market research and good planning based on known market attributes, followed by timely and determined execution, is the road to success. Christensen agrees that this is indeed how things should work - when dealing with sustaining technologies.

The problem with disruptive technologies is that their markets are only just emerging. They don't really exist yet and therefore cannot be evaluated in the same way as existing markets.
Companies whose investment processes demand quantification of market sizes and financial returns before they can enter a market get paralyzed or make serious mistakes when faced with disruptive technologies. They demand market data when none exists and make judgements based upon financial projections when neither revenues or costs can, in fact, be known. Using planning and marketing techniques that were developed to manage sustaining technologies in the very different context of disruptive ones is an exercise in flapping wings.
So a different approach is called for. Christensen proposes discovery-based planning, which assumes that forecasts are wrong rather than right, and that strategies based on those forecasts are therefore likely to be wrong as well. Instead, managers should "develop plans for learning what needs to be known" - a more flexible and realistic approach to mastering disruptive technologies.

Principle #4: An Organization's Capabilities Define Its Disabilities
Organisations are composed of the people who work in them, and yet Christensen points out that organisations have capabilities that exist independently of the people. First, there are processes, as in ways of working and producing things of value. Second, there are the organisation's values (which in my experience are not always the same as the ones found in PR materials), which the organisation's managers and employees use to decide how work should be prioritised.

While the people who work in an organisation can be very flexible, the processes and values usually are not. To a certain extent this makes sense, since a process is usually the practical result of trial and error and embodies many best practices that lead to effective work. However, this holds only within a certain context: Christensen points out that a process that is effective at managing the design of a minicomputer would be ineffective at managing the design of a desktop personal computer. Similarly, says Christensen, values that cause employees to prioritise projects to develop high-margin products cannot simultaneously accord priority to low-margin products.
The very processes and values that constitute an organization's capabilities in one context, define its disabilities in another context.
Principle #5: Technology Supply May Not Equal Market Demand
Disruptive technologies, though initially usable only in small markets remote from the mainstream, are disruptive because they can subsequently become fully performance-competitive within the mainstream market against established products.
This usually happens at a point when both the old and the new technology have exceeded, in performance and capabilities, what the customers actually need, so customers begin to make their purchase decisions based on other values: "the basis of product choice often evolves from functionality to reliability, then to convenience, and, ultimately, to price".

The point is that market leaders tend to keep improving their products over many iterations of sustaining technologies, believing that their superior products will keep the competition behind them. While doing this they fail to notice that they have overshot their original customers' needs, which can have an unexpected consequence in that "they create a vacuum at lower price points into which competitors employing disruptive technologies can enter".
Only those companies that carefully measure trends in how their mainstream customers use their products can catch the points at which the basis of competition will change in the markets they serve.
This was just a short introduction to Clayton M. Christensen's book, so I recommend that you read it if you found the topic interesting. The observations and lessons covered in the book are technology- and market-independent, so in our quest to come up with the next Nokia (or preferably several smaller ones) there is much we can learn from it.

Friday 15 April 2011

Thoughts about Data and Information

Question: What is information?

The way I see it, information is data relevant to a given context. In other words, if data can provide an answer to a question, it is information; otherwise it has very little practical value.

Consider an online service that is quietly collecting data about network traffic and user behaviour. Having oodles of data in log files or in a database does little good on its own. It is only after someone begins to ask context-specific questions that the data begins to acquire practical value by virtue of providing answers.

From this it follows that logging and other methods of gathering data may turn out to be a waste of time and resources (which often equals money) if they do not attempt to provide answers to questions. In other words, when designing a service one should also be mindful of e.g. what is being logged and why.

I've seen too many systems that have their logs disabled in production because of the amount of data they collect every day: constant writing to a log file consumes limited disk space and even slows down overall service performance. So when something goes wrong in production there are no records, because the logs were only ever used by developers and testers to debug the system before going live. And even when logs are enabled in production, it often turns out that the data that might be relevant is not being logged at all.

So before implementing data gathering of any kind, someone should think about what questions are most likely to be asked once the service is in production, and then consider what data would be relevant within that context.
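As a closing illustration, here is a minimal Java sketch of what that might look like in code (the question and the field names are hypothetical examples of my own): decide the question first - say, "which accounts see repeated failed logins?" - and then log exactly the data that answers it.

```java
import java.util.logging.Logger;

public class AuditLog {

    private static final Logger LOG = Logger.getLogger(AuditLog.class.getName());

    // Log only the fields the question needs: who and from where.
    // The logging framework adds the timestamp, so "when" comes for free.
    public void loginFailed(String username, String remoteIp) {
        LOG.warning(String.format("login_failed user=%s ip=%s", username, remoteIp));
    }
}
```

Anything logged beyond that should be able to justify itself with a question of its own.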