
Society and computing

Motivation

This document is about the Society that implements and uses programs. It complements:

  • Air transport, which describes the air transport system and illustrates well the trend toward the integration of human activities: air transport works because governments, airlines, airport authorities and numberless contractors work together permanently. They constantly exchange computer messages and they constantly trust each other, although these messages can represent huge amounts of money, send someone to jail and even kill innocents. These computer networks must be organized. They require planning and regulations. In this respect air transport shows the evolution from the invisible hand (if my neighbors and I eat bread, there will be a bakery in our neighborhood) toward the visible hand (even if we would use a service, we will not get it unless a set of parties agrees to work together and governments coordinate them).

  • Patent search, Business method and Examination, which present software and business method patents. The software industry is increasingly aware of patents but it is still difficult to see where we are going: (1) few cases go to court, (2) the patent owner must identify infringements, (3) the patent owner chooses whom she wants to sue (publisher, user, competitor, rich company or defenseless small company).

  • Software development, which presents algorithms and design methods.

I try to take an unbiased point of view to answer two questions:

  • How are things working?

  • Where are we going?

The subject is more difficult than that of the documents referred to above. I am not saying that it is more complex. The problem is that we can look at algorithms, methods, or even the integration of human activities from outside. We do not have this option with the Society or with law. We are definitely inside and, as Gödel's theorem suggests, we can only get a flawed representation. Law has to work, so considerable efforts are made to develop and teach it. Though it may change with the rise of the visible hand, so far we do not actually need to understand how the Society works because it self-organizes.

Because the subject is so large I do not discuss the points where I agree with the common analysis.

When I wrote this page I felt that I was infringing an implicit rule: if you do not present something in a positive way, it is criticism and you are supposed to propose a solution. Pessimism is perceived as a lack of education (I found a Web-based training titled "The Path from Pessimism to Optimism") or as a disease. But I think that, while compulsive optimism regarding things you control is good, compulsive optimism regarding things you do not control is foolish. In this document I talk about blind spots, and I think that compulsive optimism contributes to their creation.

Computer revolution

Computers do not change society that quickly. Like mechanization before it, computing is a fundamental and ubiquitous invention that

  1. modifies the environment, making some practices obsolete,

  2. facilitates ongoing changes and enables them to spread to an extent impossible to achieve without it, and

  3. triggers adaptations to its specificities.

Computers were invented after an unprecedented succession of major inventions and social changes. People did not learn to adapt faster to new inventions, but they learnt that it was vain to resist and that they even had to show that they quickly embraced the new thing. People tried to exorcize computers by emphasizing their technical side and ignoring their limitations, while refusing to abandon practices computers made obsolete. For instance, computers are more abstract than engines, which were themselves more abstract than horses. So people were massively educated. But the Society did not abandon Taylorism, whose principle was to specialize the workforce so that people could work efficiently with minimal training. So now people are both more trained and more specialized than ever. This is counterproductive, especially considering how short-lived some specializations are. It would be more logical to teach people a knowledge backbone that allows them to adapt quickly to change, and to ask them to perform two or more different tasks.

In the first chapter of “The Inmates Are Running the Asylum” Alan Cooper describes his painful experience with digital cameras and computerized clock-radios. He explains: “Finally, in 1992, I ceased all programming to devote one hundred percent of my time to helping other development firms make their products easier to use. And a wonderful thing happened! I immediately discovered that after I freed myself from the demands of programming, I saw for the first time how powerful and compelling those [user] demands were.” So far, so good: computer programming is a challenging exercise and Cooper presents a valuable way to improve the situation. Alas, he then comes to the conclusion: “The key to solving the problem is interaction design. We need a new class of professional interaction designers who design the way software behaves”. This is typical of the way problems are addressed in our Society.

Measures and checking

At least since the invention of agriculture there have been three accepted ways to earn money:

  1. Jobs whose results can be checked and measured

  2. Jobs whose results can be checked and measured with a delay

  3. Jobs whose results cannot be easily checked or measured

Examples

Type 1

Programmers, salesmen and farmers have type 1 jobs.

  1. To measure the production of a programmer, count the number of lines in her programs. To check the production of a programmer, count the bugs in her code.

  2. To measure the production of a salesman, count the orders. To check the salesman we can for instance count the unpaid invoices.

  3. To measure the production of a farmer, weigh the milk. To check the farmer we can for instance analyze the milk.

Type 2

Men and women of power usually have type 2 jobs. Because they make decisions, their results can be measured, but only with a delay.

Type 3

Everyone involved in control or abnormal-condition handling has a type 3 job. A customs officer cannot find more smugglers, and a fireman cannot extinguish more fires, than there are in her scope.

Remarks

There is no relationship between usefulness and measurement: a nurse, who has a type 3 job, can be more useful than a programmer. Yet no one wants to be told that she has a job whose result cannot be measured, because we were taught that result = performance.

There is no relationship between convenience and measurement either: a programmer can take breaks when she wants. An air traffic controller or a receptionist does not have this opportunity; she has to look carefully at the screen during precisely defined periods.

I am aware that the classification is very coarse. More substantial objections may be raised.

Objections

  • If we take the programmer example we can measure the number of lines of code and the number of bugs but we cannot measure the suitability of the code. There are not so many jobs that can be fully measured.

  • Once she understands how she is measured, a human optimizes her score: for instance the programmer adds blank lines and comments, which are cheaper to produce than instructions. The measurement is not very reliable (the sketch after this list makes the gap concrete).
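To make the objection concrete, here is a minimal sketch in Java (hypothetical names, nothing from the original text) that computes both a raw line count and a count that drops blank lines and line comments. The gap between the two numbers is exactly what a programmer can inflate:

    import java.nio.file.*;
    import java.util.List;

    // Minimal LOC meter: raw lines versus "effective" lines.
    public class LocMeter {
        public static void main(String[] args) throws Exception {
            List<String> lines = Files.readAllLines(Path.of(args[0]));
            long raw = lines.size();
            long effective = lines.stream()
                    .map(String::trim)
                    .filter(l -> !l.isEmpty())           // drop blank lines
                    .filter(l -> !l.startsWith("//"))    // drop line comments
                    .count();
            System.out.println("raw: " + raw + ", effective: " + effective);
        }
    }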

However I believe that this classification is valid. We routinely size type 3 activities in man-months or man-years. The society can accurately measure its needs in type 1 jobs. It cannot do the same for type 2 and 3 jobs, and actually has the number of type 2 and 3 jobs it can find and pay for. The most convenient jobs are type 3 jobs, but some type 3 jobs are not convenient. As a consequence the Society tends to have fewer workers with inconvenient type 3 jobs than needed and more workers with convenient type 3 jobs than needed.

Proposal

  • The number of type 2 and type 3 jobs grows at the expense of type 1 jobs.

  • Computers are used almost exclusively to eliminate type 1 activities.

  • Because people change jobs more often, type 2 jobs tend to become type 3 jobs.

Software industry

If computers had a strong impact on the Society, it would show in the way programs are developed. We can actually see that software development closely follows the same slow evolution as the rest of the Society.

60’s development

In the 60's the key person is the analyst. He (at that time it is almost always a man) interviews the people in charge of those who will use the program (interviewing only managers is not a problem then: managers still understand what their employees are doing). Then he comes back to his office and uses a special plastic ruler to draw diagrams. He gives his output to a programmer.

The programmer writes the program on special sheets with one square per character and gives the sheets to a typist who punches cards. Next an operator puts the cards in a card reader; the computer prints the job output, which the operator sends back to the programmer. This is a typical Taylorist process. None of the jobs - except the analyst's - is very demanding and everyone's skills are used at their best. The programmer, the typist and the operator have no professional reason for talking to each other, which is, as we now know, the main productivity benefit of the model. Everyone has a type 1 job. The model is fully compatible with the surrounding hierarchical Society: the operator and the typist are blue collars, the analyst wears a tie and is a white collar, and the programmer is somewhere between white and blue collar.

The model failed because two steps could not be formally described:

  • The analyst's documents left some room for interpretation

  • The analyst had to understand what the manager wanted

The surrounding hierarchical Society with its well-defined jobs suffered similar flaws. Only a few persons at the top of the hierarchy were entitled to monitor the market, identify opportunities and reorient the activity. These persons usually did not have enough time and motivation to pay attention to changes, which turned out to be fatal to many companies. This was neither a golden age nor hell. Remember that with its static organization this Society managed to send people to the Moon and to deliver TV sets, refrigerators and washing machines to citizens. There were as many innovations then as now.

Nowadays development

What a development must deliver is more complex now than in the 60's, but we have better tools (IDEs with symbolic debuggers...), better languages (object languages) and much faster computers. A typical application is bigger, manages more data, handles more users and implements a user interface. On the other hand, it does not implement more algorithms and has fewer hardware constraints (memory, speed) than a 60's application.

The Society answered to the increased complexity of applications with an increased specialization:

  1. A marketing team analyzes the market needs and specifies the product. Then it checks that the product conforms to the specification. (type 2)

  2. A development team designs and analyzes the server part of the product. (type 2)

  2. The same development team implements the server part of the product. (type 1)

  3. An operational research team designs the algorithms. (type 3)

  4. Database experts design the database and optimize the database requests. (type 1)

  5. Graphical designers design the user interface. (type 3)

  6. A development team analyzes the user interface. (type 2)

  6. The same development team implements the user interface. (type 1)

  7. Security specialists check the security of the product. (type 2/3)

  8. The legal department checks that the product conforms to laws and regulations, assesses the risk that the product infringes existing patents and helps file patent applications for the key innovations of the product. (type 2/3)

  9. A team tests the product. (type 2)

  10. A team creates the product documentation. (type 2)

  11. Internationalization impacts marketing, graphical designers, user interface developers and technical writers. An internationalization group may be needed to coordinate this effort and interface with translators. (type 3)

  12. A team puts the programs in production and operates the product. (type 1)

What Alan Cooper proposes is simply to add a thirteenth entry to the table for the interaction designers. There is nothing wrong with the idea: I agree that interaction design is different from user interface design. We can say the same thing about the other parties involved in the development. So what is wrong with this model?

  • By its size the list challenges human short-term memory (which holds five to nine items).

  • Among fourteen tasks only four are type 1 jobs. Among them, the two development tasks are the most time-consuming, which is normal because it is at the development stage that errors made earlier are fixed. Programmers have the dubious privilege of being measured and checked not only for their own mistakes but also for mistakes made by others. Because the development tasks are the most time-consuming and the least predictable, this is where productivity efforts are concentrated.

  • For clarity I do not mention budget-related tasks (financial department, cost studies), hiring (human resources)... To control schedule slippage we usually add methods, audits, steering committees and close control by the top management.

If computers were really changing the Society we would use proven programming models. In object programming, twelve object classes are nothing. An average project has more than a hundred object classes, but on average an object class interacts with only three to five other object classes, and its content is opaque to the other object classes, which know only a well-defined and limited list of strongly typed methods. What we would not dare to do in programming we routinely do in project management:

  • The project has a modular structure for production and a hierarchical structure for command.

  • In the object analogy, a team can expose an opaque, strongly typed interface to peer teams, but it must implement a general-purpose, hard-to-parse doVerb method for the management and for type 2 and 3 teams (see the sketch after this list).

  • In the same way a team can call the strongly typed methods of peer teams but there is no management method that it can call.
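To make the analogy concrete, here is a minimal sketch in Java with invented names (none of these interfaces come from a real project). Peer teams expose narrow, strongly typed contracts; the management channel is one catch-all method:

    class Requirements {}
    class Schema {}
    class Query {}

    // What a team exposes to its peer teams: few methods, strongly typed,
    // internals hidden, contracts checked by the compiler.
    interface DatabaseTeam {
        Schema designSchema(Requirements reqs);
        Query optimize(Query slowQuery);
    }

    // What a team must expose to the management and to type 2 and 3 teams:
    // one general-purpose method, untyped arguments, no compiler help,
    // every caller free to invent new verbs.
    interface ManagedTeam {
        Object doVerb(String verb, Object... args);
    }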

With all its flaws, 60's development had at least one strong point: people were responsible for and decided about what they were producing. My intent is to describe the way things work and to point out that the current organization is not perfect and therefore likely to change, not to propose a new organization. I must clarify one point: the idea that once a project is launched the top management should act as a regular team without special privileges is not influenced by Dilbert. The resulting organization would be more productive but it would not make the participants' life easier. To illustrate the point, here is the way it could work:

  • Teams belong to a resource pool.

  • When a project is launched, teams can bid to participate in the project. Contracting companies can also bid.

  • The top management selects the best offers. If we take the object analogy it instantiates and initializes the objects. Then as far as the project is concerned it acts as a regular team.

  • The different teams work together to achieve the project. Each team is responsible for its budget and resource management.

I saw something pretty close to that organization and I was impressed. When I moved to another company I proposed to implement the same system. The staff was horrified. They felt that they would lose control of their life and, while they had a pretty good opinion of themselves, they were also convinced that their team would never be selected. They thought that they owed their living to the good relationship they had with their hierarchy.

I would like to explain now why a typical project organization is counterproductive. First of all, this organization is a Tower of Babel. Because people are highly specialized in different domains they do not give the same meaning to words. It is not surprising that when they are asked to work together they do not understand each other. We come back to this issue in the Education section, where we show that when people are expected to give the same meaning to words (when they work in the same team) it is also harmful.

Secondly, participants are not focused on the project. They have in fact two jobs: performing their task on the project and managing their career. In a time of rising unemployment, career management is not only a question of self-interest; it is almost a question of life or death. The consequences are:

  1. People avoid taking decisions that could be criticized later.

  2. People collectively take premature decisions. When you have to take a decision alone you try to collect the information needed to take it. In a meeting, because everyone is sure not to be blamed for a mistake, if a decision is expected a decision will be taken.

  3. People protect themselves. The specifications contain hundreds of pages, describe all the features that can be found in similar existing products and include no innovation, because (1) the reader cannot tell the difference between a new idea and a proven concept and (2) by their very existence, proven concepts can certainly be implemented. The analysis documents contain thousands of pages with hundreds of UML diagrams and describe in great detail functions that can easily be implemented.

  4. People self-promote. Instead of informing their management, people serve their bosses the dish they think their bosses expect and appreciate.

  5. People self-restrict their choices. As a rule, before starting a design document or a feasibility study, they ask the management for the conclusion, then describe the requirements and constraints, and next describe a path that looks logical between the requirements and constraints and the conclusion.

Therefore:

  1. The project produces paper, slides and, with a delay and under favorable conditions, a product.

  2. The product is slow, too big and inconvenient, and it contains useless functions.

  3. Its user interface is magnificent but users need one or two training sessions before being able to use it.

  4. The product is not innovative.

The way projects are driven is usually ineffective, but there are probably cultural differences. I have worked mostly in France but from time to time also in Germany and the USA. I think that the last part of the description may depend on the country. Visit http://www.cyborlink.com/default.htm for an explanation of cultural differences by Geert Hofstede. This psychologist developed a model that identifies four primary dimensions to differentiate cultures. He later added a fifth dimension, Long-Term Orientation.

  • Power Distance Index (PDI) focuses on the degree of equality, or inequality, between people in the country's society. The higher the PDI, the more impact the management has and the more people self-promote and self-restrict their choices. When the PDI is high there are no more peer teams: type 1 jobs are at the bottom, type 2 jobs at the top and type 3 jobs in the middle.

  • Individualism (IDV) focuses on the degree the society reinforces individual or collective achievement and interpersonal relationships. In societies with a low level of individualism any organization works; the current development model can give not-so-bad results.

  • Masculinity (MAS) focuses on the degree the society reinforces, or does not reinforce, the traditional masculine work role model of male achievement, control, and power. Masculinity may be a safeguard: the model of male achievement may lead to avoiding failures.

  • Uncertainty Avoidance Index (UAI) focuses on the level of tolerance for uncertainty and ambiguity within the society. When the UAI is high people take only collective decisions and carefully protect themselves.

  • Long-Term Orientation (LTO) focuses on the degree the society embraces, or does not embrace, long-term devotion to traditional, forward thinking values. A high LTO, like a low IDV, helps the organization work anyway. With a low LTO nobody makes a difference between a type 2 and a type 3 job.

Education

So far I have assumed that participants act rationally though influenced by cultural factors. However the organization also exercises an influence on participants, creating blind spots, as shown by the existence of obvious choices that no participant seems to see.

The human brain creates representations of the world that explain its inputs. Each time the brain receives a new input it checks whether the representation also explains it. If not, the brain looks for another representation. Obviously the brain maintains different representations: a representation of the universe does not help to define, for instance, a user interface. You actually cut relevant slices in the observable world and you only handle inputs occurring in these slices to build slice representations.

To communicate, people do not need to share the same representations; they need to share slices. It is only when slices have roughly the same borders that representations can converge and people can correctly interpret what others are saying or writing. We have all been in situations where we tried to find which slice - call it a perspective - the person in front of us was using, and we were uncomfortable when we met someone who was using a slice we did not know.

Education provides:

  • Predefined representations. The Earth turns around the Sun.

  • Predefined representation taxonomies. Mathematics is not physics is not computer language...

  • Rules to slice the observable world

The company culture further refines the world slicing.

Our organization is very demanding in terms of communications. Therefore it is also very demanding in terms of world slicing. We sometimes need to tune the slices, which raises two issues:

  • We put people who use different slices at a competitive disadvantage. It used to be a gift; now it is almost a handicap: we marginalize the brothers of those who invented fire and the wheel.

  • With our small set of slices we ignore what is happening outside. We miss opportunities.

Computerized devices

Alan Cooper rightfully points out the defects of digital cameras and computerized clock-radios. However such inconsistencies are common even when computers are not involved. Let's assume that you frequently fly for business:

  1. At check-in you are welcomed by a hostess - you are an airline customer

  2. Then you pass the control - you are a potential terrorist

  3. Then you go to the lounge - you are a privileged customer

  4. Your plane is not parked at the terminal and you are packed in a bus for half an hour - you are freight

  5. Once in the plane you are again a customer

The 90's VCR may be more interesting than digital cameras. Besides its cumbersome interface, the VCR had a clock display that showed a blinking 00:00:00 no visitor could miss. Some owners spent hours setting the device, not so much to record films as simply not to look stupid. The interface was cumbersome but required no special knowledge or gifts. It gave the user a deceptive feeling of achievement. Aircraft boarding offers the same benefits: it is a test where you have the opportunity to score well and to show how familiar you are with the subject.

Alan Cooper presents the classical analysis: computers create computer illiterates and computer apologists. However, from a social point of view, what matters most is what happens between the extremes. The nerd is presented as immature, selfish and insane, and the computer illiterate as an idiot doomed to unemployment by the technology revolution. Computers help the people in the middle to compete. A poor interface that presents no conceptual difficulty is (1) the easiest to develop and (2) the interface that best serves differentiation needs: almost everyone will be able to use it, but the one who easily catches a piece of information at the opposite side of the screen or memorizes a dozen key sequences is more productive.

I am not saying that users want complex interfaces just to show their special talents. Everyone is challenged by changes and needs to be reassured. Life is like the Tour de France. In the Tour de France there is a van called "la voiture balai". When the "voiture balai" catches up with a racing cyclist, he has to give up and, supreme humiliation, get into the van with his bicycle. Obviously the "voiture balai" is tuned not to catch all cyclists; otherwise the race would stop. People compete in the same way as average cyclists, to escape the "voiture balai" as well as to help their leader.

Trust

I wrote that computers do not change society that much. There is one case where the impact is significant: in computers we trust, whereas we trust each other less and less. We are surprised to learn that on a given market a handshake is enough to strike a bargain. At the same time the scenario below surprises nobody:

  • Two companies sign a partnership contract

  • Their network administrators set up a VPN between the two companies

  • Then every day both companies run thousands of transactions on the partner site. No control, almost no audit.

The partnership contract does not replace the handshake; the transaction does. The camel dealer who shakes hands abides by his word because he wants to stay on a market where he was accepted only after his credentials had been thoroughly checked.

Even in daily life we trust computers more than people. We never challenge the program design or the data reliability. That is generally right for orders, invoices and pay slips, but not necessarily for anything that handles user feedback or is supposed to support a decision. Obviously silicon does not lie and has no self-interest, but it is also unable to correct design and data biases. We seem to believe that errors counterbalance each other when we use more programmers and more data.

Free computing

Marginal cost

The marginal cost of computing and networking - hardware and software - is now close to zero. This is a remarkable result, unique in the history of mankind. Agriculture allowed supplying a quantity of food limited only by the number of arms. The industrial revolution cut transportation and production costs. But the marginal cost remained significant in both cases. The marginal cost is the difference between the production cost of n + 1 units and the production cost of n units. Building a new processor factory may require huge investments, but producing one more processor requires almost no raw material, energy or workforce. And copying or downloading a program is almost free.
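In symbols (a sketch of the definition above, with C(n) the total cost of producing n units):

    MC(n) = C(n + 1) - C(n)

For software, C(n) is almost entirely the fixed development cost F, independent of n, so MC(n) is close to zero: the second copy costs nothing.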

These properties were latent when computers were huge machines that required special air conditioning and plenty of electricity and were produced by the tens or hundreds. In the last decade computers have been produced by the millions. Let's now focus on software.

It is interesting to note that when the marginal cost is close to zero, even corporations sell at a deep discount when the good is perishable. I explained in Air transport how this works for airfares. The cost of flying a full plane is almost the same as flying the same plane with an empty seat. Even the smallest contribution is worthwhile because it is pure profit. However, if all passengers flew at this fare the airline could not be profitable, and because the seat supply is limited, people who book later would not be able to board. Therefore airlines use a system called revenue management to predict bookings and keep seats for passengers who need to book late and accept to pay a premium fare.
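As an illustration, here is a hypothetical sketch in Java of the simplest revenue-management heuristic, Littlewood's rule, assuming a Poisson model of premium demand (the fares, capacity and demand figures are invented): keep protecting one more seat for the premium fare as long as the expected premium revenue of that seat beats the discount fare.

    public class Littlewood {

        // P(premium demand > s), assuming Poisson demand with the given mean.
        static double probDemandExceeds(double meanDemand, int s) {
            double pmf = Math.exp(-meanDemand);   // P(demand = 0)
            double cdf = pmf;
            for (int k = 1; k <= s; k++) {
                pmf *= meanDemand / k;            // P(demand = k), computed incrementally
                cdf += pmf;
            }
            return 1.0 - cdf;
        }

        // Number of seats to protect for late, premium-fare bookings.
        static int protectionLevel(double premiumFare, double discountFare,
                                   double meanPremiumDemand, int capacity) {
            int s = 0;
            while (s < capacity
                   && premiumFare * probDemandExceeds(meanPremiumDemand, s) >= discountFare) {
                s++;
            }
            return s;
        }

        public static void main(String[] args) {
            // 100-seat plane, 800 vs 200 fares, about 20 premium requests expected:
            // discount bookings close once only the protected seats remain.
            System.out.println(protectionLevel(800.0, 200.0, 20.0, 100));
        }
    }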

Obviously programs are not a perishable good and can be sold in any number. But there is another lesson to draw from the air industry. Challengers never stop entering a market with simpler organizations and cheaper products when (1) the marginal cost is close to zero and (2) the market is technically stable: there were no low-cost carriers when airlines had to swap their propeller aircraft for jets. Most challengers fail and most established companies survive thanks to their larger networks, but prices go down and the whole industry has to improve its processes.

Some types of programs (operating systems, relational databases and browsers) are now mature. Commercial publishers cannot significantly improve mature products, whereas Open Source publishers, which started later with fewer resources, can still enhance their equivalents.

Prototype business

At the end of the 80's one common sentence was: "We want to industrialize our development." Experts noted that programs were never released on time and contained many bugs when they were released. The persons held responsible were the programmers: instead of focusing on business needs, they preferred to write complex and brilliant programs. The solution was to use industrial methods:

  • Use PERT charts to identify critical paths and monitor project progress

  • Design first and then program

  • Implement quality procedures to ensure that software was bug-free

Though I agree that it allowed bigger projects, I believe that this cathedral approach failed to improve software quality and increased the development cost. Projects still experienced delays. Despite its failures the experts never gave up, and the cathedral approach is still used today.

Someone wrote: "There is the known and the unknown and in between are the doors." The only function fulfilled by the cathedral approach is this door function: door between programmers and top management, door between the programmers and the rest of the company. Here a door is more than an interface. It is an adapter that makes the requests of top management and the purpose of the company intelligible to the programmers, and the project intelligible to top management. The trouble with the cathedral approach is that it also tries to achieve other purposes like quality or on-time delivery.

Uncertainty

Of course predictable development costs and on-time delivery are desirable characteristics, useful to make the right choices and to plan the budget. The problem is that software was born with a perfect production system. Once a program has been tested it can be run as many times and on as many machines as needed. The process of writing the program is research, development and prototyping. These tasks imply uncertainty. Promising ideas turn out to be impractical. Things do not go as expected. With huge budgets and a management highly concerned by these issues, cars and rockets are still released with delays and their first versions still contain many defects. There is no hope of doing better with programs.

But you can consider the problem from another point of view. You can know from the beginning what a product will be made of and how much it will cost only when this product is a clone of an existing product. If the existing product belongs to you, it can make sense to reduce maintenance costs and to get a version easier to extend and customize. In other cases you expose yourself to patent infringement, and at the very best you will just release the same thing as your competitor one or two years later. You must accept uncertainty because this uncertainty is the downside of the chance you have to get something better than your competitors. Some managers disagree: "For us IT is a tool; it is not our core business." This is a perfectly valid point of view, but in this case do not write programs, do not even run them: sign a facility management contract.

Drilling an oil well is extremely expensive and, despite research, measurements and computations, oil companies are never sure to find oil. They accept this fact because when they find oil they make a lot of money. This is the kind of attitude we should have in program development. A high risk is perfectly acceptable if the potential benefit is worth the try.

Quality

Programming is both a relatively demanding job in terms of competence and a job where competence and performance (the capability to quickly write fast and almost bug-free code) are not well correlated. Education and training give competence. Managers look for competence and experience. Performance, which cannot easily be assessed upfront and does not help to communicate, is nonetheless the only aspect that can be measured and checked. In a team whose programmers have the same competences, one programmer may have a performance ten or more times higher than the others, and this performance is not strongly correlated to experience. This lack of correlation between competence and performance is not unique. It is the same for art and handicraft.

Someone can take a picture in a couple of seconds, write a song in two hours or a novel in six months, but there is a limit to the size and usefulness of the program a programmer can write in a given amount of time. Therefore a programmer cannot easily run her own business like an artist or a craftsman.

The need for a combined effort of people having sizable skills, experience and performance is not unique in history. Before the industrial revolution it was common. Quite interestingly, the craftsmen who built the cathedrals in the Middle Ages had common points with programmers:

  • They were well paid (we know that because their employers recorded their expenses)

  • They were citizens (not so common then) but second-class citizens. No one remembers their names, whereas the names of nobles, priests and lawyers were recorded.

Software quality and not-so-delayed delivery primarily depend on top performers, who may represent ten percent of the programmers. Partly because their status is not recognized, they can be hard to manage and they may see other top performers as rivals.

However, according to the cathedral principles, quality is the result of quality procedures, commitment and controls. In the same way, on-time delivery is possible if (1) the project is properly defined in steps with the right dependencies and (2) the project is monitored with charts where a step is green if it is going well, yellow if it is late and red if it is very late. Thanks to the charts, the management has a clear understanding of the project and can identify red steps early and take proper action.

Quality stinks: programmers have to read procedures whose obviousness is equaled only by the Little Red Book, and then to show their commitment to quality. During big meetings, speakers show the spread of quality in the company. In that respect there is no substantial difference between quality and dictatorship methods.

During the 60's, "laws" were established during the development of a supercomputer called STAR. One of them said that we should never say that we completed 90% of a task: when we think that we have made 90% of the work we may actually have completed 50% of it. Therefore a step has only two states, completed and not completed. Furthermore, the project cannot progress in parallel on more steps than it has top performers. As a consequence there is usually no relationship between project charts and what actually happens on the project.
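A small sketch in the same spirit (hypothetical names): the three chart colors suggest a continuum of progress, while the STAR law keeps only the two honest states.

    enum ChartColor { GREEN, YELLOW, RED }          // what the steering committee sees

    enum StarState { COMPLETED, NOT_COMPLETED }     // what is actually knowable

    class Step {
        final String name;
        final int reportedPercent;                  // self-reported, e.g. 90

        Step(String name, int reportedPercent) {
            this.name = name;
            this.reportedPercent = reportedPercent;
        }

        // Per the STAR law, anything short of done is not completed:
        // a reported 90% may really be 50%.
        StarState starState() {
            return reportedPercent >= 100 ? StarState.COMPLETED
                                          : StarState.NOT_COMPLETED;
        }
    }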

I am not saying that the theories behind quality and project management have no merit. They proved their value outside IT. I am not even saying that these theories could not have an application in development, though they were never designed to address prototyping needs. What I say is that, as far as development is concerned, quality and project management have only a "door" function. Otherwise quality and project management would also be checked, at least in post-mortem analyses.

Type 3 jobs

Quality, project management, product specification and analysis have grown in an uncontrolled way, creating many type 2 or type 3 jobs. Some of these jobs could have contributed to projects, but because they could not be checked or measured, these jobs did not require being knowledgeable but looking knowledgeable. In the latter case they were completely useless. For one top performer we have ten programmers and ten type 3 jobs. Those twenty persons require five managers and team leaders. The twenty-five persons require space, network and other facilities that require other skills. Now we are probably talking about thirty persons for a single top performer, and it is still not the whole story. A commercial company must:

  • create a brand;

  • promote the product;

  • distribute the product;

  • support the product;

  • make market studies, run customer satisfaction surveys;

  • show investors its plans for the expansion of the product;

  • show investors that the product grows smoothly from "new and promising" to "cash cow".

Now we are talking about a capital-hungry business with huge fixed costs, where top performers represent less than one percent of the workforce. Employees care about their position and career, not much about the customers and almost never about the users. Management does not try to plan beyond the next quarter. This explains why a healthy Open Source business without significant revenues and funding can coexist with commercial development.

In Open Source developers are self-proclaimed. Who will dare to say "I have something to show", except a top performer with a purpose? You need to believe in an idea. Other key factors that contribute to the efficiency of the Open Source model are small team size, peer control and resource shortage.

I am skeptical about the bazaar model. Open Source projects use iterative development but external contributions are rare. Overall the Open Source development model is not so different from the cathedral model:

  1. Marketing: who will use the product and which problem does it address?

  2. Product definition.

  3. Design.

  4. Analysis.

  5. Programming and unit testing.

  6. Block testing.

  7. Security audit.

  8. Legal aspects: you have to choose a license and to check close commercial implementations. When your product is published on a CD-ROM the publisher asks you to confirm that you own the rights to the product...

  9. Documentation.

  10. Internationalization.

Steps 5 to 10 are almost the same in Open Source development as in commercial development. But product definition, design and analysis documents are written with a purpose. Different options are considered and only aspects requiring a decision or needed for the development are documented: (1) why waste time describing something obvious or that can or should be decided later? (2) why waste time browsing a hundred or more pages to find only one or two topics of interest? A document that only describes these topics is cheaper to write and easier to read.

The Open Source model is a Darwinian model: many projects are started; few projects succeed. Open Source development does not require large amounts of money, which allows starting many projects and allows projects to survive even if they initially do not attract many users. Because their license allows it, a project can be started by a team, given up and restarted by another team. This was the case for PHP. You can find many examples on SourceForge. The model also self-regulates: if a project looks promising, other members join the team. If the project fails, members leave the team.

Therefore Open Source development is more efficient than commercial development. We have some evidence:

  • Richard Stallman wrote the first versions of Emacs and gcc alone. How much does it cost a commercial publisher to develop a compiler?

  • Commercial products now embed a significant amount of Open Source. A commercial application server may include Apache, JServ, Xerces, Xalan and Struts. In such cases Open Source provides key functions of the sold product.

In a stable market, in an Internet environment ruled by search engines, users are more mature and no longer need the visits of "evangelists". Users who program want to invest in products that have successfully passed the test of micro-projects, not in products that they bought without knowing what they contained. They dislike do-it-all toolboxes and prefer to select standard-conformant optional products when they need them. Users who do not program want ready-to-run solutions, not solutions that require three months of training, six months of consulting and one year of customization. All users are ready to pay for support that solves problems, not for support inferior to Open Source mailing lists. Sure of their market power, commercial publishers have not paid enough attention to the rise of low-cost alternatives and routinely fail to provide the right resources and skills at the right place.

Therefore there are good reasons to think that:

  • Networks, computers and programs will become a commodity as cheap as water or electricity.

  • Providers will combine programs mostly developed in Open Source and package them for end consumers.

  • Users will prefer products with sources, not because they want to read sources but because they want to decide when they migrate to another version and because they want to choose additional components on the shelf.

Future

Society

" This is the end

Beautiful friend

This is the end

My only friend, the end

Of our elaborate plans, the end

Of everything that stands, the end

No safety or surprise, the end

I'll never look into your eyes...again"

We did not fully appreciate the impact of some inventions like the Internet. We put in place increasingly ineffective organizations, with a rise in jobs whose contribution cannot be measured; ironically this is especially true in software development. We inflated a bubble of type 3 jobs. If this bubble bursts we could get massive unemployment and deflation. This catastrophic scenario is unlikely to happen, social changes taking years even in case of revolution. Therefore we will only tend toward more unemployment, deflation and other inconveniences, in a smoother way.

Why

If it becomes an almost free commodity, computing will not be able to create or keep a large number of jobs. A low-cost ubiquitous service only needs half a percent of a country's workforce once it is stable. Computing will continue to grow fast, boosted by its low cost and by the fact that computers are the only thing we still trust. Therefore computing should require between one and one and a half percent of a country's workforce, which implies that large numbers of relatively well-paid persons (mostly type 3 workers) will be laid off.

IT represents a significant share of company spending. If IT expenditure goes down, in a competitive world prices should drop. Because productivity gains come mostly from increased computerization in the visible hand way, prices should drop even more. The trend to cut type 3 jobs should extend to the whole service sector, creating even more unemployment and further reducing prices. We should get deflation.

Job losses have economic consequences, for instance the fall of government income. Deflation should also hurt indebted countries. The human consequences are also hard to predict. Most of the laid-off people will have had type 3 jobs - the perfect position from which to lose one's bearings. They will have mastered the VCR, digital cameras and PDAs. They will have been examples of moderately successful lives, only to eventually wear an "Obsolete" notice. It is really tough to imagine the implications.

How

The how is interesting because it shows my assumptions:

  1. Cultural. In some cultures people more easily accept to give up their experience asset.

  2. Technical. I assume that computer-related changes will continue to nullify pre-acquired experience.

  3. Early adopter disadvantage. Note that a perceived early adopter disadvantage is enough for the assumption to be valid.

Assumption 3 is the most questionable. I explain below why it is valid today.

I met managers who were caught in this dilemma:

  1. They had a hundred employees. Three years ago it was just enough, but now the employees are more productive because they are more experienced and have learnt to work together. The managers only need eighty employees to do the same work. They are budget-constrained but also under strong pressure to improve their processes.

  2. They can fire twenty employees.

  3. They can reengineer their process but then they need to retrain the employees, to hire twenty other employees and to take the risk of a failure.

  4. They can change nothing and keep their extra twenty employees busy with studies, control and quality. These employees enter the great family of type 3 workers.

Until the day they are asked to cut their workforce, they choose option 4.

Note in this simple example another fascinating blind spot of our Society: experience. No one would dare to hire someone without some kind of experience, so when a team is created it only contains experienced people. Maybe, but two years later the team members:

  1. Have improved their technical skills

  2. Have the details of the project in mind

  3. Have learnt to work together faster.

This is the well-known experience curve. Experience is a sort of non-monetary benefit: it is created by the organization but it belongs to the individual. No worker wants her experience, a painfully acquired asset, to be challenged. It is false to say that people do not like change. They like change as long as it comes on top of their experience. In that respect, today's computer-related changes are terrible: a trainee who was taught the new thing is usually more productive than someone experienced in the old thing.

A company is just a set of departments where this dilemma happens, and each department iterates the process every three years. Some people succeed in learning something new and keeping a type 1 job, but most employees move to a type 3 job and combine an obsolete technical experience with a growing experience of the company culture. In a communication-intensive society, cultural experience is a huge competitive advantage, and type 3 job uselessness is undetectable precisely because its result cannot be measured or checked. Therefore type 3 workers can usually avoid reliving the painful quest for technical experience, and when it has to start a new project a company has to hire new employees with a first experience of the new technology.

The management also has this dual experience: a cultural experience useful to manage people and a technical experience useful to take decisions. Obviously this technical experience is not about programming details but rather about models, needed to (1) understand what is needed or feasible and (2) understand the explanations of experts. An important aspect of computer-related changes is that they introduce new models in development, operations, support... and therefore also nullify the technical experience of the management. When it is unable to decide, the management organizes collective decisions. Though it is true that when people participate in a decision they are more involved in its implementation, the real motivations are:

  1. The hope that an optimal decision will come from the discussion between people knowing together all aspects of the issue

  2. Responsibility dilution

The discussion is only good for eliminating some really bad options. First, people need to feel responsible to act responsibly. Second, when you take a decision alone you take your time and you can be honest with yourself about the options and your doubts. A collective decision is a social exercise where it is of paramount importance to look clever, concerned or daring, depending on your role in the group. A collective decision is also a compromise between the interested parties, with market needs and technical constraints remaining in the background. Therefore a collective decision is highly predictable, never original and sometimes suicidal.

We can easily see that even if a project is its last chance to survive, the company can only mobilize the newcomers and the couple of type 1 workers who moved to the new technology. The company partners with smaller companies, not because this is cheaper but because it does not have enough skilled employees. Because its competitors started at the same time and have the same history, the company usually stays competitive. It works like a supertanker: ridiculously underpowered but still able to move thanks to its size. The company can survive if it has a significant market power and if there are barriers to new entrants on its market.

I talk about barriers to new entrants in a later section. Market power gives time to take corrective action when the company no longer has a competitive product. The most effective weapon of market power is Fear, Uncertainty and Doubt (FUD). Roger Irwin summarizes FUD with this sentence: "Hey, it could be risky going down that road, stick with us and you are with the crowd. Our next soon-to-be-released version will be better than that anyway." Companies with market power use FUD to reinforce what customers already think.

The problem is that barriers and FUD stem the tide of better products. When this dam breaks, even the biggest companies suffer. It is what happened to IBM at the end of the 80's with the PS/2 and OS/2. IBM, then omnipotent, announced on the same day a new hardware platform, the PS/2, incompatible with existing PCs, and a new operating system, OS/2. The PS/2, which did not require OS/2, was ready; OS/2, which did not require the PS/2, was not; and the market said no to both. IBM had lost its biggest asset, a twenty-year-old invincibility, and had to struggle to survive. IBM was a forerunner:

  • Though the success of the PS/2 and OS/2 was key to the development of the company, IBM was not able to mobilize all its resources as it did for the 360 computer twenty years before. IBM even had to partner with a smaller company, Microsoft.

  • In IBM decisions were collective. The consequences were delays, compromises and ultimately bad decisions.

Today most companies also partner with smaller companies to develop products, take collective decisions, and make mistakes. But because they are weaker than IBM, the mistakes are often fatal. New companies with simpler and more focused organizations can hire part of the laid-off employees, in the same way as already happened in the air industry. The software industry can quickly adjust to the almost free nature of computing. This is not to say that all companies that fail to adapt will disappear. This is like an epidemic: most companies are sick, only the strongest will survive, and new companies more resistant to the type 3 virus will partly replace the dead ones. It would not be that bad without another factor: in recent years innovators have failed to reward early adopters. This is true for end consumers. This is even truer when the customer is a company.

You can easily see that:

  1. An innovation is always released too early. At that point there is only one expensive source, the innovation is not reliable, and nobody knows how to use it. The customer has to test the innovation and to find out the best way of using it.

  2. Once the concept is proven, the innovation has many sources and is cheaper. In the case of software some implementations may even be free. You can find information (books, articles, explanations, examples...) from multiple sources about how to use the innovation. You can even hire people with a first experience of the innovation.

It has always been like that. However, where there used to be decades between an innovation and the next version, there is now barely one year. To illustrate this point we can look at the story of the steam engine. For two thousand years the mining industry faced a problem: water had to be pumped out of mines. A first solution was found in the Middle Ages, with millwheels driving pumps. This solution was reliable but required a waterway. Newcomen released the first useful steam engine at the beginning of the eighteenth century. It was big, wasteful and inefficient, but still better than nothing. For early adopters it was an obvious choice: they had to wait sixty years for the next version, released by Watt.

Consider the poor case of early adoption today:

  • The early adopter can only dedicate limited resources to use the innovation.

  • The early adopter has to acquire experience and to find out the best way of using the innovation, which does not fit the cathedral development model. Furthermore, the early adopter must spend a significant part of its resources just circumventing the innovation's problems. Therefore the early adopter usually cannot release a product using the innovation before the release of the innovation's next version.

  • Because the innovation is by then better understood and its problems are fixed, the early adopter must reengineer its product to include the improvements and make it easier to maintain. Almost all the effort made before is wasted.

  • Early adoption would still be a good idea if the innovation gave a compelling advantage over the competition. It is usually not the case, because of market power. The marketing teams of competitors have piled up a huge experience of coping with the common situation where their product is inferior. They use the FUD sentence: "Our next soon-to-be-released version will be better than that anyway." They buy the time that their development teams need to create a copy.

Companies watch each other and just repeat the moves of the one that dares to do something. This is the safest attitude. They think that in the long run the winner is not the early adopter but the one that better markets and produces its products. This view is well described in Michael Porter's big ideas. Therefore there is almost no market left for innovation. This is not a new problem, and this is the reason why patents and copyrights were invented.

Patents

If an object from a provider B looks the same as an original object from a provider A, then B infringes the copyright of A. But copyright does not prevent a company from making a product using the innovations found in another product. If innovators are not rewarded for their inventions, they will stop inventing. Patents exist to grant inventors "the right to exclude others from making, using, offering for sale, or selling" inventions. The American Constitution says: "Congress shall have power ... to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries."

The application of this patent principle has always been difficult. Granting too wide a scope to a patent would go against the patent's goal, which is to promote progress. You may believe that Newcomen just had to file a patent to get a fee each time a steam engine was built. It was not the case, because the English patent office had earlier granted a patent to Thomas Savery for any engine based on the use of fire. [Savery's engine was unusable. This was a deadly embrace: with his patent, Savery owned the way to get money from users but had no potential users; Newcomen had potential users but no way to get money from them. Newcomen and Savery wisely chose to associate to build and license the engine.] You can see that an important task of the patent office is to limit the scope of patents. Morse had the intuition that a wire was not needed to send a signal. However, the US patent office asked him to include the wire in his telegraph patent. It must also be possible to patent an improvement of an invention. Let's come back to our steam engine. The story did not stop with Newcomen. Most people name Watt when they are asked who invented the steam engine. Watt actually filed patents improving the Newcomen engine. Watt's contributions changed the steam engine from something good enough to pump water into a dependable, general-purpose power supply.

A patent aims to grant the inventor a monopoly on her invention sufficient to reward her effort but not strong enough to prevent competition. The customer has two options: (1) pay a license fee to the inventor or (2) not use the process described in the patent. The customer will accept to pay only if the invention is clearly better than anything she can design to serve the same function. In the case of the steam engine this is simple: the function is to provide power and the means that can be patented is the steam. Now, and especially for software patents, it is not easy to determine what is the function and what is the means. Consider the case of the spreadsheet: is its function to perform computations, or to perform computations on the rows and columns of a table? Recently the author reviewed a product specification (the document that describes the function of the product) and found that this specification potentially infringed five patents. It happens because the difference between the function and the means tends to change over time. A successful solution shifts customer expectations: what was a means in the first product becomes a function.

The authors of software patents aim to protect what could become a function, for at least two reasons: (1) in programming it is very difficult to find a significantly cheaper or more effective means, (2) it is easy to identify companies that infringe a function and almost impossible to identify companies that infringe a means. A programmer would not consider most software patents a sufficient basis for a development. Such patents do not satisfy the enablement criterion, which states that a person of the art should be able to recreate the object of the patent from its description.

This is possible because in the twentieth century courts loosened the patentability conditions:

  • New or Novel: The invention must be demonstrably different from publicly available ideas, inventions, or products (so-called "prior art"). This does not mean that every aspect of an invention must be novel. For example, new uses of known processes or machines are patentable. Incremental improvements on known processes may also be patentable.

  • Useful: The invention must have some application or utility or be an improvement over existing products and/or techniques.

  • Non-obvious: The invention cannot be obvious to a person of "ordinary skill" in the field; non-obviousness usually is demonstrated by showing that practicing the invention yields surprising, unexpected results.

These conditions seem useless: why would someone spend money to protect something useless or obvious? The reason is that identifying a new business area is like discovering a new island. When an island of Intellectual Property is discovered, corporate ships take armies of engineers and lawyers aboard and sail to the new island to register as many plots as possible. The island is owned before being exploited. A corporation does not necessarily exploit its plots, either because the island does not keep its promises or simply because this corporation has other priorities. This process is called patent portfolio building. The key issue is to delimit the estate, and function claims are perfect for this purpose.

Corporations did not start building, and courts did not start accommodating, patent portfolios with software patents. In the USA it started in 1908 when the Supreme Court rejected a nineteenth-century doctrine disfavouring "un-worked" patents. Before that, courts favoured a patent infringer who used the claimed invention to make and sell something over a patentee who did not implement this invention, following the agrarian revolutions’ reasoning: the soil belongs to the one who cultivates it. There is evidence that before WWI patents were already "understood to have important uses outside the protection of technology that the firm planned. In particular, they were an important strategic weapon in the long term struggle between a large firm and its equally large, equally well heeled rivals." [found in One Hundred Years of Solicitude: Intellectual Property Law, 1900-2000 by Robert P. Merges, itself referring to Leonard Reich]. The "worked" requirement was a burden for portfolio builders. They often had the resources to implement and marginally use their inventions, but the "worked" requirement limited their patent throughput. The portfolio strategy commanded looking not for a few strong patents but for many wide-scope patents at strategic locations that could be cross-licensed with other patent owners.

As we have seen in the Savery / Newcomen case, courts understood at an early stage that an improvement patent on some feature of a patented machine could not be interpreted as conferring the right to manufacture or use the improved feature. The Savery patent is indeed the prototype of corporate patents: wide scope (any engine based on the use of fire) and useless. Newcomen could spend ten years making his engine usable; he still had to associate with Savery to produce and license his machine. Savery got the first blocking patent but he still had to implement his invention. Modern Saveries get patents for inventions they have neither implemented nor proven, just in case someone does something useful within the patent scope.

The extension of this patenting doctrine to software raised two interesting issues:

  • Software patents are cheap because they do not require a specific R&D effort. Even when it describes a worked process, a software patent just describes an aspect of a product implementation. The product is not implemented because an invention was made; the patent is filed because the product is implemented. Because software patents are cheap there are many of them. Therefore it is difficult to be sure that a process is not patented. In some markets everything that is not in the public domain is patented, sometimes more than once [a good deal of public-domain material being patented as well]

  • Before software patents a patentee had few potential infringers of comparable size to watch. Because (1) in the software industry there is no minimal capital requirement and (2) the Internet makes products global for free, the number of potential infringers is much higher. And it is up to the patentee to identify infringements and choose whom they want to sue: a publisher, or even a user when the product is Open Source.

Patents applying to goods whose marginal cost is zero, for instance software patents, have a property that goes against the goal of patents. Because at a zero marginal cost a company cannot pay a license fee, these patents deter people from developing infringing products but also deter customers from adopting the approach claimed by the patent. The Macintosh and Smalltalk IDEs were ahead of their competitors and well protected. It took the competition ten years to provide acceptable alternatives, but the market preferred to wait. To get market acceptance inventors need followers, but when the inventors have followers the invention only returns marginal benefits. This is what happened to Sun with Java.

To circumvent portfolio patenting the solution has been to create royalty-free corridors in the patent land, in much the same way there were corridors through the former GDR to reach Berlin. I use corridor to express the fact that every business needs a continuous royalty-free space to grow. Companies that neither have the resources for creating a patent portfolio nor want to pay a rent to patentees implement products in these corridors. The idea of W3C with its patent policy was to create standards only in these royalty-free corridors. This is not an ideal solution: development and competition take place only in corridors. Even portfolio holders focus their efforts on corridors and do not invest in the large outside-corridor domains they own, because patents repel customers. The net result of portfolio patenting is to artificially reduce the field of possibilities and to contribute to the adoption of standards. This is not completely counterproductive because computer science progress requires that providers and customers make the same choice among equally valid approaches. But corridors seriously diminish the value of patent portfolios.

So Microsoft, Hewlett-Packard, Philips, Apple, AT&T, IBM, ILOG, Nortel Networks, The Open Group, Reuters, Sun Microsystems and others proposed to loosen the W3C patent policy. They "acknowledged a central conflict to the standardization process: Companies that spend serious time and effort coming up with the technology behind the standards may be reluctant to simply give away the rights to what they consider their intellectual property." You can find their proposal, called RAND, here. RAND stands for "reasonable and non-discriminatory". For the moment all working groups in W3C use the royalty-free model, but the chairman of the W3C advisory committee, Daniel Weitzner, wrote: "this does NOT mean that W3C has made final decision in favor of a RF-only policy, nor does it mean that we have made a final determination about the role RAND licensing will play. The final decision about W3C's patent policy will be made after the PPWG [Patent Policy Working Group] has developed a new proposal, the public has had another chance to comment, and the W3C membership has had its chance to express its views formally to the Director". In its new proposal the PPWG could grant exemptions to open source developers so that they could avoid paying royalty fees on patented technologies accepted as W3C standards.

The patent system does not help reward the early adopter and hinders innovation. Why would a company improve an existing invention if it has to pay a fee to the company that owns the invention’s patent? If it knows the patent, it does not do R&D in a patented area. Software patents do not reward invention but are an additional barrier to new entrants, who should not make the mistake of ignoring sleeping patents just because their competitors do so. If they succeed, if they start making money, then they get sued and have to pay for the past infringement period. And rich companies have no choice but to build a patent portfolio to minimize the risk and create cross-licensing opportunities.

The situation we face today is similar to the situation faced by armies in WWI: the defense (market power, patent and other barriers) is superior to the attack (innovation).

Barriers to new entrants

Barriers to new entrants are well described in the first chapter of Competitive Strategy by Michael Porter.

A rational company or individual should consider the sum of the barrier to entry and the barrier to exit. However people deal differently with entry and exit barriers. Someone who does not risk her own money in the new entry primarily considers the barrier to entry. If the entry is a failure she will be fired and will not have to take care of the exit. Therefore, depending on her career objective, she will either simply ignore or exaggerate the exit barrier.

According to Michael Porter barriers to entry are:

  • Economies of Scale. Economies of scale refer to declines in unit costs of a product as the absolute volume per period increases. Units of multi-business firms may be able to reap economies similar to those of scale if they are able to share operations or functions subject to economies of scale with other businesses in the company. The benefits of sharing are particularly potent if there are joint costs. Joint costs occur when a firm producing product A (or an operation or function that is part of producing A) must inherently have the capacity to produce product B. A common situation of joint costs occurs when business units can share intangible assets such as brand names and know-how.

  • Product Differentiation. Product differentiation means that established firms have brand identification and customer loyalties, which stem from past advertising, customer service, product differences, or simply being first into the industry.

  • Capital Requirements.

  • Switching Costs. Switching costs are one-time costs facing the buyer of switching from one supplier's product to another's.

  • Access to Distribution Channels.

  • Cost Disadvantages Independent of Scale. Here Michael Porter lists (1) know-how or design characteristics that are kept proprietary through patents or secrecy (2) favorable access to raw materials (3) favorable locations (4) government subsidies (5) experience curve

  • Government Policy

And barriers to exit are:

  • Specialized assets that relate to the capital requirements in barriers to entry

  • Fixed costs of exit

  • Strategic interrelationships. Interrelationships between the business unit and others in the company in terms of image, marketing ability, access to financial markets, shared facilities, and so on

  • Emotional barriers

  • Government and social restrictions

For the discussion we must distinguish between technical barriers and barriers that pertain to market power. Technical barriers to entry are:

  • Most economies of scale. A brand name is not a technical barrier.

  • Most capital requirements. Advertising expense is not a technical barrier.

  • Switching costs

  • Cost Disadvantages Independent of Scale.

  • Government Policy

Market barriers to entry are:

  • Brand names.

  • Advertising and other non-technical capital requirements.

  • Product Differentiation.

  • Access to Distribution Channels.

The importance of barriers to new entrants has declined in recent years, partly because they became more universally understood:

  • Governments understand that barriers to new entrants reduce competition and increase prices. Companies on a barred market have no incentive to improve their processes and products. Therefore governments reduce the policies and restrictions that favor incumbents.

  • Customers, who are also often stockholders, know that the value of a company depends more on its barriers (slots for an airline, brand for candies, distribution channels in other cases) than on its skills and assets. Customers can also decipher marketing actions aimed at improving customer loyalty.

Technical barriers to entry and exit are quite strong in mining and "heavy" manufacturing. For instance the car industry needs economies of scale, expensive specialized assets and sizable experience. The car industry also enjoys strong market barriers (product differentiation and distribution channels). It can however be challenged with unconventional methods. The author had the opportunity to visit a small company, MDI, that designs compressed-air cars. Those cars do not pollute (they even marginally clean the air) and provide air conditioning for free. This company has a very original factory/concession production concept that addresses both the production and the distribution-channel issues. The factory/concession includes an exhibition hall, assembles cars and produces car parts. Obviously such a factory cannot use the same production methods as a traditional car manufacturer, but it is more flexible and easier to manage. The company pushed un-specialization very far: one of the designers is also the author of the French site.

Mining and "heavy" manufacturing frequently suffer from overcapacity and from cyclic demand. They are no longer the core activity of developed countries. Service industry and "light" industry (cell phones for instance) are now the most important activities and the ones that generate the biggest profits. These activities do not have barriers to new entrants as strong as mining and manufacturing. In the analysis below I focus on the software industry.

Economies of scale

In the software industry economies of scale depend on the point of view.

If we consider that the software industry produces program copies or serves user requests, the marginal cost being zero, the unit cost declines sharply as the volume per period increases: economies of scale are extremely high.

If we consider that the software industry produces programs, then there is no significant economy of scale. Actually costs may grow when more programs are produced. We presented above the door effect: an organization that needs many different skills specializes its employees: some employees understand the technical aspect, others the marketing aspect and still others the financial aspect... Once people are specialized they cannot talk to each other (Babel tower) and other employees are needed to interface with the specialists. The bigger the organization, the bigger the door effect.

Product Differentiation

A company does not buy a program or service based on brand identification. Customer loyalties are of two sorts:

  • Managers prefer a vendor that proved reliable in the past, or simply one with which they have personal affinities

  • Technicians prefer a vendor because they are experienced in the vendor’s technology. They often have a veto right in collective decisions, with sentences like "I do not see how I could make it work".

Software publishers are unable to deliver products with a completely consistent interface. I mean here that the interface usually assumes that the user does not know the product but is an expert in the domain (or in the language in the case of an IDE) and, when things go wrong, that she knows the product internals (the user is shown fancy things like register values or a stack trace). The publisher provides tools and wizards for simple operations but the customer needs to hire experts:

  1. To understand the advanced documentation of the product

  2. To troubleshoot the product incidents and interface with the publisher support

  3. To configure the interface of the product with third-party products

Publishers ignore and sometimes even deny this inconsistency. They seem to believe that the next version will be bug-free, that troubleshooting will no longer be needed and that third-party products will vanish. It is actually the opposite that occurred: troubleshooting requires more skills than ten years ago. Not only are there more third-party products, but products themselves embed and support third-party programs. In the foreseeable future there will still be bugs, and troubleshooting will still be needed and will even get harder as we stack more components on top of the existing ones.

Because of the publishers’ attitude, customer service is almost universally bad and cannot be a differentiator: because they need to have knowledgeable experts, customers ask for technology transfer, but publishers serve customers their own way of doing the advertised things.

Capital Requirements

There is no business that requires less funding than software development. If you want to open a hairdressing salon you need to rent a shop. To create a software business you need a PC, a subscription to an ISP and maybe a domain ($10 to $25 per year). You need time to write a program, but this time is an asset and represents the value of the program – the time someone else would have to spend to write this program if you had not written it first.

If you put together the facts that software development requires almost no money and that there is no economy of scale to expect in software development, you may ask why there are large publishers like Microsoft. This is due to three reasons:

  1. Even when the program you need exists, you still have to learn that it exists.

  2. We all are ignorant. We need training. We need information about the experience of others.

  3. A user program must (1) address the need and (2) be used long enough to pay off the expenses. At the current stage of the computer industry, to choose a solution addressing the latter requirement we need direction.

Large publishers exist to let us know about their products and to show the direction, not because of a unique know-how.

Paradoxically the software industry is also an industry with almost only fixed costs, primarily wages. The wages a software company pays represent its prototyping capacity. If it cuts its workforce the evolution of its products may slow down, but the company is still able to use or distribute its products. Boeing can stop designing aircraft but it still needs to make aircraft to get money. If Microsoft fired almost all its employees for some time, its revenues would not change.

Because a program is an asset, a software company creates and updates as many programs as it can pay for with its sales. The novelty of a version n does not depend on customer demand but on the success of version n-1. If they fail to perceive the benefits of an update or of a new product, customers will not pay for this update or product. Then the software company will adapt its workforce to declining revenues and reduce its updates and new product releases. The market will have to wait for version n+1, regardless of customer demand. The software industry is extremely sensitive to a diminished value of enhancements and should have a cycle of growth and shrinkage of its own, independent of economic cycles.

Switching Costs

The interest of a publisher is to increase the one-time costs facing a buyer who switches from its product to another's. In the case of software this cost is primarily the cost of getting experienced in the new technology. The switching cost may be high, but if we only think in monetary terms we miss its real dimension. Switching from one product to another is a disruptive change. As we have seen, this change nullifies the experience of both experts and managers. Therefore these managers and experts become the advocates and the internal sellers of the product they are experienced in. For them this is the only way to keep power and recognition and not get a type 3 job.

A high switching cost prevents users from switching to another product, but it also prevents competitors’ users from switching to the publisher’s product. In an ideal world the winning strategy would be to have a better product and the lowest switching-in cost. In the real world it is almost impossible to know which product is the best. The switching cost is like a fence that prevents customers from moving to the competitor’s herd. It allows the publisher’s forces to focus on lost sheep and on competitors’ herds. However I do not believe that publishers deliberately increase switching costs, in the same way I do not believe in conspiracy theories. What I believe is that designers tend to design disruptive changes and that nothing prevents their designs from being implemented and released.

Access to distribution channels

The advent of the Web eliminated the need for traditional distribution channels. Users can read a product’s success stories, download the product, learn how to use it, evaluate it, send e-mails to the product support and register on mailing lists and forums to meet other users of the product. Users can even check documents like the SEC 10-K to learn about the company’s health. In a couple of hours they know more about a company than they would have learnt in one month of phone calls, meetings and conference calls. But traditional distribution channels die hard. Eight years after the Web’s expansion, having subsidiaries in most countries is still a big asset for a software company.

There is a cultural barrier. You see people who are usually curious and of good sense, who regularly read, but who never use a search engine, in the same way as you see people with decent reading skills who will never read a book. This is a serious issue. You may hear middle-class people arguing that the Internet is insane because it abolishes the need for contact and discussion. They would be surprised to learn that people of more modest condition who do not read books use almost the same arguments: books are not real life. Furthermore, from a historical point of view this is ridiculous. The world existed before the telephone and the airplane, and then everybody used newspapers, books and mail in the same way as some people use the Web and e-mail today.

The Web was released at a time when communication skills were finally regarded as the most important skills. The Web challenged these skills as a competitive advantage. It is not surprising that the people most gifted at oral communication, or who have charisma and therefore were doing well in business, resisted the Web. For them the Web is just good for filling in forms at advertised URLs. I believe that:

  • The tremendous expansion of the Web hid the fact that it failed to change the world as expected. No Web pioneer could imagine that some people would not use hyperlinks and search engines.

  • In the medium term the Web will fulfill its potential just because it is cheaper and less error-prone than oral communication and meetings. Then traditional distribution channels will lose their value.

Know-how

Under this topic Michael Porter lists know-how, the experience curve and design characteristics that are kept proprietary through patents or secrecy. In the software industry know-how and experience have much less importance than in other comparable sectors:

  1. In computer science experience is individual experience. A programmer implements a task that was assigned to her. To do her job she needs to interact with her neighborhood as much as a researcher does, and therefore much less than an operator or a waiter. Her peers tend to minimize her contribution, and her manager cannot look interested because she might then ask for a promotion.

  2. A programmer defines herself more as a Java, C++ or Microsoft programmer than as a company employee. For her the most valuable project is the one that may improve her skills in her technical domain. This is just normal: programmers are the door between a company and the providers of the products it uses. They hide the costs of testing new versions, the porting costs and the costs involved in circumventing bugs, which together may represent half a project’s cost.

  3. A software company has a short cycle that matches the disruptive-change cycle and the high turnover and internal-move rate of the software industry. Big programs are hard to significantly enhance or extend. Furthermore they are obsolete as soon as they are released: a new technology has been released and customers ask for it, or it addresses a couple of issues more elegantly. The software society is a subset of the consumer society. At each cycle the company throws away the previous-generation model and creates a new model with the same logic and algorithms as twenty years before. If you do not believe me, just look at patent databases. The same processes were implemented first on mainframes, then in client/server and now with Web services.

Technically a clever newcomer may do as well as an incumbent software company.

Software patenting implicitly confirms this analysis. Most software patents aim to protect what could become a function, and therefore:

  • either to prevent others from entering a market that the patentee created;

  • or, in the more common case of defensive patenting, to own a share of a wider function used by the patentee and its competitors.

In the software business the means (which is likely to be different in the next version) does not matter.

Exit barriers

In the software industry projects do not require specialized assets. Furthermore the programmers, analysts and type 3 workers involved in software development are software specialists, not business specialists. Therefore when it decides to phase out a product or to leave a market, a software company can easily reuse these software specialists on other projects or move them to consulting positions. There are almost no emotional barriers because the persons involved are technology specialists, not product specialists.

The lack of exit barriers in the software industry is well known, which partly explains the customer preference for market leaders. To continue a product’s development a commercial publisher has to be a market leader with substantial license revenues. In Open Source things are different for two reasons:

  • When sources are available and enough customers use an Open Source product there will be experts ready to support and improve the product, even if the original development team gave up, just because of the invisible hand.

  • From the ground up an Open Source project is designed to save resources, and an Open Source development team does not expect a quick return on investment.

Consequences

The software industry does not have strong barriers to new entrants. We observe:

  • Huge economies of scale on the unit cost but no economy of scale on program development.

  • A product differentiation increasingly based on technology with a low impact of brands and no customer loyalty.

  • High switching costs.

  • Specialized distribution channels on professional markets whose importance should decline.

  • Almost no exit cost.

  • No show-stopper from capital requirements or distribution channels. Capital and distribution channels help incumbent companies stay in business but they do not prevent challengers from entering the business.

When I talk about technology-based differentiation, I mean by technology a perceived technology. When a new market comes into sight there is no real market leader. Potential customers read Web pages and newspapers to understand what this thing is about. Market shares do not matter when the few customers are still evaluating the product. What matter are the mind shares. A clever company tries to capture mind shares with technical arguments such as a modular design and standards conformance.

During its anti-trust trial Microsoft argued that the software industry is a unique winner-take-all business whose cut-throat competitors try to win all of a market, only to put everything at risk every few years when software changes. It is true that customers prefer market leaders because of (1) the lack of exit cost for the publisher and (2) the high switching cost for them if they make the "wrong choice". It is also true that there is a cyclic risk due to the short cycle of the software industry. However, because of the switching costs, Microsoft would not have achieved such market dominance without the mistakes of competitors like Apple, IBM and Novell. I actually believe that the natural state of the computer industry is the oligopoly, as we can see with Java and .NET.

Standards are unlikely to reduce the switching costs because they are no longer defined by institutions like IEEE or DIN but by commercial publishers. Customers have the choice between being the prisoners of market leaders and using Open Source products. Therefore the natural state of a market is a duopoly with a commercial market leader and an Open Source product. The best example is the Microsoft / Linux duopoly on personal computers. Open Source exists because the cost of actually developing programs is marginal. When you buy a program you mostly buy distribution channel, marketing and type 3 work. Furthermore, to stay profitable, commercial market leaders have found nothing better than the "this year’s model" policy of the 1950s American car industry. The switching costs secure the market shares of the leaders, which therefore do not put everything at risk every few years.

Barriers to new entrants are not entirely bad. They hinder competition but they make markets predictable and allow long-term investment and vision. We have yet to see to what extent the lessons we can learn from the software industry also apply to other new industries. The software industry has unique features:

  • No production cost

  • No economy of scale on product development

  • High switching costs

  • Almost no exit cost

For a new industry like cell phones:

  • The producer enjoys smaller economies of scale on production and larger ones on product development

  • The customer has no switching cost

  • An exit cost lower than in most traditional industries but still substantial

The price-to-production-cost ratio is high but not close to infinite as in the software industry, and the development cost is lower. When the customer has no switching cost, an industry cannot be a winner-take-all business. In these industries three or four competitors can coexist on a market, as predicted by the Lanchester model. This model was developed to describe war campaigns and states that the rate of casualties of an army is proportional to the number of enemies times the effectiveness of their weapons. An implication is that the strength of an army depends on the square of the number of troops. It has also been found that to vanquish an enemy that has established a defensive position, you must be three times more powerful. The Lanchester model can be extended to the economy, where it shows that a company A dominates a company B if it has more than 1.7 times the market share of B, 1.7 being the square root of 3. By the way, when there is no switching-cost barrier the Lanchester model also applies in the software industry, as you can see with Linux distributions.

Therefore the switching cost is the most important characteristic of the software industry. It can drive an evolution different from other new industries, or accelerate changes that will become visible later in other new industries.

Conclusion

"Too fast to live,

Too young to die."

The short-term future does not look nice. I tried to give examples, to show evidence and to point out my assumptions. As I said at the beginning, my goal was not to propose solutions. Furthermore I do not know how to improve the situation. We need other studies to understand the frictions between the Society and the progress of science, which probably partly explain the current situation. The situation could certainly improve if we:

  • Make choices with the goal of minimizing the adaptation effort and reusing existing experience

  • Abandon Japanese methods outside manufacturing

  • Reform software patents

Ecology

It would be useful to better understand why and how changes nullify technical experience. I do not believe that Machiavellian executives design disruptive changes. I think that people may collectively opt for the most disruptive solution. Cultural factors probably play a role; most computer changes come from the USA, where the big bang is cool and technological revolution sells well. The market should adopt a more "ecological" attitude and look for solutions that minimize the adaptation effort and recycle existing experience. The ageing workforce of developed countries, especially in Europe and Japan, needs this attitude change to stay competitive.

Japan

One of the factors that contributed the most to the current situation may have been the universal adoption of Japanese methods. Companies that adopted these methods ignored the differences between development and production, and cultural factors. For instance the Japanese are usually less individualistic and more long-term oriented than Westerners. To some extent, any organization would have worked in the Japan of the 70’s. The adoption of Japanese methods created in the 90’s an additional friction between the Society and the organization.

Furthermore the sole purpose of the Japanese methods was to improve operational effectiveness. Operational effectiveness indeed improved in the 90’s. Companies carefully analyzed their competitors’ products, focused on the implementation and tried to make higher-quality products than their rivals. It was safe and effective but it raised an issue: when everyone copies, there is nothing worth copying anymore.

Japan is no longer a model. Because its methods were universally adopted they no longer give a competitive advantage. Because these methods command focusing on details and procedures it pays again to focus on strategy.

Patents

Software patents hinder innovation and are not even effective at rewarding innovations that get copied. Banning software patents would be discriminatory and would actually promote copying. However it may be the best solution if we fail to strengthen the patenting rules for software in the following way:

  • We should go back to the nineteenth century rule: the patentee should be able to show that they significantly use the invention. When the patentee stops using it the patent should be abandoned.

  • The patent should describe the invention with enough details to allow a person of the art to implement it.

  • Claims should be more restricted in scope. Maybe we should primarily consider patent descriptions in disputes and only use claims to identify the innovative parts of the description.

  • A patent should be granted for a shorter period of time, maybe five years. This is needed because of the fast evolution of computer science.

I believe that this interpretation enforces the non-discriminatory clause of the Agreement on Trade-Related aspects of Intellectual Property rights (TRIPs), which is not the case with the current ruling:

  • When a patent for a manufacturing process falls into the public domain, other companies often reuse the process to produce the good at a lower price. See for instance generic drugs in pharmaceuticals. It is possible today to write software patents whose disclosure is useless to the public.

  • Software patents are cheap and therefore cannot be granted the same period of protection as manufacturing patents.

In a Theory of Disappearance of Property Rights With an Application to Intellectual Property, Dotan Oliar writes: "The optimal copyright term should strike a balance between the benefits conferred by a copyright regime, namely the social value brought about by enhanced creativity, versus the costs it creates in terms of limiting dissemination of works of authorship due to the legal rights of exclusion conferred upon authors, as well as the cost of reduced sequential authorship. The optimal copyright term is the one in which the marginal social benefit of extending its duration would equal the marginal social cost. When we start from a zero copyright protection and extend the term sequentially, each additional period of protection increases the incentives to create, but in a decreasing marginal rate. The first periods, say years, provide authors with a relatively large incentive. Each consequent time period provides an incentive that is lesser than the one preceding it. The reason is simple – far future benefits translate into relatively small present values, and the farther in the future the income is, the smaller its present value. The social costs, on the other hand, are believed to be increasing in time, as they combine losses from monopolistic production, the costs of deferred social innovation (mostly in patents) and tracing costs (mostly in copyright). These costs are increasing in time. At a certain point, the marginal benefit and marginal cost graph cross, and at this point proprietary intellectual resources should pass into the public domain." He also writes that his analysis "is generally applicable to patents, both with regards to the institution at large, and both with regards to ownership in particular intellectual creations."

An essay by Oren Bar-Gill and Gideon Parchomovsky, reviewed by Dotan Oliar and called The Value of Giving Away Secrets, also supports similar views: "This Essay demonstrates the strategic advantage of narrow patents and unprotected publication of R&D output. Broad patents might stifle follow-on improvements by deterring potential cumulative innovators, who fear being held up by the initial inventor at the ex post licensing stage. By opting for a narrower patent and unprotected publication, the initial patent holder commits not to hold up follow-on inventors, thus promoting sequential innovation and generating lucrative licensing fees. Counter-intuitively, in cumulative innovation settings, less protection benefits the patentee."

PageBox

The issue is no longer to design a hardware architecture to run the software but to design software to run on a dependable and almost free hardware infrastructure. This has the following implications:

  • Programs must be deployed. A program will be replicated on thousands of computers.

  • A program instance will have to talk to its clones

  • Workload will have to be balanced between program instances

  • It must be possible to administer and troubleshoot this large instance set

This is just what PageBox aims to do, in combination with Grid libraries like MPI for compute-intensive tasks.


Contact: support@pagebox.net
©2001-2004 Alexis Grandemange