PageBox for Java: Web application deployment using Java PageBox


PageBox for Java demo


This page presents some examples of PageBox-enabled Web applications and some mechanisms that can be used in distributed applications.



Pandora is a Web application that allows ordering goods.

Pandora implements an installation class and supports the dynamic update of its inventory.

Pandora is designed to be used in the following context:

Client sites typically combine:

  • A portal

  • A single sign-on facility

  • A PageBox that installs Pandora

Pandora, PageBox, the sign-on facility and the portal are not necessarily hosted on the same application server instance or on the same machine.

The Repository deploys the Pandora archive on the PageBox.

The Pandora instance calls the Pandora central server to trigger the payment and the delivery and to refresh its image of the data.

The payment and delivery facilities are typically provided by external organizations that can themselves implement the same multi-site PageBox organization.

Pandora implements three mechanisms:

  1. Trusted referrer. The idea is (1) to check that the user comes from a trusted site, (2) to delegate user authentication to that trusted site and (3) to obtain from that site the information needed to charge and invoice the user and to deliver the goods if needed. See Support of trusted Web sites for more information

  2. Approximate data knowledge

  3. A central server that has far fewer requests to process than in a centralized environment
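As a rough illustration of the trusted-referrer idea, here is a minimal sketch in plain Java. The class and method names (TrustedReferrer, isTrusted) are hypothetical, not Pandora's actual API; the real check is described in Support of trusted Web sites.

```java
import java.net.URI;
import java.util.Set;

// Hypothetical sketch: accept a request only when its Referer header
// names a trusted portal host, so that authentication and billing
// information can be delegated to that site.
public class TrustedReferrer {
    private final Set<String> trustedHosts;

    public TrustedReferrer(Set<String> trustedHosts) {
        this.trustedHosts = trustedHosts;
    }

    // Returns true when the Referer header points to a trusted host.
    public boolean isTrusted(String refererHeader) {
        if (refererHeader == null) return false;
        try {
            String host = URI.create(refererHeader).getHost();
            return host != null && trustedHosts.contains(host.toLowerCase());
        } catch (IllegalArgumentException e) {
            return false; // malformed Referer: reject
        }
    }

    public static void main(String[] args) {
        TrustedReferrer checker =
            new TrustedReferrer(Set.of("portal.example.com"));
        System.out.println(checker.isTrusted("https://portal.example.com/login"));
        System.out.println(checker.isTrusted("https://other.example.org/"));
    }
}
```

In a servlet the Referer header would come from the request; here it is simply passed as a string to keep the sketch self-contained.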

Starting with version 0.0.3, Pandora uses the getClones method of PageBoxAPI to list the other Pandora instances (neighbors) deployed from the same Repository.

Approximate data knowledge


We must first note that approximate data knowledge is nothing new: it already exists, for instance, in RDBMSs that handle transaction isolation:

Obviously RDBMSs do a great job of keeping data consistent, but at a given time, depending on its position, an observer would give different answers about the state of the data.

If the observer is on the left (yellow thread), it knows that the first yellow transaction has committed and it is aware of the insert that took place in the second transaction. The observer doesn’t know about the top transaction of the right thread, which was committed after the observer’s transaction began and after the observer’s select.

If the observer is on the middle thread, it is aware of an insert and an update that won’t be committed.

And so on. If the observer is the DB administrator, he or she will probably say that the database state is made of all transactions committed up to the observation time.

As we can see, we are already living in an approximate data knowledge world. The problem with the current approximation is that it uses mechanisms that cannot be effectively parallelized and distributed: an uncommitted data change can be written to disk while a committed change remains in memory, and the actual database state is kept in log files. Distributing the data across multiple servers implies exchanging a very large number of lock requests.

Distributed applications

A fully distributed application requires a rougher data knowledge. To be more specific, let’s take the example of Pandora. Some data are local: if a user A places an order on site 1, this order doesn’t matter to site 2 and site 3. However, the order reduces the number of articles in the inventory. In the "classical" model this implies that the inventory must be centralized.

In environments that process large numbers of transactions we believe that:

  1. A distributed architecture gives the needed scalability and fault tolerance at a lower cost than centralized architectures

  2. The evolution of shared data is predictable. In the case of Pandora, if we can predict the inventory evolution, we can use distributed inventories tuned by forecasts and synchronize the inventories in batch.

  3. Errors like overselling can be managed at a cost lower than the cost delta between distributed and centralized architectures

Revenue management

In this section we show that prediction and error management are already common practices.

Revenue management (also called yield management) is used mainly in the transportation industry. This industry has a perishable offer: for instance, for a given flight and date there is only a fixed number of seats, which are either sold or wasted. Some customers have to travel on short notice and are willing to pay a high price. However, there are not enough of these travelers to fill the planes. Other travelers can only travel if the price is lower, but they can also plan their trips in advance.

The (simplified) principle of revenue management is to predict, thanks to statistics, the number n of travelers who will travel at a high price and to offer plane_capacity – n seats at a discount price. We can question the marketing side of revenue management: high-price tickets are not paid by travelers but by their companies, whereas discount tickets are paid by travelers. Technically, however, revenue management has proven that predictions are effective.

The same mechanism also allows airlines to forecast no-shows and therefore to overbook.
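The arithmetic above can be sketched as follows. The class and method names are illustrative only, and the prediction n (here a plain input) would in reality come from statistical models:

```java
// Toy illustration of the simplified revenue-management principle.
public class Yield {
    // Seats to sell at a discount once n seats are reserved for
    // high-price, short-notice travelers.
    static int discountSeats(int planeCapacity, int predictedHighPrice) {
        return Math.max(0, planeCapacity - predictedHighPrice);
    }

    // Overbooking: sell more seats than capacity to compensate for the
    // forecast no-show rate (0.05 means 5% of buyers don't show up).
    static int seatsToSell(int planeCapacity, double noShowRate) {
        return (int) Math.round(planeCapacity / (1.0 - noShowRate));
    }

    public static void main(String[] args) {
        System.out.println(discountSeats(180, 60));   // 120 discount seats
        System.out.println(seatsToSell(180, 0.05));   // 189 seats sold
    }
}
```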

Pandora status

The current version of Pandora doesn’t implement prediction and inventory tuning.

The predictions themselves are business-dependent and are therefore out of the PageBox scope.

On the other hand, inventory tuning is easy to implement:

  1. The Update Web service can return a prediction array. The first entry in the array describes how the inventory should evolve in a first time slot (for instance -5 or +3 articles), the second entry describes how the inventory should evolve in a second time slot and so on.

  2. A worker thread can update the inventory according to the prediction array
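The two steps above could look like the sketch below. PredictionTuner and applyNextSlot are hypothetical names, not Pandora's API; a real implementation would have a worker thread call applyNextSlot at each time-slot boundary with the array returned by the Update Web service.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of inventory tuning driven by a prediction array.
public class PredictionTuner {
    private final AtomicInteger inventory;
    private final int[] predictions; // one delta per time slot, e.g. {-5, +3}
    private int slot = 0;

    public PredictionTuner(int initialInventory, int[] predictions) {
        this.inventory = new AtomicInteger(initialInventory);
        this.predictions = predictions;
    }

    // Applies the predicted delta for the current slot to the local
    // inventory image and returns the new inventory level.
    public synchronized int applyNextSlot() {
        if (slot < predictions.length) {
            inventory.addAndGet(predictions[slot++]);
        }
        return inventory.get();
    }

    public static void main(String[] args) {
        PredictionTuner tuner = new PredictionTuner(100, new int[]{-5, 3});
        System.out.println(tuner.applyNextSlot()); // 95
        System.out.println(tuner.applyNextSlot()); // 98
    }
}
```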

Pandora version 0.0.2 implements new classes, CheckSandbox, CheckClassLoader, CheckLoaded and CheckLoad, to check that the sandbox is properly set up (that the application cannot read and write files everywhere, create a class loader, or run native code or commands).

Central server

In the case of Pandora the central server is no longer a single point of failure: if the central server is temporarily unavailable, order consolidation and payment/delivery requests are simply postponed and end consumers don’t notice the outage. This model is easy to implement and is sufficient in many cases.
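The postponement idea can be sketched like this. DeferredCalls and its methods are hypothetical names; in Pandora the queued items would be payment/delivery requests and the sender would be a Web service call to the central server.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Predicate;

// Hypothetical sketch: requests that fail while the central server is
// down are queued locally and replayed later.
public class DeferredCalls {
    private final Deque<String> pending = new ArrayDeque<>();

    // send returns true on success; on failure the request is queued.
    public void submit(String request, Predicate<String> send) {
        if (!send.test(request)) pending.add(request);
    }

    // Called periodically: replays queued requests until one fails again.
    public int flush(Predicate<String> send) {
        int sent = 0;
        while (!pending.isEmpty() && send.test(pending.peek())) {
            pending.poll();
            sent++;
        }
        return sent;
    }

    public int pendingCount() { return pending.size(); }

    public static void main(String[] args) {
        DeferredCalls q = new DeferredCalls();
        q.submit("pay order #1", r -> false); // central server down: queued
        System.out.println(q.pendingCount()); // 1
        q.flush(r -> true);                   // server back: replay
        System.out.println(q.pendingCount()); // 0
    }
}
```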

However it is possible to design a fully distributed organization (without a central server). We discuss this design in the next section.

Token update

In this model all distributed Pandora instances maintain an approximate image of the inventory and call the payment and delivery facilities. Distributed Pandora instances are ordered in a ring. A Pandora instance calls the Update Web service of the next Pandora instance on the ring. The called Pandora instance updates its inventory and calls the Update Web service of the following Pandora instance on the ring, and so on:

This model is a token-passing model. If the i-th Pandora instance fails to process an Update request, the token can be lost. If a j-th Pandora instance comes alive, it can generate an extra token. We therefore need a mechanism to reissue lost tokens and to detect extra tokens on the ring. Note that token passing is a proven technology. We eventually chose not to use token passing in Pandora and instead created a new example, Prometheus, to illustrate it, because:

  1. We also use examples to check implementations, and a new example was more suitable for testing error handling

  2. This new example is a better illustration of token passing
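The ring update described above can be sketched in-process as follows. In the real design each hop is an Update Web service call between hosts, and this sketch deliberately ignores the lost-token and extra-token cases:

```java
// Minimal in-process sketch of the ring update; names are illustrative.
class Instance {
    final String name;
    int inventory;       // this instance's approximate inventory image
    Instance next;       // next instance on the ring

    Instance(String name, int inventory) {
        this.name = name;
        this.inventory = inventory;
    }

    // The "token" carries the delta: each instance applies it to its
    // local image and forwards it until the token returns to its origin.
    void update(int delta, Instance origin) {
        inventory += delta;
        if (next != origin) next.update(delta, origin);
    }
}

public class TokenRing {
    public static void main(String[] args) {
        Instance a = new Instance("a", 100);
        Instance b = new Instance("b", 100);
        Instance c = new Instance("c", 100);
        a.next = b; b.next = c; c.next = a; // build the ring

        a.update(-2, a); // a sells 2 articles and circulates the token
        System.out.println(a.inventory + " " + b.inventory + " " + c.inventory);
        // prints 98 98 98
    }
}
```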


Epimetheus is a Web application that allows users to maintain contact information.

Epimetheus is designed to illustrate the use of resources in PageBox-enabled applications.

Starting with Epimetheus 0.0.3 and PageBox 0.0.9, Epimetheus also illustrates the use of extensions through a serial page that calls an extension to display the serial and parallel ports of the host.


Epimetheus is named after the husband of Pandora in Greek mythology. Epimetheus and Pandora are a bit like Adam and Eve. However, Epimetheus was not a man but a giant (Titan), and he had a brother, Prometheus. He was a contractor hired by the gods to create animals and humans.


euroLCC is a Web application that helps find Low Cost Carrier (LCC) flights between origin and destination airports. euroLCC illustrates the use of the generic installation class. The initialization files of euroLCC describe the European airports and some LCCs and LCC routes. In the future we may add other LCCs.


Prometheus is a simple chat Web application designed to illustrate the token API and Active naming.

The user enters a pseudonym and a set of metadata to enter a chat session. She can then see messages from, and send messages to, the other participants in the session. The key difference from a traditional chat system is that participants are distributed across multiple servers. A distributed Prometheus constellation should be able to handle an unlimited number of participants. Because the number of participants can be huge, Prometheus allows sending messages to subsets of the participant population.


Prometheus is more popular than his brother Epimetheus because he stole fire from the Greek gods and gave it to humans. As a consequence, Zeus chained Prometheus to a rock, where each day an eagle pecked out his liver.


2002-2004 Alexis Grandemange. Last modified .