From hardware to digital

Embrace the digital world

For a company, moving from a hardware-based solution to a digital one is not only a question of technology; it is a deep change in processes and, more importantly, in people's minds.

Looking at the industrial world, we often consider the digital change as part of the fourth industrial revolution.

  • The First Industrial Revolution (18th-19th centuries) used water and steam power to mechanize production.
  • The Second (1870-1914) used electric power to create mass production.
  • The Third (1980s) used electronics and information technology to automate production; often called the digital/numerical transformation, this revolution was a machine transformation from analog to digital (i.e. numeric).
  • The Fourth (also associated with the smart factory and Industry 4.0) represents new ways in which technology becomes embedded within societies and even the human body, built on four principles: interoperability, information transparency, technical assistance and decentralized decisions.

This classification highlights the changes in industrial capabilities and technical structures, but it also implies a dramatic change for the companies that want to create those products.

Whatever the name (IoT, smart grid or cloud), the new model is based on a fully decentralized system designed to have data flowing inside and outside the solution. This new vision has an impact on the architecture, but more widely on the whole business model related to it, which we will call, in the rest of this tech talk, the digital change.

To understand the nature of this deep change, let's look at some dimensions of the digital change.

The first dimension is the dual state of physical objects, or information transparency: the ability of information systems to create a virtual copy of the physical world.

With digital, each object has both a physical and a digital state; or, said more simply: what you are and what the system thinks you are.

During the third industrial transformation, the need for a digital identity for each object (and its type) was often solved by a model of part number (PN) and serial number (SN). In other domains, unique identities like the social security number (SSN), credit card number, … are still used on a daily basis.

But this limited knowledge only provides a secure identity, not a description of the object's state.

For example, this is not only an Espresso model 6 (PN123456/SN15678), but also Michael's coffee machine (the security guy of building 9), created on 02/05/2012, that is not working due to a power supply defect.

A state is a set of attributes or expected properties associated with an object that is used to build assumptions about future behaviors.

For example, a credit card can be new/not activated, approved or lost. An approved credit card will allow you to buy a drink at your next visit to the coffee shop.
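As a minimal sketch of this idea (the states and function name below are illustrative, not a real payment API), the digital state of a credit card and the behavior derived from it could look like this:

```python
# Illustrative digital state for a physical object (a credit card).
from enum import Enum, auto


class CardState(Enum):
    NEW = auto()       # issued but not yet activated
    APPROVED = auto()  # activated and usable
    LOST = auto()      # declared lost or stolen


def can_purchase(state: CardState) -> bool:
    """The system decides future behavior (authorizing a purchase)
    from the digital state, not from the physical card itself."""
    return state is CardState.APPROVED


print(can_purchase(CardState.APPROVED))  # True: the coffee is paid
print(can_purchase(CardState.LOST))      # False: the operation is refused
```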

We are already used to some applications of this; for example, when it comes to humans, the most common use case is marketing, which looks for value in matching personal preferences to product sales offers.

But matching the physical and digital states is critical for all objects because, if the real state (like a credit card that has been stolen) is different from the digital state (a fully approved credit card), the system will apply the wrong behavior to the associated operations (for example, approve a specific purchase).

With concepts like IoT or smart grids, the goal is to move from a declarative update of the digital state (for example, calling to declare a stolen card) to an automatic update that uses several sensors to update the object state (like using a phone to update a credit card location and measure the distance between the object and the owner), or that allows the object itself to declare its status (like an empty vending machine).
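Sticking with the credit card example, a sketch of such an automatic update could look like the following; the state names, threshold and flat-plane distance are all simplifying assumptions:

```python
# Illustrative automatic state update driven by a sensor reading (the phone
# location), instead of a declarative phone call from the owner.
from math import dist


def update_card_state(card_state: str,
                      card_location: tuple[float, float],
                      phone_location: tuple[float, float],
                      max_distance_km: float = 50.0) -> str:
    """If the card is used far away from the owner's phone, the digital state
    is switched automatically (locations are simplified to a flat plane)."""
    if card_state == "approved" and dist(card_location, phone_location) > max_distance_km:
        return "suspended"  # the digital state no longer trusts the physical card
    return card_state


# The purchase happens 300 km away from the phone: the state is updated.
print(update_card_state("approved", (300.0, 0.0), (0.0, 0.0)))  # suspended
```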

As a side note, when evolving to full digital, a service can be performed without any physical object involved (like selling a stock option).

The second important concept of the digital change is interoperability: the ability of machines, devices, sensors, and people to connect and communicate with each other.

To reach this goal, the solution must evolve along three different dimensions:

  • Physical architecture
    The physical architecture allows each object to exchange data with the other objects by using multiple physical communication pipes and communication languages.
  • Data architecture
    The data architecture defines how each piece of information exchanged between objects relates to the other exchanges of information in the system.
  • Data flow
    The data flow defines the size and speed of the information pipes that need to be created by the physical architecture so that all the exchanges defined in the data architecture can be realized as expected (a small sizing example follows this list).
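As a back-of-the-envelope illustration of the data flow dimension (all figures below are invented for the example), the size of the pipe can be derived from the exchanges defined in the data architecture:

```python
# Back-of-the-envelope sizing of an information pipe (all figures are assumptions).
messages_per_second = 200   # exchanges expected by the data architecture
message_size_bytes = 2_000  # average size of one message (payload + envelope)

required_bandwidth = messages_per_second * message_size_bytes  # bytes per second
print(f"Required throughput: {required_bandwidth / 1_000:.0f} kB/s")  # 400 kB/s

# The physical architecture must provide a pipe at least this large,
# plus headroom for peaks and retries.
```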

If we think about it, none of those dimensions are new compared to the systems created in the 3rd generation; but in the 4th generation, no object owns (or knows) the entire architecture.

Each object is in charge of establishing the expected connections, using the appropriate data exchanges based on contracts, and supporting the data flow, but it does not need to know the system context in which it operates.

This flexible architecture gives the object the ability to be used in many different environments; the value of the product depends on its usability and its integration with the ecosystem (see article).

A good example of interoperability is Amazon's Alexa, which allows you to control by voice any equipment that conforms to the driving protocol (like the Smart Home Skill API).
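The essence of this interoperability can be sketched as a shared contract; the Switchable protocol and the lamp below are invented for illustration and are not the real Smart Home Skill API:

```python
# Interoperability through a shared contract: the controller only knows the
# protocol, not the concrete devices that implement it.
from typing import Protocol


class Switchable(Protocol):
    def turn_on(self) -> None: ...
    def turn_off(self) -> None: ...


class LivingRoomLamp:
    """Any vendor can implement the contract; the brand does not matter."""
    def turn_on(self) -> None:
        print("lamp: on")

    def turn_off(self) -> None:
        print("lamp: off")


def handle_voice_command(device: Switchable, command: str) -> None:
    # The voice assistant relies only on the contract.
    if command == "turn on":
        device.turn_on()
    else:
        device.turn_off()


handle_voice_command(LivingRoomLamp(), "turn on")
```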


Talking about data, a digital service is made to either use or create data, which means that, most of the time, the service will target the next element in the data chain.

In a microservice or connected environment, the service is also not concerned with how the information it creates is used, nor with previous transactions (it is stateless).

Like in the OODA decision loop (see definition), data are created for action, which means that data often do not live within the boundary (for their use) of the system itself. The data contract of the component allows it to connect to any provider or consumer without changing the system. This also implies that the system is not self-sufficient and relies on integration with other services.

For example, the Dot device of Alexa does not provide any value without a valid connection.
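To make the contract idea concrete, here is a minimal, stateless service sketch; the reading and alert types are assumptions made for the example, not an existing API:

```python
# A stateless service bound only by its data contracts.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TemperatureReading:  # input contract: what any provider must send
    sensor_id: str
    celsius: float


@dataclass
class Alert:               # output contract: what any consumer can expect
    sensor_id: str
    message: str


def process(reading: TemperatureReading) -> Optional[Alert]:
    """Stateless: the decision depends only on the incoming data,
    not on previous transactions or on who will consume the alert."""
    if reading.celsius > 90.0:
        return Alert(reading.sensor_id, "overheating")
    return None


print(process(TemperatureReading("boiler-7", 95.2)))
```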

In a connected network of services, this dependency of each service on the others requires that every service be provided with high availability and reliability.

A bad service will not only impact the vendor that provides it but also the other services that rely on it. For this reason, a product that provides a digital service needs to be considered a continuous service that should never stop, with the same mindset as a physical production line.

With this in mind, even if the system is designed to be updated often, inserting and testing new developments in context implies, on average, having a total of 5 partitions in the cloud.

  • The first one (Dev) is used to develop the new function
  • The second (Test) is used to test (automatically as often as possible) with stable external services (which can be patched if simple)
  • The third (Staging) is used to integrate with external services and needs to provide a stable version of the services offered (for integration, for example, with an airline data center)
  • The fourth one (Production) is used by customers and needs to fulfill all SLAs and performance requirements.

Updates to production and staging need to be performed without interruption of service (which is accomplished, most of the time, by a blue/green deployment).
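A simplified sketch of the blue/green idea follows; the router and health flag are simulated here, where a real setup would rely on a load balancer or DNS switch:

```python
# Blue/green deployment: two identical environments; traffic is flipped to the
# new one only once it is healthy, so the service is never interrupted.
environments = {
    "blue": {"version": "1.4", "healthy": True},   # currently serving production
    "green": {"version": "1.5", "healthy": True},  # newly deployed candidate
}
live = "blue"


def switch_traffic(candidate: str) -> str:
    """Instant switch with no downtime; rollback is just switching back."""
    global live
    if environments[candidate]["healthy"]:
        live = candidate
    return live


switch_traffic("green")
print(f"Serving version {environments[live]['version']}")  # 1.5
```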

And the last one? Each environment needs to be able to be updated at any time by a build chain (CI) that creates digital components and a delivery chain (CD) that deploys those digital components. This management cloud is also in charge of changing the users and configuration of the system.

In terms of purpose, the 4th industrial revolution also defines that the system is built for technical assistance, i.e. the ability to support humans by aggregating and visualizing information comprehensibly so they can make informed decisions and solve urgent problems on short notice.

To do so, the system has to correlate several dimensions of the data and provide value from their processing. This goal will often drive the collection and retention of large sets of data for direct or indirect processing. This principle is often known as creating a data lake.

For processing, the system can handle the information as it flows (streaming data) or in chunks (batch data); most of the time, both types of data sources need to be correlated.

Even if there are several possible architectures for the solution, the most common one is called the lambda architecture. All data are collected and funnelled into two parallel processing paths based on the nature of the data (streaming or batch), then consolidated in a serving layer that provides the final digital services.
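A minimal sketch of this split is shown below; the events, counts and layer functions are invented for illustration, where a real system would use dedicated batch and streaming engines:

```python
# Lambda architecture in miniature: the same kind of events feed a batch layer
# and a speed (streaming) layer; a serving layer merges both views.
historical_events = [("product-42", 1), ("product-42", 3), ("product-42", 2)]
recent_events = [("product-42", 5)]  # arrived after the last batch run


def build_view(events: list[tuple[str, int]]) -> dict[str, int]:
    view: dict[str, int] = {}
    for product, qty in events:
        view[product] = view.get(product, 0) + qty
    return view


batch_view = build_view(historical_events)  # recomputed periodically over all data
speed_view = build_view(recent_events)      # updated incrementally from the stream


def serving_layer(product: str) -> int:
    # The digital service sees one consolidated answer.
    return batch_view.get(product, 0) + speed_view.get(product, 0)


print(serving_layer("product-42"))  # 6 from the batch layer + 5 from the speed layer
```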

This architecture, which aims to apply the appropriate processing to each type of data, is also realized by combining several types of technologies, to take maximum advantage of each of them.

This means that a digital team needs to be prepared to be efficient with many of them, selecting the technologies that it masters and the technologies that it delegates.

Finally, the system also needs to operate through decentralized decisions: the ability of cyber-physical systems to make decisions on their own and to perform their tasks as autonomously as possible. Only in the case of exceptions, interference, or conflicting goals are tasks delegated to a higher level.

This goal, combined with the use of commodity resources in cloud data centers, implies an important mindset change for people coming from a hardware-based industry.

In the digital world, performance and capability are achieved by combining resources into groups (called clusters) and applying redundancy and group management principles.

To understand this important mindset shift, let’s look at this process in detail.

  • In the hardware world, each element is built to be stable and connected to the other elements through a static (or partially variable, through switches) network. In the cloud environment, the assumption is made that each element can fail at any time and be replaced by another connected one.
  • To allow this, the connections between the machines (the topology) are dynamic, based on digital restrictions (security groups, virtual networks, …).
  • To avoid wasting resources, the replacement of a component is dynamically performed by specialized components of the cloud that monitor each cluster. Replacing a component often implies powering a new machine, installing the appropriate software to perform the expected behavior, loading the initial state and connecting it to the existing cluster (see the sketch after this list).
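The replacement loop can be sketched as follows; the node names and steps are simulated to mirror the list above, not calls to a real cloud API:

```python
# Simulated replacement loop run by the cluster monitor.
import itertools

cluster = ["node-1", "node-2", "node-3"]
node_ids = itertools.count(4)  # identifiers for the next machines to power on


def replace(failed_node: str) -> str:
    cluster.remove(failed_node)          # the failed element is discarded, not repaired
    new_node = f"node-{next(node_ids)}"  # power a new machine
    print(f"installing software and loading initial state on {new_node}")
    cluster.append(new_node)             # connect it to the existing cluster
    return new_node


replace("node-2")
print(cluster)  # ['node-1', 'node-3', 'node-4']
```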

The part of the cloud in charge of this automatic orchestration behavior uses several techniques to optimize the performance and the visible availability of the solution, such as:

  • load balancing, to route a request to the component with the best readiness to perform the action
  • circuit breakers, to disconnect a non-performing component (see the sketch after this list)
  • dynamic sizing, to maintain a number of components proportional to the observed workload
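Here is a compact sketch combining the first two techniques; the failure rate, threshold and component names are assumptions made for the example:

```python
# A toy load balancer that skips components whose circuit breaker has opened.
import random


class Component:
    def __init__(self, name: str, failure_threshold: int = 3):
        self.name = name
        self.failures = 0
        self.failure_threshold = failure_threshold

    @property
    def circuit_open(self) -> bool:
        # Circuit breaker: after too many failures, the component is disconnected.
        return self.failures >= self.failure_threshold

    def handle(self, request: str) -> str:
        if random.random() < 0.2:  # simulated intermittent failure
            self.failures += 1
            raise RuntimeError(f"{self.name} failed")
        return f"{self.name} served {request}"


def route(pool: list[Component], request: str) -> str:
    # Load balancing: pick any component whose circuit is still closed.
    healthy = [c for c in pool if not c.circuit_open]
    return random.choice(healthy).handle(request)


pool = [Component("api-1"), Component("api-2")]
for i in range(5):
    try:
        print(route(pool, f"request-{i}"))
    except RuntimeError as error:
        print(error)
```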

This automatic optimization strategy leads to another mindset change, which is to consider spare capacity (memory, processors, storage) as waste and not as good practice. This consideration is also motivated by cost, as most cloud services are billed on usage (time, data, …) and not as a static cost.

To allow this efficiency, each component needs to be able to start or stop at any time and to connect to the others through centralized registers instead of static mappings.

To understand this notion, we can draw a parallel with a support line. Each request coming in on a single number is processed by a different team member. The customer does not have to memorize a different phone number for each member.

But this also means that the request is tracked by a unique identity or identification number, and that any team member can follow up on the previous support based on persistent information created in the system.
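Back in the software world, the register can be sketched as a simple lookup; the in-memory dictionary below stands in for a real registry service, which is an assumption of this example:

```python
# Centralized register: components look each other up by name instead of
# relying on a static mapping baked into their configuration.
registry: dict[str, str] = {}


def register(service_name: str, address: str) -> None:
    registry[service_name] = address  # a new instance can register at any time


def resolve(service_name: str) -> str:
    return registry[service_name]     # callers never hard-code the address


register("billing", "10.0.0.12:8080")
register("billing", "10.0.0.27:8080")  # the instance moved; callers are unaffected
print(resolve("billing"))
```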

In summary, the decentralization principle and the automatic behavior deeply change the solution design, moving from big system components to a mesh of small independent components that interact dynamically with each other. The rules to apply in the design of each small component (called microservices) are defined in much of the literature as the 12 factors.

Learning to create and deploy this type of solution is a challenge for companies that already have an existing portfolio.

If we use the image of ice for the hardware-based solution (static footprint and topology) and vapor for … the cloud solution (a set of particles interacting with their nearest neighbors), we can look at the transformation of the company as a process of conversion from ice to cloud.

Like the physical process, even if it is water that is delivered in both cases, the delivery and the management of each form of the solution require different processes and management.

Also, if you extend the comparison, even if it is possible to convert ice directly to vapor (sublimation), this transformation implies injecting a large amount of energy into the heart of the legacy solution; this results in a significant dispersion of the supplied energy (so, for a company, inefficient investments). This means that inserting digital into existing products is not always the right strategy.

So, does this mean that the company needs to recreate a new digital version of each product in the portfolio?
Obviously, this simple answer comes with a cost challenge and a real risk of creating a digital product that does not carry all the value and lessons learned of the previous generation.

Before jumping into a huge plan to find a compromise, it is important to refocus on the core principle of the digital change.
Above all other dimensions, the change defines a new way to look at the architecture of the solution.

A digital product is built on an architecture that is data-driven, which means that the definition of the data exchanged between the elements is more important than the physical architecture that allows each component to process and deliver its data.

This implies that if the non-digital component already produces the expected data, the digital services can be built anywhere, without any constraints related to the non-digital component.

Following this principle, along the way, any monolithic component can be modernized later and replaced by several modern digital components without impacting the rest of the digital system, as long as its data is preserved or retro-compatible.
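A small sketch of this retro-compatibility follows; the event fields and the dashboard function are invented for the example:

```python
# Retro-compatibility: a modernized producer adds a field, and the existing
# digital service keeps working because the original fields are preserved.
legacy_event = {"machine_id": "press-12", "status": "running"}
modern_event = {"machine_id": "press-12", "status": "running", "temperature": 71.5}


def dashboard_status(event: dict) -> str:
    # Built against the original contract: it only relies on fields that still exist.
    return f"{event['machine_id']} is {event['status']}"


print(dashboard_status(legacy_event))
print(dashboard_status(modern_event))  # the new producer does not break the consumer
```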

In summary, the digital change forces the company into a deep structural transformation that changes the entire process of development, deployment, management and even product policy, as the company will now develop an offer of digital services.

To conclude on this last aspect, companies that specialize in digital services will have a product offer (like machines for AWS), but an important part of their offer will be a panel of services that users or partners can use (AWS now has more than 90 services).

The company then needs to decide its strategy: consider each of them as a product, allow them to be chosen individually, or group them into bundles.