The HP Model Manager was initially a Product Document Management System. The company I worked for at that time extended the PDM into a full-fledged PLM System. HP sold the Model Manager to PTC, and PTC dropped the product, so the company had to replace its PLM solution with a new one.

After a selection phase, the company decided to go with ARAS Innovator.

The resulting mission: transfer the data, the model, and the processes from a heavily customised HP Model Manager into an ARAS Innovator PLM System.

For the migration, the company decided to avoid a “big bang” and instead implement a smooth transition from the legacy application to ARAS Innovator. That meant both PLM applications had to co-exist and data had to flow in both directions.

Using SOAP services to connect both systems directly sounds like a good idea, but it is not. Testing such a tightly coupled integration is a nightmare. It would also lead to massive and invasive customisation of both systems, and after the job is done you would have to remove all of that again from the new system. The solution would not scale, and we would introduce many single points of failure.

It was clear that we needed a less tightly coupled integration to get the job done and maintain our mental sanity. While researching the topic of system integration, Apache Kafka caught my attention. It was immediately apparent that Kafka is the right tool for this kind of problem.

Apache Kafka is a publish/subscribe messaging system. It is similar to a traditional message queue but with different characteristics. Typically, processing a message leads to its removal from the message queue, but this is not the case in Kafka.

In Kafka, published messages are persisted for a while. That allows you to have multiple consumers, each with its own domain-specific logic, processing the same event. For example, you can have one consumer that transfers data into ARAS Innovator and another consumer that updates the SAP system, and both consumers process the same message.
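To make the multiple-consumer idea concrete, below is a minimal sketch in Java using the official kafka-clients API. The topic name plm.item-changes, the broker address, and the group ids are assumptions for illustration; the point is that the two consumers belong to different consumer groups, so each of them receives every published message.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SameTopicTwoConsumers {

    // Each consumer group receives its own copy of every published message.
    static KafkaConsumer<String, String> createConsumer(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", groupId);
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(List.of("plm.item-changes"));     // assumed topic name
        return consumer;
    }

    public static void main(String[] args) {
        // One group feeds ARAS Innovator, the other updates SAP; both see the same events.
        try (KafkaConsumer<String, String> aras = createConsumer("aras-import");
             KafkaConsumer<String, String> sap = createConsumer("sap-update")) {
            while (true) {
                for (ConsumerRecord<String, String> r : aras.poll(Duration.ofMillis(500))) {
                    System.out.println("ARAS consumer received: " + r.value());
                }
                for (ConsumerRecord<String, String> r : sap.poll(Duration.ofMillis(500))) {
                    System.out.println("SAP consumer received: " + r.value());
                }
            }
        }
    }
}
```

In a real deployment each consumer would run in its own process; polling both from one loop only keeps the example short.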

Bootstrapping the data migration and co-existence

Before we could transfer the data, we had to customise the data model on the target system. The image below shows how we started automating the process:

Bootstrapping the integration

  1. Enter the simple property mapping information manually.
  2. Send the mapping data into a target model generator.
  3. For the target model generator, I used the FreeMarker template engine. Since HP Model Manager is a Java application, integrating FreeMarker was easy, and creating the templates was not complicated (a sketch follows after this list).
  4. The generated output was ordinary XML files that contained the model definitions as ARAS Markup Language (AML).
  5. The AML files are then imported automatically by a CLI tool we created for this job.
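To illustrate the generator step, here is a minimal sketch of how mapping data could be fed into FreeMarker to render an AML file. The template text, the property names, and the LegacyDocument item type are made up for this example; the real templates are, of course, more elaborate.

```java
import java.io.StringReader;
import java.io.StringWriter;
import java.util.List;
import java.util.Map;

import freemarker.template.Configuration;
import freemarker.template.Template;

public class TargetModelGenerator {

    // Hypothetical template: renders one ARAS ItemType with its properties as AML.
    private static final String AML_TEMPLATE =
            "<AML>\n"
            + "  <Item type=\"ItemType\" action=\"add\">\n"
            + "    <name>${itemTypeName}</name>\n"
            + "    <Relationships>\n"
            + "      <#list properties as p>\n"
            + "      <Item type=\"Property\" action=\"add\">\n"
            + "        <name>${p.target}</name>\n"
            + "        <data_type>${p.dataType}</data_type>\n"
            + "      </Item>\n"
            + "      </#list>\n"
            + "    </Relationships>\n"
            + "  </Item>\n"
            + "</AML>\n";

    public static String generateAml(String itemTypeName,
                                     List<Map<String, String>> properties) throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        Template template = new Template("aml", new StringReader(AML_TEMPLATE), cfg);
        StringWriter out = new StringWriter();
        // The data model: the item type name plus the manually entered property mapping.
        template.process(Map.of("itemTypeName", itemTypeName,
                                "properties", properties), out);
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // Example mapping rows: source property in Model Manager -> target property in ARAS.
        List<Map<String, String>> mapping = List.of(
                Map.of("source", "DOC_NUMBER", "target", "item_number", "dataType", "string"),
                Map.of("source", "REVISION",   "target", "major_rev",   "dataType", "string"));
        System.out.println(generateAml("LegacyDocument", mapping));
    }
}
```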

Kafka calls message-publishing actors producers and message-processing actors consumers. For each direction, we need a consumer. The applications act as producers: each PLM system writes changed data immediately into the appropriate Kafka topic. Consumers listen for messages, transform them, and import them into the target system.
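On the producer side, a change event could be published like in the sketch below (again Java with kafka-clients; the topic name and the idea of using the item number as message key are assumptions for illustration).

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ItemChangePublisher {

    private final KafkaProducer<String, String> producer;

    public ItemChangePublisher() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // Publishes one changed item; using the item number as key keeps all changes
    // of the same item in the same partition and therefore in order.
    public void publishChange(String itemNumber, String changePayload) {
        producer.send(new ProducerRecord<>("plm.item-changes", itemNumber, changePayload));
        producer.flush();
    }
}
```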

For further details on Kafka, you can find a good introduction at the Kafka homepage.

General Kafka concept

Implementing the consumers

I developed the first version of the consumers by hand. From that version, I refactored out the core consumer library and created the templates to generate the consumer code from the mapping that was already in place.
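To give an idea of what such generated code can look like, here is a hypothetical excerpt of a generated consumer class that applies the property mapping to an incoming record. The class name and the mapped properties are invented; only the pattern of baking the mapping into generated code reflects our approach.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical example of a class our templates could generate:
// the property mapping is baked into the code at generation time.
public class LegacyDocumentConsumer {

    // Generated from the mapping: Model Manager property -> ARAS property.
    private static final Map<String, String> PROPERTY_MAPPING = Map.of(
            "DOC_NUMBER", "item_number",
            "REVISION", "major_rev",
            "TITLE", "name");

    // Transforms a flat legacy record into the field names the ARAS import expects.
    public Map<String, String> transform(Map<String, String> legacyRecord) {
        Map<String, String> target = new HashMap<>();
        PROPERTY_MAPPING.forEach((source, targetName) -> {
            if (legacyRecord.containsKey(source)) {
                target.put(targetName, legacyRecord.get(source));
            }
        });
        return target;
    }
}
```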

Bootstrap refinement 1

So far, we had a good foundation for future refinement iterations.

Improvement Life-Cycle

With each iteration, we implemented new features. The next image shows the improvement process.

Refinements

  1. The first step of such an iteration is extending the mapping.
  2. Then we have to generate the models and the code again.
  3. If there are edge cases that we do not yet cover, we have to implement them by hand (see the sketch after this list). Sometimes only a single domain object needs special treatment; then we do not change the generator. Otherwise, we program a new rule so that the generator can handle such cases by itself in the future.
  4. The next task is to test and deploy the regenerated consumers.
  5. The last step is to import and check the data again.
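For step 3, a hand-written special case could look like the hypothetical subclass below, which builds on the generated LegacyDocumentConsumer sketched earlier: the generated transformation stays untouched, and only the one domain object that needs special treatment gets its own override. The drawing revision rule is invented purely for illustration.

```java
import java.util.Map;

// Hypothetical hand-written subclass for the one domain object that needs
// special treatment; everything else still comes from the generated class.
public class DrawingDocumentConsumer extends LegacyDocumentConsumer {

    @Override
    public Map<String, String> transform(Map<String, String> legacyRecord) {
        Map<String, String> target = super.transform(legacyRecord);
        // Edge case the generator does not cover yet: legacy drawings store the
        // revision as "A.1", while the target expects only the major part ("A").
        String revision = legacyRecord.get("REVISION");
        if (revision != null && revision.contains(".")) {
            target.put("major_rev", revision.substring(0, revision.indexOf('.')));
        }
        return target;
    }
}
```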

After several iterations, we had a pretty complete migration and co-existence system.

I hope you enjoyed reading this so far and got an insight into the way we implemented the co-existence and data migration solution for our two PLM applications.

In the next blog post, I will talk about how we ensured that the user is not working on stale data.