September 2013

It’s the end of September, and that means it’s back to Brazil for your friendly neighborhood agilist! I’ve got a full itinerary this year, and there are a number of stops planned for this trip.

To start with, on September 26-27, I’ll be visiting the National Institute for Space Research (INPE) in São José dos Campos. While at INPE, I will be giving a talk entitled “Adaptive Object-Model Architecture: How to Build Systems That Can Dynamically Adapt to Changing Requirements.”

From São José dos Campos, I will be returning to São Paulo before embarking on my first trip to Brasília, the capital of Brazil, where I will attend MiniPLoP on the 28th. At MiniPLoP, I will be presenting the keynote address on the subject “Taming Big Balls of Mud with Diligence, Agile Practices, and Hard Work.” I will also be presenting a writers’ workshop simulation with Eduardo Guerra.

After MiniPLoP, I will be attending CBSoft from 9/29-10/3. During the 7th Brazilian Workshop on Systematic and Automated Software Testing (SAST 2013) at CBSoft, I will be giving a talk about Pragmatic Test Driven Development. At CBSoft, I will also be presenting a tutorial with Eduardo Guerra entitled “Test Driven Development Step Patterns,” and I will be reprising my talk “Taming Big Balls of Mud with Diligence, Agile Practices, and Hard Work” at the Brazilian Symposium on Software Components, Architectures and Reuse (SBCARS). On the 3rd, I will head back to São Paulo, where I’ll catch up with some friends and give a talk to some investors and startup companies about Agile Best Practices. I’ll head back home the evening of the 6th.

If you’ll be in Brazil during this time and would like to get together, please get ahold of me.


In my previous post I described how AOM helped us build an agile development pipeline and increase the frequency and, more importantly, the quality of our deliveries. This time I will describe how we harnessed AOM to revolutionize our architecture, moving from a legacy RDBMS to Big Data.

As a provider of real-time marketing solutions to Tier-1 telcos, the Pontis platform must handle massive data volumes: billions of mobile subscriber events that need to be processed and analyzed every day.

In our efforts to scale up our solution to handle the rapid growth of data, we quickly discovered that hardware and licensing costs increase super-linearly with scale. The unreasonable costs made us realize we had no choice but to switch to a Big Data architecture in order to achieve a 5-10x decrease in costs compared to our ‘classic’ architecture.

Initially – like any R&D group, I guess – we were reluctant to change. We offered many logical reasons. For one, we knew from others that moving to Big Data could be a long and painful process. It would involve rewriting the system from scratch, training all R&D engineers, building new infrastructure, and many more painful and time-consuming tasks. Such a transformation could easily take 2-3 years or longer. And after all that effort, in order to support existing customers, we would end up with two product lines: the legacy platform and the new Big Data product. This would mean two R&D teams and a never-ending compatibility headache.

AOM to the rescue

It then hit us: our AOM development environment could come to the rescue. In this approach, business logic is written in Domain Specific Languages (DSLs) and is agnostic to execution technology. The model is platform-independent and captures the business logic. Execution is done by thin engines that run the business logic over the underlying technology platform. The DSL user defines the business logic using an abstract language that may have several execution engines for different platforms.
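The separation described above can be sketched in a few lines of code. This is a minimal illustration of the idea, not the Pontis implementation; all names here (Rule, SqlEngine, InMemoryEngine) are hypothetical. The business logic lives in a platform-independent model, and each thin engine interprets that same model for its own platform.

```python
# Sketch of the AOM/DSL idea: one platform-independent model, multiple thin engines.
# All class names are illustrative assumptions, not actual Pontis APIs.

from dataclasses import dataclass

@dataclass
class Rule:
    """Platform-independent business-logic model: <field> <op> <value>."""
    field: str
    op: str        # "gt" (greater than) or "eq" (equals)
    value: object

class SqlEngine:
    """Thin engine: translates the model into SQL for the legacy RDBMS."""
    def execute(self, rule: Rule, table: str) -> str:
        ops = {"gt": ">", "eq": "="}
        return f"SELECT * FROM {table} WHERE {rule.field} {ops[rule.op]} {rule.value!r}"

class InMemoryEngine:
    """Thin engine: interprets the same model over plain records
    (standing in for a distributed Big Data runtime)."""
    def execute(self, rule: Rule, records: list) -> list:
        ops = {"gt": lambda a, b: a > b, "eq": lambda a, b: a == b}
        return [r for r in records if ops[rule.op](r[rule.field], rule.value)]

# The same Rule runs unchanged on both engines:
rule = Rule(field="daily_events", op="gt", value=1000)
print(SqlEngine().execute(rule, "subscribers"))
print(InMemoryEngine().execute(rule, [{"daily_events": 500}, {"daily_events": 2000}]))
```

The key point is that the Rule never changes when the platform does; only the engine is swapped, which is what lets one DSL serve both the legacy and Big Data stacks.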

The move to Big Data, then, could be much easier. The total separation between the business-logic and infrastructure layers reduces the need for overlapping knowledge between teams. We would not need to maintain separate product lines to support different technology platforms. Instead, we could build an Analytics-DSL with execution engines for both platforms. Since our existing analytics application was originally developed in SQL, we had to start by reverse engineering it into an AOM DSL.


True, we still had to obtain Big Data expertise. But we did not have to train all or even most of our team. Our infrastructure team mastered the necessary technologies, while the rest of R&D continued with their regular tasks.

Today, less than a year since we started our journey, we are in the certification stage and will go live with our Big Data solution in a few weeks.


Refactoring the Universe

September 13, 2013

Next week marks my first visit to Geneva, Switzerland. While there, I will be teaching a course on Refactoring to Better Design and Test Driven Development: Evolving Legacy Code at The European Organization for Nuclear Research, better known as CERN. I’m excited to be making the trip and I hope to find some time while I’m […]
