Hein Fleuren is a part-time full professor at the Department of Econometrics and OR, as well as a partner and founder of BlueRock Logistics in Den Bosch. Since September 1, 2016 he has held a special chair on Data Science for Humanitarian Innovation, together with Prof. Conny Rijken from the Law School.
In my last column, I elaborated on the possible consequences of our work: they can be far-reaching, as we have seen. An essential part of good OR/BA work is excellent explanation of what our models calculate and how the outcomes should be interpreted. This helps management, and the change management process, enormously. In practical situations, this is where I see many things go wrong.
To trigger your thinking: how many times during your studies have you already handed in an assignment or Bachelor's thesis with the conclusion 'the optimal answer x is …', followed by a number with three digits after the decimal point?
I start by assuming that your validation and verification process was sound. This means the answer is in essence meaningful, which was not the case when a young consultant, presenting his results to the board, spoke about an average vehicle load of 23,000 tons. Oops, sorry: he had mixed up kilograms and tons in his model. And that in a situation where the difference between 22.2 (the base case) and 22.4 tons per vehicle (t/v) can be a big improvement and on the agenda of many, many board meetings.
But you had not made these mistakes, I am sure (you are from Tilburg!). No, the outcome of your model was a decent 23.0 t/v, and you checked it carefully. Moreover, this outcome came from a carefully designed tactical LP model with many constraints and a complex cost objective.
What I see quite often is that this outcome is presented and then… period; the presentation stops. We modelers have faith in our methods, and we are happy to reach an objective that is much better than the existing situation. Imagine here: from 22.2 to 23.0 t/v. That is a 3.6% improvement, larger than the profit margin of many transportation companies. But what does this 23.0 t/v tell us, and what more can we explain to management?
It is good to investigate which constraints in the model are active and how they limit the solution to 'only' 23.0. Is it a single constraint (group) that limits the solution, or are several constraint groups (nearly) binding simultaneously? Many more useful recommendations can follow from here. One can show the sensitivity to a few key parameters and determine how difficult it is to 'relax' these constraints. Relaxing the constraint on the port, for example, gives a huge improvement to 24.0 t/v but costs a few million euros; relaxing the working-time constraint costs relatively little and gives an improvement to 23.4, which might be the more desirable solution.
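As a minimal sketch of the first step above, one can check which constraints are binding (active) at the optimum by looking at their remaining slack. The constraint names and all numbers below are hypothetical stand-ins for the column's tactical LP, not the actual model:

```python
# Toy check of active constraints at an optimal solution.
# All constraint names and values are hypothetical illustrations.

TOL = 1e-6  # numerical tolerance for "binding"

# Left-hand-side value at the optimum vs. right-hand-side limit (lhs <= rhs).
constraints = {
    "port capacity": {"lhs": 500.0, "rhs": 500.0},  # fully used: binding
    "working time":  {"lhs": 160.0, "rhs": 160.0},  # fully used: binding
    "fleet size":    {"lhs": 42.0,  "rhs": 50.0},   # slack left over
}

# A constraint is active when its slack (rhs - lhs) is (numerically) zero.
active = [name for name, c in constraints.items()
          if c["rhs"] - c["lhs"] <= TOL]

print("Active constraints:", active)
# → Active constraints: ['port capacity', 'working time']
```

The active constraints are the natural candidates for the relaxation study the column describes: for each one, estimate both the improvement in t/v and the cost of relaxing it.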
But also look the other way. What happens if some of the parameters in the crucial constraints turn out to be different? This is also sensitivity analysis, but in the other direction (and it is almost always forgotten). It is also a kind of risk analysis of your current solution. If fewer vehicles are available than you thought, the solution might easily drop to 22.5 t/v, which is only a good 1.3% above the base case; still attractive to go for, but not the original 3.6%!
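This "reverse" sensitivity check amounts to re-solving the model under pessimistic parameter values and recomputing the improvement over the base case. A sketch, with a trivial stand-in function in place of re-running the full LP and with the column's illustrative numbers:

```python
# Reverse sensitivity as risk analysis (hypothetical numbers).
BASE_CASE = 22.2  # t/v in the current situation

def solve_model(vehicles_available):
    """Stand-in for re-solving the tactical LP with a changed parameter.

    Returns the optimal average load in t/v. The threshold and outcomes
    are purely illustrative.
    """
    return 23.0 if vehicles_available >= 40 else 22.5

for n_vehicles in (40, 35):
    tv = solve_model(n_vehicles)
    gain_pct = (tv - BASE_CASE) / BASE_CASE * 100
    print(f"{n_vehicles} vehicles: {tv} t/v ({gain_pct:.1f}% over base)")
```

With 40 vehicles the promised 3.6% improvement holds; with only 35 it shrinks to roughly 1.4%, which is exactly the kind of caveat management should hear alongside the headline number.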
Systematically investigating the parameters and displaying them in a so-called tornado chart really helps management understand what is calculated and what the (biggest) sensitivities are. I learned from Prof. Kuno Huisman that top management at ASML is now so familiar with the tornado charts produced by his department that they are the first thing they look at. There are also many other ways to display sensitivities. This gives many more 'optimal' solutions.
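The data behind a tornado chart is simple: for each parameter, solve the model at a low and a high value, then rank the parameters by the size of the resulting swing in the outcome. A sketch with hypothetical parameters and outcomes around the column's 23.0 t/v, rendered here as a text "tornado":

```python
# Tornado-chart data: outcome range per parameter, ranked by swing.
# Parameter names and (low, high) outcomes in t/v are hypothetical.
swings = {
    "port capacity": (22.4, 24.0),
    "working time":  (22.7, 23.4),
    "fuel price":    (22.9, 23.1),
}

# Widest swing on top: that is what makes the chart tornado-shaped.
ranked = sorted(swings.items(),
                key=lambda item: item[1][1] - item[1][0],
                reverse=True)

for name, (lo, hi) in ranked:
    bar = "#" * round((hi - lo) * 10)  # crude text bar, 0.1 t/v per '#'
    print(f"{name:14s} {lo:4.1f}..{hi:4.1f} {bar}")
```

In practice one would draw this with horizontal bars centered on the base-case outcome; the ranking immediately shows management which one or two parameters dominate the result.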
Explanation, in its broadest sense, really helps to make the outcomes understandable, as I have outlined above. It can give 'more-optimal' solutions (analytical nonsense, of course), but also more, optimal, solutions! 23.0: never forget this number, nor its explanations…