OH Canada's Procurement Review Process

While many disparage government work as bloated and 'behind the times', I would say that the Canadian government's procurement review process, while arduous and meticulous, produced great products and stopped the hemorrhaging of funds early in a project's life.

I was introduced to this process while interning in the Medical Liaison Office and Health Services Attaché department, and it remains the most detailed and apt process I have encountered. We applied these steps, tailored of course, to every project under contract for potential purchase or government-underwritten research.

During every session, we re-aligned under the mission of most effectively (note: not efficiently) using taxpayer dollars. We understood, as a collective team, that pulling funding from a project did not mean failure, but rather the ability to spend and research more appropriately on behalf of the Canadian people. It was a fine line to walk when addressing funding for emergency medical support devices, body armament systems, and remotely piloted support aircraft; however, staying aligned under a common goal to support and serve the people of Canada and her deployed service men and women the world over gave a sense of clarity to these detailed steps.

 

I share these here as a helpful process that you can tailor and leverage as you see fit for your own product development and procurement processes, and I welcome any questions or feedback.

Here are the core steps for threading a successful procurement review into your process:

During the requirements-gathering process:

  • Determine the full environment for the application of the product
  • Create a step-by-step script for how the user would utilize the product within the environment
  • Determine the minimum and estimated life of the product
  • Write the hypotheses for the test
  • Establish the acceptable deviation from 'success'
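The requirements-gathering steps above can be sketched as a single structured record, so nothing is left implicit by the time testing starts. This is a minimal illustration in Python; all field names and the pass/fail rule are my own assumptions, not part of the original process.

```python
from dataclasses import dataclass

@dataclass
class RequirementsBrief:
    """One project's requirements, captured up front.

    All field names here are illustrative, not the original process's terms.
    """
    environment: str             # full environment the product will operate in
    usage_script: list[str]      # step-by-step script of how the user applies it
    min_life_months: int         # minimum acceptable service life
    estimated_life_months: int   # expected service life
    hypotheses: list[str]        # testable hypotheses for the trials
    success_metric: float        # the value that defines 'success'
    acceptable_deviation: float  # allowed fraction of deviation, e.g. 0.05 = 5%

    def passes(self, observed: float) -> bool:
        """True if an observed result falls within the acceptable deviation."""
        return abs(observed - self.success_metric) <= (
            self.acceptable_deviation * self.success_metric
        )
```

Writing the brief down this way forces the 'success' number and its tolerance to be agreed on before any model or prototype exists.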

At the Model stage: 

  • Review the hypotheses
  • Review the number of human-machine interactions required to perform the desired task (generally, this should be fewer than four per task)
  • Perform simulated environmental tests
  • Produce a report on the performance of the assembly (by component) and compare the report against the definition of success and acceptable deviation

At the Prototype stage:

  • Perform functional tests
  • Perform environmental tests
  • Perform user tests
  • Produce a report on the performance of the assembly (by component) and compare the report against the definition of success and acceptable deviation
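The report-and-compare step at both the Model and Prototype stages can be expressed as a simple per-component check against the definition of success and acceptable deviation. A sketch under the assumption that performance is tracked as one score per component; the names and the untested-fails rule are illustrative.

```python
def compare_report(results: dict[str, float],
                   targets: dict[str, float],
                   acceptable_deviation: float) -> dict[str, bool]:
    """Compare per-component performance against the definition of success.

    results / targets map component name -> measured / target score;
    acceptable_deviation is a fraction, e.g. 0.10 = 10%. Returns a pass/fail
    verdict per component. (Names are illustrative assumptions.)
    """
    verdict = {}
    for component, target in targets.items():
        measured = results.get(component)
        if measured is None:
            # A component with no measurement fails by default: untested
            # is treated the same as out of tolerance.
            verdict[component] = False
        else:
            verdict[component] = abs(measured - target) <= acceptable_deviation * target
    return verdict
```

Keeping the verdict per component, rather than one overall grade, matches the by-component reporting described above and makes the failing parts of the assembly obvious.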

Prior to the first run of manufacturing, we would hold a Pre-production Review, a rather hefty review of the assembly in its entirety. At this stage, we would pull up and re-check:

  • Estimated time of delivery — does it still match the estimated field delivery requirements? Will we still need the equipment by the time it is ready?
  • Usage — how will this equipment be used by whom, for what, where, and why?
  • Step-by-step application — every minute detail of how we anticipate this equipment to be used
  • All previously done simulations — anything we may have missed the first time?
  • Manufacturing concerns — is there anything we should be worried about from prior experience working with this factory or vendor?
  • Delivery — how will we get this out into the field? Any assembly, weight, or packaging concerns?
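One way to keep the Pre-production Review honest is to treat it as a hard gate: every item above must be explicitly resolved before the first manufacturing run, and a missing answer counts as unresolved. A minimal sketch; the check wording and function name are my own, not the original review's.

```python
PRE_PRODUCTION_CHECKS = [
    "Delivery timeline still matches field requirements",
    "Usage (who, what, where, why) re-confirmed",
    "Step-by-step application walked through",
    "All prior simulations re-reviewed",
    "Known manufacturing concerns addressed",
    "Delivery logistics (assembly, weight, packaging) reviewed",
]

def ready_for_manufacturing(answers: dict[str, bool]) -> tuple[bool, list[str]]:
    """Gate the first manufacturing run on every check passing.

    answers maps each check to the review's True/False decision; any check
    absent from answers is treated as unresolved and blocks the run.
    """
    open_items = [c for c in PRE_PRODUCTION_CHECKS if not answers.get(c, False)]
    return len(open_items) == 0, open_items
```

Returning the list of open items, not just a boolean, gives the review something concrete to assign candy (and a paper crown) over.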

This pre-check is probably the most painful part of the whole process, so we made it into a game. We would take a large bag of some small candy, like Starburst, and if you asked a question that required a revision, explanation, or concern to be highlighted, you got a piece of candy. The person with the most candy at the end was crowned 'King Wet Blanket' and got a paper crown. Yes, we are adults.

The press test is the last test before the full manufacturing run. It's the most fun part, and so appreciated after the pre-production check. Simply take the item out, try to use it, and beat the hell out of it to see what breaks. Take all your frustrations from the dedicated testing, detailed notes, and brain exercises you've done out on the equipment — within reason, of course.

After the first delivery wave, we would do an internal pull-up to rank the vendor. This is essentially a retrospective on a vendor-by-vendor basis. It helps highlight what your team could have done better, and what concerns you may have about working with the vendor, plant, or supply team in the future. Approach this impersonally, and ask questions like:

  • Did we like working with the (vendor/plant/supplier)? Why or why not? - think about this only from the perspective of a working relationship, not a personal one. It's perfectly fine and normal to like working with someone and not like them as a human being, and vice versa.
  • Do we have full contract completion? Anything missing? Over-delivery? - over-delivery can at times be just as bad as, if not worse than, having items missing. It can also be used to hide missing contract items.
  • Were there any delays? Why? - delays happen, but you need to ensure there were not too many, and not for poor reasons.
  • Any unanticipated costs? Why?

Post-delivery, at major pre-determined deterioration checkpoints, we would pull diagnostic checks on the current fleet and compare them against the estimated deterioration of that fleet. If we saw deterioration faster than the expected rate, it was important to understand why. Maybe we didn't account for something in the situational analysis, or maybe poor-quality supplies were used, but continuous checks help ensure that one project's problems don't permeate into other projects.
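The deterioration checkpoint reduces to comparing actual loss of condition against the estimate. A sketch under assumed units (a 0-100 health score and a linear expected loss per month); the tolerance default and all parameter names are illustrative, not from the original process.

```python
def deterioration_alert(observed_health: float,
                        initial_health: float,
                        months_in_field: int,
                        expected_monthly_loss: float,
                        tolerance: float = 0.10) -> bool:
    """Flag a fleet degrading faster than estimated.

    Health values are on an arbitrary 0-100 scale; expected_monthly_loss is
    the estimated points lost per month. Returns True when the observed loss
    exceeds the estimate by more than the tolerance fraction. (All names and
    the 10% tolerance are illustrative assumptions.)
    """
    expected_loss = expected_monthly_loss * months_in_field
    actual_loss = initial_health - observed_health
    return actual_loss > expected_loss * (1 + tolerance)
```

When this fires, the question to ask is the one above: was something missed in the situational analysis, or did supply quality slip?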

 

It's a rather long process, so I highly suggest customizing it for your application. However, always keep this root process as your process validation, to ensure that each customization deviates from the most robust version of the process and not from an already slimmed-down version. It will help keep you honest and your product sound.