What matters most when building algorithms - people, process or code?


The 5 phases of product development for building responsible, people-friendly algorithms

The role and impact of algorithms have been getting a lot of attention lately. They make decisions for us on a daily basis, and we’re finally asking questions about the practical and ethical implications.

This is wonderful. On the flip side stands a more pragmatic question: how do we actually go about building them? Plenty of technical guides and tutorials answer that question, but scant attention is given to the role of processes and people. This myopic, technical view is perhaps a subtle but significant contributor to the unmanaged, runaway impact that algorithms have had on our daily lives. So how does one harness a broader set of viewpoints to build more responsible algorithms? I’d like to share my experience tackling this topic as Head of Product at Bibblio.

During the past couple of years we’ve taken Bibblio’s algorithm products through the startup gauntlet. Simple, quickly-hacked-together prototypes gave way to robust, scalable architectures and data pipelines. A whole new type of problem emerged when we started designing algorithms. Between us we hold the perspectives of educators, philosophers, data scientists, software developers, product managers, lawyers, designers, media theorists and customer support specialists. Each one can provide distinctly valuable input on every algorithm, especially when trying to build them responsibly. The challenge lies in harnessing these perspectives.

So how do we do it? Firstly, let’s not pretend a problem like this could ever be ‘cracked’. It’s an ongoing challenge and we work on it constantly. Our current process is divided into five phases, each serving as a transition from one mindset to another. This facilitates a diverse set of inputs from various members of the team. It is, of course, subject to having a diverse team in the first place, so that’s definitely worth considering.

Each phase entails a series of steps and has key outcomes. Let’s go through them.

1) Incentive Phase

From product development to data science.

You could call this the ideation phase, but it’s a slippery slope from there to platitudes like ‘leveraging synergy’. I prefer ‘incentive’ because it’s worth highlighting that there is (and should be) an incentive driving algorithm behaviour. Facebook claims to be about connecting people, but it’s ultimately incentivised to keep users scrolling and revealing their preferences in order to serve them more targeted ads. So what is the incentive to build an algorithm in the first place? More clicks? More sales? Higher ‘satisfaction’? What does that even mean? How can it be measured? What are the social implications of optimising for that metric? These are deep, difficult questions that are dangerously easy to ignore. Answered early, they can guide the entire process to a far more intentional result.
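To make that concrete, here’s a minimal sketch of what pinning an incentive to a measurable metric might look like, paired with a counter-metric that guards against runaway optimisation. The event names, functions and numbers are invented for illustration; they aren’t drawn from Bibblio’s stack.

```python
# A minimal sketch of pinning an incentive to a measurable metric,
# paired with a counter-metric that guards against over-optimisation.
# All event names and numbers here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Incentive:
    name: str            # what we claim to optimise, e.g. "engagement"
    metric: str          # how we measure it, e.g. click-through rate
    counter_metric: str  # what we watch to catch runaway optimisation

def click_through_rate(clicks: int, impressions: int) -> float:
    """Primary metric: fraction of recommendations that were clicked."""
    return clicks / impressions if impressions else 0.0

def quick_bounce_rate(bounces: int, clicks: int) -> float:
    """Counter-metric: clicks abandoned within a few seconds.

    A rising CTR alongside a rising bounce rate suggests we are
    optimising for curiosity clicks rather than genuine interest.
    """
    return bounces / clicks if clicks else 0.0

engagement = Incentive(
    name="engagement",
    metric="click_through_rate",
    counter_metric="quick_bounce_rate",
)

print(engagement)
print(f"CTR: {click_through_rate(120, 3000):.1%}")
print(f"Bounce rate: {quick_bounce_rate(40, 120):.1%}")
```

The pairing is the point: a primary metric alone invites gaming, while a counter-metric forces the conversation about what ‘success’ should not cost.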

The incentive phase is also the easiest and most obvious place to elicit broader input. Talk to customers, support staff and product managers about your goals and how they might be achieved. Discuss the social impact of apparently self-evident measures and assumptions. There’s probably someone on the team with a humanities background and a knack for the big picture. Speak to them, even if only to organise your own thoughts. Their perspective is valuable and it will rub off if you engage with it.

Steps:

  • identify product needs
  • establish algorithmic incentives
  • discuss measurements and optimizations relating to these incentives
  • communicate objectives to data scientists 

Outcomes:

  • considered intentions
  • clear R&D objectives

2) Research & Development Phase

From data science back to product development.

During R&D we go from an incentive to identifying a specific algorithm for implementation. This might seem obvious, but from an entrenched agile perspective it’s easy to imagine a process that moves from incentive directly to a minimum viable product or prototype. Move fast and break stuff, right? This doesn’t hold so well for algorithms. There are often several viable options and a variety of established implementations of each. The best-performing one is also likely to be highly contextual. What might work on one data set could fail miserably on another. That said, lean/agile methodologies are valuable and can be applied here. Move quickly, iterate, evaluate, adapt. These principles hold but they should be applied to the process of selecting and designing an algorithm, not building one in production (yet).

For us this means isolated trial implementations of several distinct algorithms, often in Python Jupyter Notebooks outside of the core stack. These implementations accumulate through a series of spikes: each one answers a specific question about the performance of a particular algorithm and concludes with product and technical feedback. To keep R&D focused and goals shared, we place emphasis on executing and testing spikes as per our normal development and QA processes. A product manager helps constrain the spikes, the dev team helps estimate them (often as time-boxes), and they are acceptance-tested by members of both the product and dev teams.
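To give a flavour of what one of these spikes might contain, here’s a sketch of a single notebook cell answering one narrow question. The toy dataset and the TF-IDF-plus-cosine-similarity approach are illustrative assumptions, not Bibblio’s actual algorithms.

```python
# A sketch of a single R&D spike cell. The question it answers:
# "How do TF-IDF similarity recommendations behave on a small sample
# of our documents?" Data and algorithm choice are illustrative only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "How transformers changed natural language processing",
    "A beginner's guide to natural language processing",
    "Why product teams should care about data pipelines",
    "Scaling data pipelines for machine learning workloads",
]

# Trial implementation: TF-IDF vectors plus cosine similarity.
vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
similarities = cosine_similarity(vectors)

# Inspect the top recommendation for each document (excluding itself);
# this is the output we hand over for product and technical feedback.
for i, doc in enumerate(documents):
    scores = similarities[i].copy()
    scores[i] = -1.0  # never recommend a document to itself
    best = scores.argmax()
    print(f"{doc!r} -> {documents[best]!r} (score {scores[best]:.2f})")
```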

In practice, testing tends to require little more than an in-depth conversation. The data science team walks through the trade-offs, parameters and performance metrics for each algorithm, along with a technical review of the trial implementation. You could say that the purpose of each spike is not only to answer a question but also to carry the context of an algorithm across various mindset boundaries. The devs gain technical context for implementation and the product manager gains a functional understanding of each algorithm.

Once all spikes have been completed we perform a blind, qualitative evaluation of the results with the broader team. A diverse set of people test sample outputs, evaluating them against the incentives and measures determined in phase one. Thereafter we present the top-ranking algorithms and make our final selection as a team.
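The harness for a blind evaluation like this can be very small. In the sketch below, the algorithm names, sample outputs and rating scale are all invented; the essential property is that reviewers rate outputs without knowing which algorithm produced them.

```python
# A minimal sketch of the blind evaluation step: sample outputs from
# each candidate algorithm are shuffled and shown without labels, so
# reviewers rate results rather than reputations. Names, outputs and
# the 1-5 scale are illustrative assumptions.

import random
from collections import defaultdict

# Sample outputs keyed by (hidden) algorithm name.
candidates = {
    "tfidf_cosine": ["rec set A1", "rec set A2"],
    "word2vec_knn": ["rec set B1", "rec set B2"],
}

# Build anonymised trials, remembering which algorithm produced each.
trials = [(algo, output) for algo, outputs in candidates.items()
          for output in outputs]
random.shuffle(trials)

scores = defaultdict(list)
for algo, output in trials:
    print(f"Sample output: {output}")
    rating = int(input("Rate 1-5 against our incentive: "))
    scores[algo].append(rating)  # the reviewer never sees `algo`

# Reveal the ranking only after all ratings are in.
for algo, ratings in sorted(scores.items(),
                            key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{algo}: mean rating {sum(ratings) / len(ratings):.2f}")
```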

Steps:

  • identify a series of R&D spikes to investigate various algorithms
  • constrain those spikes with the product team to focus intentions and prevent research sprawl
  • execute spikes on the backlog where they are visible to the dev team
  • QA spikes with product and dev team
  • evaluate algorithm performance with broader team
  • select best-performing algorithm (measured according to the initial incentive)


Outcomes:

  • a considered choice of algorithm to implement
  • shared understanding of algorithm particulars across data science, dev and product teams
  • an isolated, reference implementation of the algorithm selected for production

3) Product Development Phase

From product to software development.

From this point on we’re in more familiar territory. We now have specific objectives that must be sliced into user stories and fed to the scrum machine. That’s oversimplifying it a bit; a lot goes into mapping product features to technical requirements. That is, however, outside the scope of this article. The important bit is that we now have a clear feature request in two parts: our initial incentive and a trial implementation of the selected algorithm. A standard agile dev process should be perfectly capable of handling it from here. The product and dev teams should have enough context to break the feature down into user stories since they’ve been so involved in the process thus far.

There is one caveat: this is an opportunity to maintain quality in the algorithm stack. How the algorithm is fitted to the existing architecture can lead to beautiful code or an incomprehensible mess. Where it lands is entirely up to the quality standards the team employs during this and the subsequent phase.
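One way to land on the beautiful-code side is to hide the algorithm behind a narrow interface that the rest of the system depends on. The sketch below is one possible shape, not a prescription; the class and method names (and the precomputed similarity index) are hypothetical.

```python
# A sketch of keeping the algorithm modular when fitting it to an
# existing stack: a narrow interface the rest of the system depends
# on, with the selected algorithm as one interchangeable
# implementation. All names here are illustrative.

from abc import ABC, abstractmethod

class Recommender(ABC):
    """The seam between the algorithm and the rest of the system."""

    @abstractmethod
    def recommend(self, item_id: str, limit: int = 5) -> list[str]:
        ...

class TfidfRecommender(Recommender):
    """Production port of the trial implementation selected in R&D."""

    def __init__(self, index):
        self._index = index  # hypothetical precomputed similarity index

    def recommend(self, item_id: str, limit: int = 5) -> list[str]:
        return self._index.most_similar(item_id)[:limit]

# Callers depend only on `Recommender`, so swapping algorithms later
# is a change at the composition root rather than a rewrite.
def related_content_endpoint(recommender: Recommender, item_id: str):
    return {"item": item_id, "related": recommender.recommend(item_id)}
```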

Steps:

  • deconstruct algorithm and fit to current system architecture
  • create user stories
  • communicate both the algorithm's design and incentive in these user stories

Outcomes:

  • modular algorithm design
  • clear user stories

4) Implementation Phase

From software development back to product.

This is another standard phase that’s outside the scope of this article, so we won’t go into much detail. There should now be user stories on a backlog for the dev process to work through. Be sure to provide data science support to the devs. If you’re lucky enough to have a data scientist with an interest in software development and knowledge of the relevant programming language, there’s no reason to stop them from implementing a few choice user stories in production. Feel free to blur some boundaries in the interest of collaboration. At the very least, make sure there’s good communication between data science, product and dev during this phase, and encourage the dev team to maintain quality (giving them the time they need to do it).

Steps:

  • implement user stories
  • iterate, evaluate, adapt

Outcomes:

  • production-grade algorithm code
  • shared understanding of the algorithm stack

5) Testing Phase

From product development to customers.

Once the new algorithm has been implemented we need to test it against our initial incentives. I’m not talking about unit testing or QA on each story; those are important too, so don’t skip them. Once that’s done, we need to zoom all the way back out and test whether the implementation actually does what we want it to. This means another round of subjective evaluation, and probably another round of user stories to adjust tuning parameters and fix bugs. Then it’s time to get customers involved. Beta test with a small set of friendly customers if you can. A/B test if that’s an option. However you do it, you’re ready to roll this out and keep learning from it in the wild.
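If A/B testing is an option, a two-proportion z-test is one simple way to sanity-check whether the new algorithm’s click-through rate genuinely beats the old one’s. The traffic numbers below are invented, and this is a back-of-the-envelope sketch rather than a substitute for a proper experimentation platform.

```python
# A back-of-the-envelope A/B comparison: a two-proportion z-test
# checking whether variant B's click-through rate beats control A's.
# The traffic numbers are invented for illustration.

from math import sqrt, erf

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Return (z, one-sided p-value) for H1: rate B > rate A."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # upper normal tail
    return z, p_value

# Control (a): old algorithm; variant (b): new algorithm.
z, p = two_proportion_z_test(clicks_a=420, n_a=10_000,
                             clicks_b=495, n_b=10_000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
if p < 0.05:
    print("Variant looks better; roll out and keep learning.")
```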

Steps:

  • perform internal qualitative evaluations
  • iterate on corrective user stories
  • beta or A/B test
  • release your algorithm unto the world

Outcomes:

  • a shiny new algorithm in production

This process should foster collaboration and a shared understanding of the implications of each algorithm. It should also give you ample opportunity to evaluate algorithms from a variety of perspectives, so it’s worth thinking about the mindsets you wish to foster and include as you do. Process merely provides a framework; this one yields opportunities to consider the impact of your decisions and assumptions.

It is ultimately up to you to elicit diverse input, ask difficult questions, and listen when others question your goals and assumptions.
