4 Tactics to Improve Sprint Predictability in Big Data Analytics Projects

Natalie Conklin
6 min read · Nov 5, 2018


Taking on a full-scale agile transformation is never easy for any waterfall-based engineering organization. Adopting the agile ceremonies is relatively quick and easy — training is a few days, coaching a few weeks — but the mindset changes needed to make agile truly valuable are usually measured in years. When you add in the complexities and scale of big data, the challenges of an extremely diverse set of technologies and languages, the uncertainties inherent in analytics research, and the conservative, date-obsessed approach favored by traditional telecom customers, you get a perfect storm of difficulty!

This is the challenging landscape we faced about a year ago when we started our first agile pilot project. Given this list of negatives, your first question is likely: why move to agile at all? My engineering team asked me this one…a lot! The simple answer is that while big data analytics clearly adds complexity, the reasons to march ahead anyway are the same as those for any other agile project — the need for a development structure that provides the flexibility to explore features, change requirements, adjust priorities, and still deliver incremental value to customers on a consistent and frequent basis.

Where We Fit Relative to Agile’s Sweet Spot

There’s been quite a bit written on agile’s sweet spot. A common simplification is the Stacey matrix, which is often used to depict the types of projects where agile works best. The overall picture is that with increased uncertainty in technology and requirements, agile is a better fit…up to a point. With too much uncertainty, such as is typical of initial analytics research phases, no development methodology works well. My organization’s typical development project, and likely any big data analytics project, starts right at the edge of the complex-to-anarchy boundary. Therefore, one of the first things we must typically do is decrease the unknowns.
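To make that classification a bit more concrete, here is a rough Python sketch of how the two uncertainty axes map to zones. The function name, thresholds, and zone labels are all illustrative, invented for this example, not part of any formal model we use.

```python
# A rough illustration of the Stacey-style zones: both axes run from
# "we agree / we know how" (0.0) to "no agreement / no idea" (1.0).
# The thresholds below are made up purely for the sketch.

def classify_project(requirements_uncertainty: float, technical_uncertainty: float) -> str:
    """Map a project onto a simplified Stacey-style zone."""
    combined = requirements_uncertainty + technical_uncertainty
    if combined < 0.5:
        return "simple: plan-driven approaches work fine"
    if combined < 1.2:
        return "complicated: agile adds value"
    if combined < 1.6:
        return "complex: agile's sweet spot"
    return "anarchy: reduce unknowns before committing to any methodology"

# A typical big data analytics project starts near the complex/anarchy boundary:
print(classify_project(requirements_uncertainty=0.8, technical_uncertainty=0.75))
```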

4 Tactics to Decrease Unknowns (and Increase Predictability)

In our engineering organization, there are 4 primary methods we use to decrease the unknowns and get more consistent sprint output:

1. Focus on Reuse

First, we have built, and are continuously improving, a development platform that already integrates, hardens, instruments, and scales most of the components and technologies we typically use.

Second, we keep a stable of analytics research available, and we pick projects where previous analytics research can be applied. Analytics research sits in the anarchy zone for a reason, and starting development on a project that requires completely new research is a recipe for delays…long ones.

Finally, we’ve created a culture in which teams building any new product or customer solution constantly question whether new functionality can be built as reusable modules. Selected modules can then be pushed back into our platform fabric through an innersource model of review and approval, which encourages seamless reuse later.
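As a purely hypothetical illustration of what a platform-candidate module might look like, the sketch below shows a small utility written for one solution but kept free of project-specific types, so it could later be proposed to the platform through innersource review. The function and its interface are invented for this example.

```python
# Hypothetical "platform candidate" module: a small, well-scoped utility
# written for one customer solution, but with a generic enough interface
# that it can be proposed for promotion into the shared platform.

from typing import Callable, Hashable, Iterable, Iterator, TypeVar

T = TypeVar("T")

def dedupe_by_key(records: Iterable[T], key: Callable[[T], Hashable]) -> Iterator[T]:
    """Yield records in order, dropping later records with a duplicate key.

    Written for a single pipeline, but kept free of project-specific types
    so it can be reviewed and reused as a platform module later.
    """
    seen: set = set()
    for record in records:
        k = key(record)
        if k not in seen:
            seen.add(k)
            yield record

# Usage in the originating project:
events = [{"id": 1, "v": "a"}, {"id": 1, "v": "b"}, {"id": 2, "v": "c"}]
print(list(dedupe_by_key(events, key=lambda e: e["id"])))  # keeps ids 1 and 2 once each
```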

2. Time-box research tasks

In cases where a new component, a new technology, or a new adaptation of existing analytics research cannot be avoided, we start targeted, time-boxed proofs of concept early. Teams sometimes handle these investigations as a Sprint 0 before the development project officially kicks off, or, where that is not possible, as spike stories added to the backlog. In either case, the key to success is time-boxing the research effort. The idea should not be to find the best possible solution ever, but to find the best possible solution for right now, based on what we know at this moment.
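The sketch below illustrates the time-boxing idea in Python. It is not our actual process tooling, and the candidate names, scores, and budget are made up; it simply shows the discipline of stopping when the budget expires and keeping the best answer found so far.

```python
# Illustrative sketch of time-boxing a spike: evaluate candidate approaches
# only until a fixed budget runs out, then keep the best result found so far
# rather than searching for the perfect answer.

import time
from typing import Callable, Optional, Sequence, Tuple

def run_time_boxed_spike(
    candidates: Sequence[Tuple[str, Callable[[], float]]],
    budget_seconds: float,
) -> Optional[str]:
    """Try each (name, evaluate) candidate until the time-box expires.

    `evaluate` returns a score (higher is better); the best candidate seen
    before the deadline wins, even if some candidates were never tried.
    """
    deadline = time.monotonic() + budget_seconds
    best_name, best_score = None, float("-inf")
    for name, evaluate in candidates:
        if time.monotonic() >= deadline:
            break  # time-box expired: stop, don't chase the perfect answer
        score = evaluate()
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Example: two hypothetical prototype evaluations under a 30-minute budget.
spike_result = run_time_boxed_spike(
    candidates=[("approach_a", lambda: 0.72), ("approach_b", lambda: 0.81)],
    budget_seconds=30 * 60,
)
print(spike_result)
```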

3. Use frameworks to automate…well everything!

While extensive automation gives the best chance of success for any agile project, it is a strict requirement for big data projects. Without automation, two weeks is simply not long enough to accommodate the extensive time that would be required for non-development activities like environment provisioning, verification of massive data, integration testing with extreme combinations, and tuning for performance at scale.

To make agile possible, we’ve set up automated provisioning of environments from an internal cloud. We’ve created our own test automation framework, specifically designed to handle validation of massive data without requiring recoding of complex data handling routines each time. We’ve linked our test automation framework into a Jenkins CI/CD pipeline to enable automatic promotion through validation gates, ensuring timely creation of releasable code without sacrificing quality.
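To give a flavor of the kind of reusable check such a framework can provide, here is a minimal, hypothetical sketch: rather than diffing billions of rows, it compares cheap aggregates between source and pipeline output within a tolerance. The class and function names are illustrative, not our framework’s actual API.

```python
# Sketch of a generic big-data validation check: compare cheap aggregates
# (counts, sums, distinct keys) between source and target within a tolerance,
# so each new pipeline doesn't re-code the comparison logic.

from dataclasses import dataclass

@dataclass
class AggregateSnapshot:
    row_count: int
    key_sum: float
    distinct_keys: int

def aggregates_match(source: AggregateSnapshot,
                     target: AggregateSnapshot,
                     rel_tolerance: float = 0.001) -> bool:
    """Return True if target aggregates are within rel_tolerance of source."""
    def close(a: float, b: float) -> bool:
        return abs(a - b) <= rel_tolerance * max(abs(a), abs(b), 1.0)
    return (close(source.row_count, target.row_count)
            and close(source.key_sum, target.key_sum)
            and close(source.distinct_keys, target.distinct_keys))

# In a CI gate the snapshots would come from queries against the source and
# the pipeline output; here they are hard-coded for illustration.
src = AggregateSnapshot(row_count=10_000_000, key_sum=4.2e9, distinct_keys=1_250_000)
tgt = AggregateSnapshot(row_count=10_000_000, key_sum=4.2e9, distinct_keys=1_249_900)
assert aggregates_match(src, tgt)
```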

Our continuous delivery triggers more extensive system testing aimed at areas such as performance tuning, scaling, sizing, security scans, and high-availability testing. Testing big data analytics applications is an entire discipline in itself. For example, how do you automate verification of output when the answer is not exact, but instead ‘precise enough’, based on an approximation algorithm like probabilistic counting? This is an area where we have spent a lot of time over the years, and we are still learning.
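One possible way to automate a ‘precise enough’ check, sketched here with invented names and numbers, is to assert that the estimate falls within a band derived from the algorithm’s expected relative error (for example, a HyperLogLog-style sketch with roughly 2% standard error) rather than demanding exact equality.

```python
# Sketch of an approximate-answer assertion: a probabilistic distinct count
# has a known expected relative error, so the test checks that the estimate
# lands within a band around the reference value instead of exact equality.

def within_expected_error(estimate: float,
                          reference: float,
                          expected_rel_error: float,
                          sigmas: float = 3.0) -> bool:
    """Pass if the estimate is within `sigmas` * expected relative error."""
    if reference == 0:
        return estimate == 0
    return abs(estimate - reference) / reference <= sigmas * expected_rel_error

# Example: a sketch with ~2% standard error estimating 1,000,000 distinct users.
assert within_expected_error(estimate=1_013_500, reference=1_000_000,
                             expected_rel_error=0.02)
```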

4. Groom the backlog — mercilessly

While this sounds simple and obvious, of the entire list this is the most difficult tactic to do well for big data applications. For web applications, good user stories are relatively easy to write, and ample examples are available. You can deliver incremental value by leaving certain features out, or even just disabled, in earlier releases. But how do you partially build and deliver a big data pipeline that provides no immediate value unless it is complete? How do you incrementally deliver analytics algorithms that will take more than a sprint to code and test? We’ve made progress here, and we are starting to define our own set of good examples to share among teams, but good incremental user stories for big data applications are harder than we realized. However, the value of getting better at this should not be underestimated: it is the single greatest contributor to the uncertainty in any sprint we start.
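As one hypothetical illustration of slicing, the sketch below models a pipeline whose stages are independently releasable and toggled on as their stories finish, so an earlier sprint can still ship a shorter but end-to-end pipeline. The stage names and flags are invented for this example.

```python
# Illustrative slicing of a pipeline into incremental stories: each stage is
# independently releasable, and a sprint can ship a shorter but still
# end-to-end pipeline by enabling only the finished stages.

from typing import Callable, Dict, List

Stage = Callable[[List[dict]], List[dict]]

def ingest(records: List[dict]) -> List[dict]:
    return records  # story 1: raw ingest, pass-through

def cleanse(records: List[dict]) -> List[dict]:
    return [r for r in records if r.get("value") is not None]  # story 2: drop bad rows

def enrich(records: List[dict]) -> List[dict]:
    return [{**r, "enriched": True} for r in records]  # story 3: lands next sprint

def run_pipeline(records: List[dict], enabled: Dict[str, bool]) -> List[dict]:
    stages = [("ingest", ingest), ("cleanse", cleanse), ("enrich", enrich)]
    for name, stage in stages:
        if enabled.get(name, False):
            records = stage(records)
    return records

# Sprint N release: enrich isn't done yet, but the pipeline still delivers value.
out = run_pipeline([{"value": 1}, {"value": None}],
                   enabled={"ingest": True, "cleanse": True, "enrich": False})
print(out)  # [{'value': 1}]
```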

These tactics are not new or unique, and they are important in every agile project. But I’ve found that in big data analytics projects they are the most difficult to do well, and yet the most critical to success. There are many recommended practices in agile, but for now we are laser-focused on getting these right, and we’re starting to see the benefits of that commitment.

We are by no means done with our agile transformation story, but we’ve learned some valuable lessons on our journey thus far. Because I’ve struggled to find resources on adapting accepted methods and industry standards to our particular set of complexities, I’m sharing some of our early steps here.

In future articles I plan to address “Test Automation for Big Data Analytics” and “Writing Better User Stories for Big Data Analytics” in greater depth, so please follow along if it helps. But remember, the information I’m sharing is by no means the only way or the best way. It simply reflects where we are today, based on what we know today. We could change course tomorrow if we learn something new. And if you have a better way or know a good resource that can help, please comment for the benefit of all! We will only succeed if we learn from each other.


Natalie Conklin

Fearless and forever curious — a life-long learner, explorer, cat-herder, and engineer, leading software projects for some of the world’s coolest companies.