Welcome back to Agile AI at Georgian, where we share lessons learned about how to adapt agile methodologies for AI products. Both in Georgian’s in-house product development work and through working side-by-side with dozens of data science teams at our portfolio companies, we’ve iterated, experimented and refined our processes.
In our last installment, we discussed the importance of setting a North Star — a purpose for your project that you can rally your team around. Today, I want to talk more specifically about the team itself, because building and nurturing a team in the optimal way takes special attention to the specific needs of AI products.
Getting the right skills on your team
Skills are important on any team, but in AI, a diversity of backgrounds is important as well. It’s great to have team members who are newer to AI, but you’ll want to make sure you have a few key bases covered with more seasoned folks. Your product management team, in particular, should seek to develop some foundational skills relevant to ML, model QA, and deployment — these things will affect timelines and product functionality.
It’s important to note that as AI has matured, the need for roles beyond data scientists has become apparent. While it’s important to be able to design new models and test novel ideas, moving beyond prototypes requires a production mindset. You’ll need ML engineers who can bring that mindset, as well as someone who can think about compliance and governance as you incorporate AI into your product.
Beyond the skills themselves, you’ll want to be thoughtful about how they are distributed throughout your team. One surefire way to slow down an AI project is a bottleneck resulting from a single point of failure: a part of your project that requires skills held by only one team member. To avoid falling prey to this predicament, invest in documentation, cross-training and knowledge sharing — both within the context of specific projects and at the team level more generally.
This doesn’t have to be onerous and time-consuming. To keep things interesting, for example, Georgian hosts internal paper clubs each week covering the most recent AI research papers. In one recent meeting, we discussed a number of zero-shot learning NLP experiments and identified common problems and solutions.
We also host regular community discussions where members of the Georgian portfolio and wider community join us for monthly paper club sessions featuring leading AI experts. These sessions, which are 50% theory and 50% hands-on/applied topics, cover various aspects of machine learning such as machine learning operations (MLOps) and computer vision.
One of my favorite things about AI is the ability to collaborate across disciplines. A linguist like myself thinks differently from a statistician or an engineer — but all have valuable contributions to make to AI projects.
Beyond diversity of discipline, successful AI teams also need a diversity of perspectives. While some people are constantly excited about the new problem of the day and can’t wait to dive in, others are motivated by building lasting infrastructure for long-term value. Recruiting a team with a diversity of life experiences, across the spectrum of gender, race and age is also important, so team members can bring different perspectives to the table and help guard against bias in your AI product. Help team members understand, respect and appreciate these differences, because they are what makes building these systems possible.
At Georgian, we use a variety of methods to keep our own diverse team working together smoothly. Our whole team takes regular surveys and engages in retrospectives to help everyone get to know each other’s work preferences. An awareness of how each individual works also translates into more insight when we design team processes. For instance, in the brainstorming and reflection phases, we use both live and asynchronous methods to ensure we’re getting everyone’s best input. We’ve found that team members’ preferences also change through the process of working together, so we keep reassessing and calibrating ways for people to contribute.
Cross-disciplinary empathy also comes with practice. Our data scientists rely heavily on our engineers, who have built platforms that massively speed up experimentation. Our engineers, in turn, appreciate the impact their foundational infrastructure has on teammates across disciplines.
Motivate your team through the rough patches
Given the uncertain outcomes of most AI work, your AI team may require a little extra help to stay positive and feel pride in their contributions. Here are a few ways to keep your team energized and motivated:
- Failure is a fact of life in AI teams — so embrace it! Create a journal of interesting results on your Wiki page to capture what you’ve learned, for instance. Treat highly surprising results as “information gain moments.”
- Avoid multitasking — keep team members focused on one project at a time. After a few months, give team members flexibility to work on new projects.
- Rotate your team through different project types. A team that has just done a grueling sprint on a high-impact, demanding project might be ready for a more open-ended, experimental sprint that allows them to reconnect to the broader field and gain new skills and perspectives.
- Don’t be afraid to change the structure if team motivation is starting to lag because of lack of progress. At Georgian, we use hackathons to unblock challenges or to discover possible pathways. At a time when we were feeling stymied by challenges, we held a 4-day hackathon for our sourcing engine that yielded multiple promising product directions.
- Connect your team directly with users so that they can see their work have a meaningful real-life impact. For instance, implement demo days to gather feedback on releasable software, or invite lead users to co-present new product features and share their case studies.
- Consider a special time allotment for people to be creative — maybe 10-20% time for which there is no specific expected output.
Above all, keep iterating on your team processes and make sure there is a mechanism for your team to share learnings and reflections on a regular basis. Feeling like things are getting better is crucial to keeping team motivation high even in the face of the often frustrating challenges of AI projects.
This is the second in a series on agile AI. If you would like to receive the rest in your inbox, sign up for our newsletter here.
Next time in this series, I’ll talk about experimentation and effort allocation.
This article was originally published on Georgian’s Medium.