Why Productivity Gains from Gen AI Start with Responsible Development

An opinion piece from the senior director of engineering at Slack

V Brennan, Senior Director of Engineering, Slack

December 12, 2023


New generative AI tools are changing the game for tech employees and developers. By drawing on these AI helpers, teams can save many hours of work - whether by simplifying the creation of custom apps, testing new code more efficiently or getting rapid updates on a deployment. Building a successful and productive tech team of the future will mean making the most of these technologies and the productivity boosts they can offer.

However, technology leaders also need to be clear-sighted about the potential risks. AI is still largely uncharted territory, and its rapid developments (and the excitement around its capabilities) make it easy to get caught up in the hype cycle. That is why making the most of AI starts with understanding the impact of new tools, confirming that they have been built responsibly and only then deploying them to the right strategic use cases.

The importance of AI guidelines 

When a new technology disrupts the market, things have a tendency to go a little ‘Wild West.’ Prospectors start searching for smart investments, risk-takers test the limits of regulation and many people try to make a quick buck. In the heat of the moment, risks can be overlooked. 

As businesses bring AI technology to new use cases, it is vital that they do so with a sense of responsibility, inclusivity and intentionality. It is not enough to simply deploy generative AI capabilities for tech teams. It has to be done in a thought-out manner, with guidelines and safeguards in place.

This is the responsibility of both the vendors that build these tools and the leaders who choose to implement them. Minimizing the potential for mistakes, keeping data protected, and ensuring tech teams know how to make the best use of AI are all key if teams are to enjoy its productivity-boosting potential.

Signs of responsible AI development 

When selecting AI tools, tech leaders should look for certain signs that demonstrate responsibility. First and foremost is accuracy: AI solutions should be able to demonstrate a commitment to accurate results, and be transparent when there is a risk of error or uncertainty. No AI system is infallible, and it is key that providers are open about potential mistakes.

Safety is also key, particularly when it comes to mitigating bias in AI platforms and protecting sensitive data. AI providers have a duty to provide information on how they are addressing these issues, and be honest about the data sources they use to train their solutions. 

Finally, human empowerment is vital. This means that AI should be developed to supercharge teams, not replace them. While some processes in a tech department's workflows will naturally be fully automated by AI, the innovative and impactful work needs to stay in the hands of humans.

Launching automation and AI 

If step one in getting the most from AI is understanding how it is built, step two is putting it to work on specific use cases.

Narrowing down those use cases is key because one of the challenges of integrating AI is knowing where to start. Should the tech team begin by automating as many routine processes as possible, use AI to test out new deployments, simply see how AI summaries can cut down on meeting time or try all of these options at once? 

The answer to this question will depend on the business, its KPIs and the capacity of the tech team. But to get an idea of what a successful implementation can look like, consider the example of the fast-growing mobility business Bolt.

Bolt’s tech and development team decided to focus on accelerating work primarily with two types of automations. The first is a set of small integrations between its central productivity platform and other key tools like Google Calendar - reducing the need for employees to switch manually between platforms.

The second goes a little deeper. The in-house engineering team built a custom app called the ‘Test Stability Reporter.’ Plugged into the company’s productivity platform, the tool delivers a bundled report directly to the team every morning, showing which tests succeeded and which failed.

That means when the team starts the day, they can see exactly what worked, and what needs fixing. This is just one of the many different custom workflows and automations that the engineering department at Bolt is taking advantage of - with more being built on an ongoing basis.
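For readers curious how such a workflow might look in practice, here is a minimal, hypothetical sketch in Python - not Bolt’s actual implementation. It assumes overnight test results land in a JSON file and that the team’s productivity platform exposes an incoming-webhook URL for posting messages; the webhook URL, file name and field names are all placeholders.

# Hypothetical sketch of a daily 'Test Stability Reporter'-style bot (not Bolt's actual code).
# Assumes overnight test results are written to a JSON file and that the team's
# productivity platform (e.g. Slack) provides an incoming-webhook URL for posting messages.
import json

import requests  # third-party HTTP library

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder webhook URL
RESULTS_FILE = "nightly_test_results.json"  # e.g. [{"name": "checkout_flow", "passed": false}, ...]


def build_report(results):
    """Bundle pass/fail counts and the names of failing tests into one message."""
    failed = [r["name"] for r in results if not r["passed"]]
    passed = len(results) - len(failed)
    lines = [f"Test Stability Report: {passed}/{len(results)} tests passed overnight."]
    if failed:
        lines.append("Needs attention: " + ", ".join(failed))
    else:
        lines.append("All green - nothing to fix this morning.")
    return "\n".join(lines)


def main():
    with open(RESULTS_FILE) as f:
        results = json.load(f)
    # Post the bundled report to the team channel via the incoming webhook.
    response = requests.post(WEBHOOK_URL, json={"text": build_report(results)}, timeout=10)
    response.raise_for_status()


if __name__ == "__main__":
    main()

Run each morning by cron or a CI scheduler, a script along these lines would give a team the same at-a-glance view of what worked and what needs fixing.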

While just one example, Bolt shows that by starting with a trusted platform and a clear set of goals, teams can realize the benefits that automation and AI have to offer.

Continuing to learn in the AI era

Making the most of AI is a balancing act. The benefits of accelerating processes have to be weighed against the need to keep accuracy and safety front and center.

Thankfully, tech leaders can take the measure of different solutions. Leading AI providers are already highlighting their guidelines and approach to development, while those that lack transparency are raising a red flag. Meanwhile, early adopters offer helpful frameworks for leaders to follow.

Tech leaders are well placed to take advantage of AI. It is in their DNA to assess, deploy, iterate and learn - qualities that will all be essential as the AI era of productivity kicks off in earnest. 


