Today, anyone can begin building AI into their business; with the advent of open-source technologies in this space, execution isn't the elusive dream it used to be. For AI initiatives to run their course and create long-term impact, a strong link to the product is necessary, and cultural and organisational changes matter more than technical deployments. This article introduces a three-step process, derived from my book Starting and Scaling AI Ventures, to help your organisation become AI-powered: the pre-start, the start, and the element of repeatability. The pre-start covers identifying an AI problem, adding value beyond your best technologies, and keeping an industry- or domain-specific eye on your initiatives. The start discusses how AI requires a culture of data-backed decision making, how to empower your current team and prepare them for an AI-ready future, how to watch for biases and loopholes within your processes, and why the domain of your deployment deserves scrutiny. The final stage, repeatability, covers moving forward after failures, understanding metric measurement, the side effects of network effects, and the presence of data monopolies.
The first step before knowing whether this transformation is for you is to identify whether the problem you are trying to solve is an AI problem. Make sure you're not force-fitting AI into your current processes for its own sake; doing so would cost more than it would benefit. I've often noticed a sizable proportion of individuals applying black-box algorithms to large datasets, only to realise that the outcome that would most benefit the business isn't the one they arrived at. If you're trying to use your data to make decisions, explore analytics as a precursor. Once you have enough direction on where your data is leading, rethink your strategy and ask yourself the question again.
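To make the "analytics as a precursor" advice concrete, here is a minimal and entirely hypothetical sketch: before reaching for any model, simple descriptive statistics on (invented) employee engagement scores can already point to where a business should look first. The department names and scores below are illustrative assumptions, not real data.

```python
from statistics import mean

# Hypothetical employee-survey records: (department, engagement score 1-5)
records = [
    ("Sales", 2), ("Sales", 3), ("Sales", 2),
    ("Engineering", 4), ("Engineering", 5),
    ("Support", 2), ("Support", 1), ("Support", 2),
]

# Plain descriptive analytics: group the scores by department.
scores = {}
for dept, score in records:
    scores.setdefault(dept, []).append(score)

# Average engagement per department, and the weakest department.
averages = {dept: mean(vals) for dept, vals in scores.items()}
lowest = min(averages, key=averages.get)

print(averages)
print("Investigate first:", lowest)
```

Often a summary like this answers the business question outright, or at least sharpens it, before any machine learning is justified.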
There are very few avenues in which you will find no application of AI, considering the breadth of use-cases currently present in the industry. A foolproof way to get through this step is to speak to the actual end-users or the parties affected by a process. Ask why the process was created and what end-goal the team is trying to achieve. How AI can help them is a conclusion you should drive towards, not one handed to you by them or anyone else.
My team and I were once trying to solve an incredibly straightforward problem with multiple possible solutions: summarising trends in textual feedback from employees to their organisations. Starting with complex thoughts and ideas of what we wanted this to be, we began work without customer direction and with limited clarity on what we called "value" to the customer. As with most initiatives, distancing ourselves so substantially from the actual problem we had set out to solve gave our solution a strong inside view.
When we finally presented it to the end-users, we found that we had overcomplicated a simple text-bucketing solution: all our customers wanted was segregation of their employees' feedback. Over the next few months, we made sure to hear out customer pain-points before approaching solutions, sometimes giving them precisely what they had asked for. During this phase, we found that we could have saved a lot of time and effort by first emulating how the end-user would use a particular initiative. Creating a real-life scenario and putting yourself in the end-user's shoes is fundamentally crucial to becoming even marginally better at forecasting problems that may come up during the creation or usage phases. Once you can see the solution clearly, you are bound to have a better understanding of what it takes to lead your initiative down that path.
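A "simple text-bucketing solution" can be as plain as keyword matching. The sketch below is a hypothetical baseline under assumed bucket names and keywords (not what we actually shipped), showing how little machinery is needed when segregation is all the customer wants.

```python
# Hypothetical keyword buckets -- a deliberately simple text-bucketing baseline.
BUCKETS = {
    "workload":   ["overtime", "hours", "deadline", "burnout"],
    "management": ["manager", "leadership", "feedback"],
    "pay":        ["salary", "compensation", "raise"],
}

def bucket_feedback(comment: str) -> str:
    """Assign a comment to the first bucket whose keyword it mentions."""
    text = comment.lower()
    for bucket, keywords in BUCKETS.items():
        if any(word in text for word in keywords):
            return bucket
    return "other"

comments = [
    "Too much overtime before every deadline",
    "My manager rarely gives feedback",
    "Happy with the team culture",
]
grouped = {c: bucket_feedback(c) for c in comments}
```

A baseline like this also gives you something concrete to put in front of end-users early, before investing in anything more sophisticated.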
This made me think about the importance of a bridge between customer-facing and data science teams, building on a role that McKinsey defined: the analytics translator. These individuals are domain experts who help business leaders prioritise initiatives at the outset. Extending the skills of translators as defined earlier, I have noticed that a translator isn't necessarily a silver bullet for significant data science initiatives. An entire ecosystem needs to be in place before they are brought in, or trained in-house to rise to the role, since taking initiatives from start to completion involves a great deal of communication across hierarchies. Because the translator acts as middleware between end-users, business leaders and executioners within the organisation, none of those groups fully accepts the translator as part of their circuit, which makes expectation-mismatch roadblocks harder to avoid. In many instances, you will find yourself tempted to apply AI solutions to fairly straightforward problem statements: don't fall prey to this temptation. There will be enough instances within your business to put AI to better use and derive greater value from it.
Additionally, watching for trends within a particular market can be incredibly useful before defining problem statements, lest you go all out in the wrong direction. An example from McKinsey's best-selling book Strategy Beyond the Hockey Stick describes how the camera industry was a dominating force in the 90s, right up until phones started hosting cameras, after which the industry remained one for aficionados chasing the latest DSLR. With every wave in a sector comes a natural saturation point at each turn of innovation; that point is near in the current chatbot industry, and constant innovation is necessary to avoid obsolescence.
Most employers and organisations use chatbots for a variety of use-cases, be it speaking to customers and handling their queries or helping with site navigation. In particular, the HR-tech space has two kinds of chatbots: regular Q&A chatbots that help you track leaves, holidays, or FAQs, and engagement chatbots that have guided "conversations" with you about your work, health and life.
In part, I believe the saturation of this space helped us at inFeedo become a first mover of sorts, by adding a small nuance to chatbots and personifying ours, Amber. Amber isn't your regular H.R. chatbot: she speaks to you at regular intervals and at essential milestones in your organisational tenure to understand how you feel, help you with cultural issues, and gauge how disengaged you may be or how likely you are to leave. At best, Amber is your friend at work.
We moved our core selling point from the chat itself to the analytics and contextualisation behind the conversation. The data and personalisation, the trends and the actionable feedback became the selling point, not so much the fluidity of the chat. It's our data and our algorithms that helped us beat the odds.
Similarly, track market trends within whatever space you find yourself in, be it education tech, H.R. tech or consumer internet. There's always one avenue that may already be saturated with AI and automation advancements; try your best to find a more innovative way to enter your market rather than being just another solution. Regardless of where the market currently is, keep a critical eye on every domain's processes and identify manual interventions. Many of these are potential next avenues for a move.
In complex societal systems, adding an AI initiative to an already existing structure involves thinking through the outcomes not only from a business standpoint but also from societal and ethical ones. Doing so may also help reassure those who worry about AI replacing their jobs: jobs that require complex emotional awareness cannot, at least for the foreseeable future, be done as well by algorithms or robots. Even though AI leaders are incredibly bullish about the limited impact on jobs, there is another, equally legitimate, train of thought in this debate. The question to ask then becomes whether the initiative is a narrow automation task or opens up a new task altogether, creating more roles and responsibilities for current workers to upskill to. In today's AI deployments, most efforts lean towards automation. This ends up impacting quality jobs and widening the divide between the highly skilled workers who program these algorithms and those who will now be responsible for labelling data or performing other ancillary tasks that spawn, tasks that may not require much use of their cognitive abilities. This problem has the potential to compound further, since one significant reason for unemployment is a mismatch between people's expectations of jobs and the jobs available (Banerjee and Duflo, 2019).1 2
Initiatives and domains for AI could broadly be classified into three distinct buckets: AI, AI + H.I., and H.I. (Human Intelligence).
The basis of this differentiation isn't limited to the degree of explainable outcomes a domain requires; it also takes in the current state of the processes within the domain, the level of empathy necessary for effective functioning, the danger associated with carrying out day-to-day tasks, and the ethical and socio-economic factors in play. Even within incredibly transparent domains, some initiatives can be handed over to AI and automation without requiring highly explainable outcomes. If you find yourself in a domain that functions particularly well with humans at the forefront rather than robots or algorithms, childcare being one example, a great way to use AI is to offload repetitive and administrative tasks from humans, enabling them to maximise their effectiveness by focusing on what they're great at. My mother has been teaching children for over 20 years as an educator, and speaking to her tells me that teachers love interacting with students and watching them grow and learn. Often, they aren't able to focus on just these activities because of the overburdening load of administrative tasks that comes their way, with teachers clocking over 50-hour weeks regularly.3 Many argue that automation here is merely a better way to maximise throughput and resource allocation, but some domains should strictly be left out of this. A better way to involve AI in education wouldn't be to replace teachers with robots, which probably couldn't bring the human element this domain craves and thrives on, but to let AI take these administrative tasks off their plate.
This may help us increase the quality of education delivered, make the profession more lucrative by allowing a better work-life balance, and help teachers do what they love while enjoying it.4 In the UK, 81% of teachers have considered leaving teaching altogether because of their workloads5: that is the problem to solve.
1 Banerjee, A. and Duflo, E., 2019. Good Economics for Hard Times. PublicAffairs.
2 Gopakumar, G., 2019. Nation Obsessed With Quotas Isn't Putting Right Emphasis On Policy: Abhijit Banerjee. [online] Livemint. Available at: <https://www.livemint.com/Politics/X0Ry0jwJ194fN1CBDIA9wJ/Nation-obsessed-with-quotas-isnt-putting-right-emphasis-on.html> [Accessed 5 April 2020].
3 Mourshed, M., Krawitz, M. and Dorn, E., 2017. How to Improve Student Educational Outcomes: New Insights from Data Analytics. [online] McKinsey & Company. Available at: <https://www.mckinsey.com/industries/public-and-social-sector/our-insights/how-to-improve-student-educational-outcomes-new-insights-from-data-analytics> [Accessed 9 August 2020].
4 Bryant, J., Heitz, C., Sanghvi, S. and Wagle, D., 2020. How Artificial Intelligence Will Impact K-12 Teachers. [online] McKinsey & Company. Available at: <https://www.mckinsey.com/industries/social-sector/our-insights/how-artificial-intelligence-will-impact-k-12-teachers> [Accessed 2 June 2020].
5 National Education Union, 2018. Teachers and Workload. March 2018.
You can go on to read Part II from here.