How AI helped me rebuild my work life after a stroke

A personal essay by the founding editor-in-chief of Wharton's management journal and former senior fellow at Wharton AI for Business

October 31, 2022

One morning last September, I fired up my desktop computer to answer a few emails and noticed something weird.

My fingers had stopped listening to my brain; they kept hitting the wrong keys. If I tried to type the letter ‘A,’ the pinky finger of my left hand would hit the caps lock key, and I had to go back to retype what I had written.

After a few minutes, I realized it was just my left hand that had become disobedient. The previous day, I had returned at midnight from Florida after a day packed with medical tests and meetings at the Mayo Clinic near Jacksonville. “I am still exhausted from my trip,” I reasoned to my wife Hema and daughter Tara. “I should take a nap.” I went back to bed.

A few hours later, when I woke up, Hema and Tara were in a state of agitated alarm. “We need to take you to the emergency room right now,” Hema said. “We shouldn’t waste any time.” “I’m fine,” I replied. “I am just tired because of yesterday’s trip.” Denial runs in our family. Tara burst into tears. “Dad, please listen to mom and me,” she said. “I looked up your symptoms online. We have to leave for the hospital now.” My daughter’s tears melted my pig-headedness, as they usually do.

At the hospital ER, we had barely begun to describe my symptoms when I was moved to the top of the priority list. Nurses appeared out of nowhere, strapping me to devices to check my vital signs. The rest of that day is a blur. Later that night, I was told what had happened. As a neurologist explained, I had had what is called a pontine lacunar stroke. It occurs when an artery that supplies blood to a deep part of the brain is blocked.

Overnight, I lost the use of the left side of my body. In her 1997 novel, “The God of Small Things,” Arundhati Roy, the Booker Prize-winning author, writes about how life can change in a day. No one knows the truth of that statement better than a stroke survivor.

I was in shock. I could hardly believe how dramatically my life had changed in a matter of hours. I could no longer walk. The only way I could move around was in a wheelchair. I began to slur my words, though after a while I could sense when my tongue was about to mangle a word and change to a different one. I felt myself becoming infantile in many ways. Since my hand sometimes threw food around while eating, the nurses had to place a bib around my neck at mealtimes.

What bothered me most was the hellishness of being helpless. I became overdependent on my family, friends and caregivers, who were saint-like in their kindness, patience and support. Despite my best efforts at trying to stay positive, I had miserably dark days. The main reason why life felt so bleak was the conviction that three months after my retirement from the Wharton School, my professional life was over.

After more than 40 years as an editor and writer, I could neither write nor edit. If I could not be a writer or editor, who was I? It was a catastrophic crisis of identity, which every stroke survivor goes through. Debra Meyerson, a Stanford professor who suffered two strokes when she was in her 50s, explores this theme in her remarkable book, “Identity Theft: Rediscovering Ourselves After Stroke,” which she co-wrote with Danny Zuckerman, her son. Like many men and women whose sense of worth is tied to their professional identity -- in my case writing and editing -- I questioned whether life as it had become after the stroke was worth living.

While I did not become actively suicidal, on the darkest days the words of La Pasionaria, a revolutionary during the Spanish Civil War, echoed through my mind: “It is better to die on your feet than to live on your knees.” I reached out to a friend for information about an organization in Switzerland that helps people end their lives legally and painlessly through euthanasia.

Today, more than a year after my stroke -- which was followed by two heart attacks -- I am much further along the road to recovery. I have learned that recovery does not mean fully regaining the capabilities that I had before my stroke, but coming to terms with my post-stroke capabilities and building a meaningful life around them. I have also come to recognize that to recover, I had to heal my body; heal my mind; learn to appreciate how much love I had in my life; and learn to use -- but not overuse -- technology.

These four factors, like the wheels of a car, can get your life rolling again after a crippling disability like a stroke. Today, I will write about the fourth factor, technology, and about the other three in the future.

Tap, don’t type

Learning to write again was a painfully slow process, made possible by new technology, particularly artificial intelligence, or AI. My first baby steps involved starting to use tools from Google and Apple on my laptop and iPhone; they would look at the words I had typed and try to anticipate and suggest the next word. For example, if I wrote “I am in the … ” the AI algorithm would ask if the next word ought to be “hospital.” If that was correct, all I had to do was tap that word rather than type it. This process worked well, I discovered, for text messages and for short emails. Often, if I had mistyped a word, the algorithm would underline it and suggest the correct spelling. I could construct short messages to keep in touch with family and friends around the world, though a sort of sameness crept into these texts.

Still, it meant that I did not have to wait for anyone else to type emails for me. Although Hema and Tara had kindly done this in the first few days after my stroke, the fact that I could do it myself gave me a small measure of freedom. Some of my agency returned. A ray of light broke through the darkness. The technical term for the AI technology that makes this possible is predictive analytics.

As my friend and former Wharton colleague Kartik Hosanagar, author of a wonderful book titled, “A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control,” has helped me understand, “AI increases the accuracy and reduces the cost of making data-driven predictions.”

In my experience, this tool got better with use. As the AI algorithm learned my preferences for the words I liked to use -- based on the frequency with which I used them -- its predictions improved over time. Gradually, I was able to write longer email messages, even though each message took an excruciatingly long time to compose. Many friends, including Kartik, suggested trying out speech-to-text software programs, and I did. As the name implies, these are programs to which you can dictate your messages, and the software turns them into text.
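The frequency-based learning described here can be illustrated with a toy bigram model. This is a deliberately simplified sketch, not how Apple's or Google's predictive keyboards actually work; the sample corpus and function names are invented:

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which -- a toy stand-in for
    learning a writer's preferences from word frequency."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def suggest(follows, prev_word, k=3):
    """Return up to k words most often seen after prev_word."""
    return [w for w, _ in follows[prev_word.lower()].most_common(k)]

model = train("i am in the hospital . i am in the garden . i am tired .")
print(suggest(model, "am"))  # → ['in', 'tired']
```

The more often a phrase recurs in the training text, the higher its continuation ranks, which is why such a tool improves with use, as described above.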

Many people have had positive experiences with this software, but my early efforts ended in disaster. I am not sure if it was because I was slurring my words or if the AI algorithm did not understand my Indian accent, but each text message the software produced was riddled with errors.

This was frustrating; retyping an error-filled message took much longer than writing it slowly but correctly, letter by letter, in the first place. There appeared to be no efficient way to write long-form text; depending on this technology seemed to doom me to double the work. I got tired of that pretty quickly and gave up on it.

Then I found an intermediate solution. WhatsApp, which Facebook (now Meta) acquired in 2014 for more than $19 billion, had a recording feature: I could press down a button, speak my messages, and send them to family and friends on my contact list as voice messages. This was effective because I could now send voice messages that were several minutes long. Often these were complex explanations of medical issues I was dealing with, and I did not have to deal with the hassle of the AI algorithm miscommunicating what I wanted to say. WhatsApp’s privacy features also meant that I could speak freely about my health.

My friend Rohan Murty, founder of Soroco, a U.K.-based startup, who strongly encouraged me to write this article, says: “Until you said it, I never thought that WhatsApp could help somebody who's gone through a medical condition like this. If I were a product manager, I would have never realized that maybe one day someone will use it like this.”

Another advantage was that my communication could be asynchronous. In other words, I could leave messages for friends in different time zones, and they could respond whenever they had the time. This voice technology allowed me to progress beyond short, terse texts and emails, but I still could not write or edit articles. Just as the frustration was beginning to build again and the darkness threatened to return, unexpectedly, I had a breakthrough.

Before my stroke, I had agreed to interview Google’s Neil Hoyne about his book, “Converted,” which is about how companies use data to win customers’ hearts. I emailed Neil, a longtime friend, a list of questions -- typed and tapped on my iPhone. He was kind enough to send back his answers as audio messages. I sent those on to an editor friend, who had them transcribed, then edited and published the interview.

Someone reading the article in its final form could hardly have imagined how the process had worked. Thanks to kind and compassionate friends, I was able to produce a long article eight months after my stroke. That gave me an immense boost of positive energy. It was therapeutic and helped me keep healing. The following month, I was able to do a second story about ransomware and cybersecurity using the same technique, featuring David Lawrence and Kevin Zerrusen, experts from the Risk Assistance Network + Exchange. The glimmer of hope grew brighter.

Speech to text on steroids

The editor, who was earlier my colleague at Wharton, told me about the AI software she had used to produce these transcripts. It was made by a company in Los Altos, Calif., called Otter.ai. “Have you tried it?” she asked. “It's good.” I downloaded it, and that transformed my life. My use of Otter.ai was initially a bit complicated. Let’s say I had to write a 1,500-word article. I would start by hand-writing a short outline of the story, mapping its structure paragraph by paragraph. (If the article was longer, say 3,000 words, I would map out groups of paragraphs.)

After that, I used the iPhone’s Voice Memos app, which turns the phone into a recorder, to dictate the entire article. As a result, I ended up with an audio file that I could upload onto the Otter.ai website. In a few minutes, Otter.ai’s algorithm would email me an almost accurate transcript of what I had said. I could now copy and paste the transcript onto Google Docs, Microsoft Word, or any other word processing program, clean up the text, and have the final version of the draft ready.

While the Otter.ai algorithm got most of the text right, what was truly amazing was the speed with which the AI converted the audio file into text. It could turn even a 60-minute interview into an editable transcript in a few minutes. What made this magic possible? According to my friend Apoorv Saxena, who once worked for Google and now works for Silver Lake, a private equity firm, automatic speech recognition in the past used to be highly complex.

The field was transformed after the publication in 2016 of an influential paper titled “WaveNet: A Generative Model for Raw Audio,” which radically redefined the way algorithms turn speech into text and vice versa. “We have seen next generation speech-to-text being produced in the last three to four years,” he said. That is what makes services such as Otter.ai as effective as they are.

These days, I use a somewhat different process. Otter.ai lets me create my own digital assistant who “attends” my Zoom or Google Meet meetings. I introduce “her,” my AI assistant, as a participant in the meeting to my interviewees, asking if they mind if she joins the meeting to take notes. A few minutes after the meeting ends, “she” emails me a transcript.

I have taken to practicing typing for an hour every day now, so that I can edit the text. It is important to me to use -- but not overuse -- the AI technology, as I have said above. If I were to use AI to do everything, I would have no incentive to keep working at strengthening my hand and the neural connections between my brain and fingers. It would simply transfer my dependence from humans to digital technology.

While the AI algorithm that Otter.ai has developed is impressive, it isn’t perfect. It gets many things right, but occasionally it gets things spectacularly and hilariously wrong. For example, I was recently working on a document in which I had to quote my former Wharton colleague Raghu Iyengar. Otter’s transcript turned his last name from Iyengar to “anger” and his first name from Raghu to “Rachel,” getting the name, gender and nationality wrong. So it still has some way to go.

Still, fundamentally it has given me a tool to resume my writing and editing, and in many ways, to reclaim my identity. Raghu, David Reibstein, another dear Wharton colleague, and I were working before my stroke on a project with colleagues at the consulting firm McKinsey. It was a short ebook about adaptive marketing, dealing with how companies had modified their marketing plans during the pandemic and what that might mean for future disruptions.

In more optimistic times, we had imagined we would complete it in a matter of weeks, while memories of the 2020 pandemic were still fresh. I had finished most of my reporting and written almost three chapters when the stroke struck. I was in tears as I handed back the project since I could no longer write. Monish Gangwani, a friend who works for Microsoft, kindly offered to step in to help. He even wrote a draft chapter, though eventually it did not work because it changed the writing style too much.

I was deeply touched by the support of friends and colleagues. I feel deeply grateful to each of them for keeping me intellectually challenged during a tough year. They helped me keep exercising my mind with writing or editing tasks as I was exercising and strengthening my body. It helped restore a sense that life could still be meaningful.

Human-AI collaboration

As I think about the process that has made this transformation possible, I realize that it has to do with structuring human and AI collaboration the right way. The work begins with a human process (I think of the interview topic, select the right expert, and come up with the questions to ask). Next, I turn over to AI the relatively narrow task of capturing the conversation in audio format and turning it into text. It does this at a speed that is unimaginable for even the world’s fastest human transcribers.

Finally, I take back the task from the AI algorithm to edit and eliminate the laughable “Rachel Anger” kind of errors and complete the work based on human expertise. I focus on doing what I can do better than the AI and leave to the AI algorithm what it does best.
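The division of labor in this human-AI-human workflow can be sketched as a hypothetical pipeline. Everything here is invented for illustration -- `transcribe()` is a placeholder, not Otter.ai's actual API, and the correction table stands in for the human editing pass:

```python
def prepare_questions():
    """Human step: choose the topic, the expert, and the questions."""
    return ["What does your book argue?", "Why does it matter now?"]

def transcribe(audio_path):
    """AI step (placeholder): a real service would upload the audio
    file and return machine-generated text, errors and all."""
    return "raghu anger said the book argues ..."

def edit(raw_text, corrections):
    """Human step: fix the 'Rachel Anger'-style mistakes the AI makes."""
    for wrong, right in corrections.items():
        raw_text = raw_text.replace(wrong, right)
    return raw_text

questions = prepare_questions()                       # human
draft = transcribe("interview.m4a")                   # AI
final = edit(draft, {"raghu anger": "Raghu Iyengar"}) # human
print(final)  # → Raghu Iyengar said the book argues ...
```

The point of the structure is that the AI handles only the narrow, speed-critical middle step, while judgment-heavy work stays at the human ends of the pipeline.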

This human-AI-human workflow has allowed me to rebuild my professional life. For those inclined to geek out on this topic, I recommend reading Soumitra Dutta’s research on human-centered AI; he is the former dean of Cornell’s business school and now the dean of Oxford’s business school. I also suggest checking out the research of Phanish Puranam and Ruchika Mehra of INSEAD on why better human-AI collaboration may depend on workflow design.

Academic research has long tried to figure out how AI and humans can work together in ways that are constructive rather than destructive, and which enhance productivity without demolishing livelihoods. For me, finding the right human-AI workflow has been more than a matter of intellectual curiosity. I have lived with it every day for the past year. Without exaggeration, it has been a question of life and death.

I was chatting with my friend and former Wharton colleague Bruce Brownstein about the role of AI in overcoming disability when he reminded me that my experience echoed the findings of our former colleague Laura Huang, who now teaches at Harvard. She outlines her findings in an outstanding book titled, “Edge: Turning Adversity into Advantage.”

“What inspired me to study this question was that people face constraints, barriers and obstacles, but it is not just that these negative effects happen or exist,” she said. “People can empower themselves and flip things in their favor, and even gain an advantage over those who didn't have those disabilities or constraints.”

AI has been a positive force in my own life, but it cannot replace the human touch. Human ingenuity can still find many ways to compete and cooperate with AI. That is what gives me hope for stroke survivors everywhere.

Mukul Pandya writes occasionally for AI Business. Also, AI Business Editor Deborah Yao is his former Wharton colleague.
