May 30, 2023
ChatGPT has recently seized the spotlight with its formidable humanlike text responses to prompts. Large language models are poised to have a huge impact across many industries in the near term, so IT and business leaders must understand what they do well, poorly, or not at all.
While ChatGPT's AI capabilities are impressive, there are pitfalls, and its output is far from perfect. IT leaders should gain an understanding of the tool's utility, supervision requirements, and trust issues, and then start with a data quality assessment and a master data clean-up effort. By doing so, IT leaders can make informed decisions and use ChatGPT effectively to benefit their organizations.
IT leaders must be aware of the following ChatGPT risks:
1. Utility of responses depends on prompt quality
ChatGPT responds to the prompt provided, so a simple prompt will produce a simple response. However, as a language model it can process complex prompts and respond with in-depth, clear, concise, and relevant output. On the other hand, poorly defined prompts will lead to vague, high-level results.
One circumstance that presents a particular risk to IT leaders is that ChatGPT may fail to correctly interpret a difference in terminology, which can lead to a response that is inaccurate yet articulate. Therefore, review the output carefully before using it in any communication.
Since the AI does not have access to company specifics, it might be tempting to feed it such information to see what it can do, but this must be avoided. Never provide your company's intellectual property or other confidential information to a public system that you do not own, and do not give it information about your suppliers and customers.
2. It requires end-user supervision
While ChatGPT's output is impressive, it is far from perfect. One of ChatGPT's obvious flaws is that it can produce incorrect answers in a convincing manner. The AI draws new connections from existing information to generate original content; if that information is slightly off or completely wrong, ChatGPT has no way to recognize it. The method for generating a correct response is the same as the method behind an incorrect one.
ChatGPT is an evolving solution, and it is still learning. AI-generated content can provide fantastic, lightning-fast insights to support decision-making, but when it produces inaccurate results, it can do so convincingly. Proceed with caution before taking any action that could have significant financial implications. In situations involving big financial commitments or possible life-or-death consequences, human expertise is still required.
Additionally, remember that ChatGPT has not been trained on any company's internal data, so it knows nothing about company-specific challenges or how to solve them. Therefore, while it can be used to inform decision-making, it should not be the only factor in a decision.
3. Real-life encounters
It is likely that ChatGPT-generated text will be used to apply for jobs. Because ChatGPT is designed to produce well-written text, application cover letters and resumes are a perfect match for its capabilities. The interview will become more important, since the way an application is written may no longer be enough to differentiate applicants; and this is just the beginning of AI's use in the recruitment process.
Finally, it must be noted that AI tools consume massive amounts of data, and that data must be good. Start with a data quality assessment and begin supply chain master data clean-up efforts. Data quality is not solely an IT responsibility, but IT leaders should take the lead to ensure that any company data is suitable for training an AI tool in the first instance.