Choosing Your First Generative AI Use Cases
To get started with generative AI, first focus on areas that can improve human experiences with information.
Many of us have probably interacted with some form of artificial intelligence (AI) in recent months, following the rapid growth of ChatGPT, which surpassed 200 million users in late August 2024.
Most AI assistants, such as ChatGPT, Microsoft Copilot, Google Gemini, xAI's Grok, Claude, Meta AI and others, use some variation of the Generative Pre-trained Transformer (GPT) architecture, with some adapted for specific use cases such as chatbots, art rendering, music composition and video generation. Chatbots are built on Large Language Models (LLMs) that have been trained over the past few years on the wealth of published information on the Web.
These AI apps are general purpose, meaning you can ask them to do many different things. They have been trained on publicly available data from the Internet and elsewhere, with additional feedback from volunteer human trainers.
On the rise at the moment are AI bots trained specifically for particular roles. For example, if you feed your AI bot proprietary corporate training literature that is not on the public Internet, you can use it for in-house employee training. If an AI is trained specifically to read X-ray images for cancer detection (oncology), it will become a subject matter expert in that field, not in, say, wines (enology).
A surgeon, for example, can ask an AI chatbot to read the latest developments in the field through newly published journal articles and summarize anything new or interesting that may merit a closer look. The surgeon can specify that the bot disregard unvetted comments on the Web and refer only to published journal articles and approved professional books. Mechanics can load up all their manuals and ask the chatbot pointed questions to refresh their knowledge of particular aircraft, boat or car models.
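As a rough illustration of this "answer only from approved sources" pattern, here is a minimal Python sketch. The call_llm() function is a hypothetical placeholder for whatever chatbot API an organization actually uses, and the keyword-overlap retrieval is deliberately simple; a production system would use a proper retrieval pipeline and a real model endpoint.

```python
# Minimal sketch: constrain a chatbot to approved, in-house documents.
# call_llm() is a hypothetical stand-in for a real model API; the
# keyword-overlap retrieval is intentionally crude, for illustration only.

def retrieve(question: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank approved documents by keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [f"[{name}] {text}" for name, text in scored[:top_k]]

def build_prompt(question: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the supplied vetted sources."""
    sources = "\n".join(passages)
    return (
        "Answer using ONLY the approved sources below. "
        "If the answer is not in them, say so.\n\n"
        f"Approved sources:\n{sources}\n\nQuestion: {question}"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real chatbot/LLM API call."""
    raise NotImplementedError("Plug in your model provider's client here.")

if __name__ == "__main__":
    manuals = {
        "engine_manual_2023.txt": "Torque the cylinder head bolts to 85 Nm in three stages...",
        "avionics_bulletin_12.txt": "Service bulletin 12 covers transponder firmware updates...",
    }
    question = "What torque should the cylinder head bolts be set to?"
    prompt = build_prompt(question, retrieve(question, manuals))
    print(prompt)  # pass this to call_llm() once a real model client is wired in
```

The key design point is that the model only ever sees vetted passages, so its answers stay within the approved in-house material rather than whatever it absorbed from the open Web.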
Related to this, a newer technology called fully homomorphic encryption (FHE) allows organizations with high secrecy requirements, such as the military or companies with trade secrets, to train AI without divulging their intellectual property to outsiders. This is because FHE allows encrypted data to be used for training AI. Although AI has largely been trained on public, open data, especially from the Internet, proprietary and secret data has seen little use so far. With FHE, we may see an increase in AI bots trained on confidential data.
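To make the idea of computing on data that stays encrypted more concrete, here is a toy Python sketch of the Paillier cryptosystem, a much simpler additively homomorphic relative of FHE, used here purely for illustration: two values are encrypted, the ciphertexts are combined, and the decrypted result equals their sum, even though the party doing the combining never sees the plaintexts. The tiny primes are for readability only; real systems use large keys and full FHE libraries.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# NOT full FHE and NOT secure (tiny demo primes); it only illustrates that
# arithmetic can be performed on ciphertexts without seeing the plaintexts.

import math
import random

# Key generation with small demo primes (real keys use ~2048-bit primes).
p, q = 61, 53
n = p * q                       # public modulus
n2 = n * n
g = n + 1                       # standard choice of generator
lam = math.lcm(p - 1, q - 1)    # private key component (lambda)
mu = pow(lam, -1, n)            # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt ciphertext c back to the plaintext integer."""
    x = pow(c, lam, n2)
    l = (x - 1) // n            # the L(x) = (x - 1) / n step
    return (l * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(12), encrypt(30)
c_sum = (c1 * c2) % n2          # computed entirely on encrypted values
print(decrypt(c_sum))           # -> 42, i.e. 12 + 30
```

FHE extends this idea to both addition and multiplication on ciphertexts, which is what makes training or running models on encrypted data possible in principle.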
On the lighter side, artists can draw different types of characters and pair them with AI chatbots to create new meme AI chatbot tokens. Thus, someone who wants a digital friend can have a Star Wars Jedi AI chatbot, a medieval knight, a princess or a cute digital pet.
Do we actually need these personal AI chatbots? Technically, of course not, just as we do not really need many of the other new technologies around us. On the other hand, very few of us want to go back to slide rules or library card catalogs. Soon, some of us might prefer simply to ask our AI chatbot for an answer instead of digging through links from a Google search.
We humans love our creature comforts and want answers now, not later. Having these chatbots might make us lazy, or tempt us to do more things that are not necessarily useful.
On the flip side, these chatbots can also give us time to tackle unresolved and strategic issues and become more productive in our personal and professional lives.