Ukraine Wants ChatGPT Access
Sanctions and security issues prevent the war-torn country from accessing OpenAI's viral chatbot
Ukraine is on a list of nations unable to access OpenAI’s ChatGPT. The war-torn country would like this to change, and its vice prime minister has appealed to the chatbot’s maker to provide access.
Mykhailo Fedorov, who also serves as Ukraine’s Minister of Digital Transformation, appealed to ChatGPT maker OpenAI via Twitter for access. “We are excited how [OpenAI] develops AI tools. Ukrainians are tech-savvy, cool and ready to test innovations now. Personally (I) will use your tool to make my Twitter account great again.”
Ukraine finds itself geo-blocked from using OpenAI’s catalog of tools. Also on the list are its antagonist Russia, along with China, Afghanistan, Belarus, Venezuela and Iran.
Using a barred country's name in text prompts for DALL-E, OpenAI's text-to-image AI tool, was also restricted until the company changed its stance.
But Ukrainians want to use ChatGPT, with Taras Mishchenko, the editor-in-chief of local tech news site Mezha Media, saying, “We hope that this will gain wide publicity and OpenAI will change its position regarding users from Ukraine.”
OpenAI has never outright said why certain countries are barred from using its services. Some nations, like China and Russia, are deemed security threats by the U.S. government. GitHub, for example, blocked developers in sanctioned nations such as Iran and Syria from accessing git repositories to avoid violating export rules.
But OpenAI’s geo-lock has not proved especially effective: Russian cybercriminals have reportedly been accessing ChatGPT, according to Check Point Research (CPR). Hackers are attempting to circumvent the IP-based restrictions to reach the chatbot from Russia.
OpenAI’s restrictive measures are “not extremely difficult to bypass,” according to CPR.
“We are seeing Russian hackers already discussing and checking how to get past the geofencing to use ChatGPT for their malicious purposes. We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations,” the cybersecurity firm said.
“Cybercriminals are growing more and more interested in ChatGPT because the AI technology behind it can make a hacker more cost-efficient.”
And ChatGPT, along with other generative AI models, presents potential misinformation problems. For example, OpenAI has previously expressed concern that DALL-E could be used by bad actors to create deepfakes. And just before ChatGPT’s release, Galactica, a language model from Meta, had to be pulled after repeatedly generating false scientific papers.
AI Business has contacted OpenAI for comment.