Yes, they are. A recently published quarterly report from Google Threat Intelligence Group (GTIG) notes that hackers are attempting to use Gemini AI as a support tool, and some private organizations are even trying to clone the model. Google says hackers have made no direct attempts to copy the model, but threat actors have been observed using AI to support sophisticated attacks against individuals and businesses. Hackers would love to obtain Gemini AI’s proprietary logic, but they are not as daring as private companies, which are actively exploring ways to extract it. Google says such attempts constitute intellectual property theft and violate its terms of service, and the tech conglomerate is actively working to deter them and prevent “cloning” or misuse.
Key takeaways
- State-backed hackers are using Gemini AI as a support tool for cyberattacks, and Google is trying to stop this practice.
- Private companies and researchers are the main parties trying to clone or extract Gemini’s proprietary logic; Google classifies this as intellectual property theft.
- Google says no successful cloning of Gemini has occurred.
- Americans are increasingly relying on AI in daily life, at work and at home, but trust remains low due to privacy concerns, a lack of regulation, and fears of data exploitation.
Have hackers been successful at utilizing Gemini AI models?
Not really when it comes to cloning, but there have been recorded attempts to use the chatbot’s capabilities to support malicious activities, and GTIG is actively working to stop the trend. Hackers have never fully completed direct model extraction or distillation attacks on frontier models like Gemini. Still, GTIG has confirmed that hackers have had fruitful interactions with AI chatbots that supported them during different stages of cyberattacks. Thwarting such exploitation attempts, whether they come from cybercriminal organizations or the private sector, remains a high priority.
In the report released by GTIG, cybersecurity experts describe how North Korean, Iranian, Chinese, and Russian state-backed cyber groups have found ways to operationalize Gemini AI. The fight over “unsupervised” access to AI is ongoing, even as Americans develop a love-hate relationship with the technology: US residents are relying on AI more and more, yet they are also growing mistrustful of the companies behind those AI agents.
Why is the private sector after Gemini AI’s proprietary logic?
The main difference between hackers and the private sector is intent: hackers may use AI capabilities to execute state-sponsored cyberattacks, on top of the usual pursuit of monetary gain, while private organizations predominantly want access to those capabilities to develop their own product and service offerings. Used correctly, that access would undoubtedly have a positive impact on their bottom line.
However, as Google highlights in the quarterly report, such attempts let others accelerate their own AI model development at a significantly lower cost, which effectively amounts to intellectual property theft.
Why don’t Americans trust AI?
AI increasingly shapes Americans’ lives, with growing use and reliance at both work and home. As usage rises, so does malicious use: state-sponsored groups and ordinary for-profit cybercriminal organizations are attempting to exploit the new technology. People all over the world, some with questionable moral compasses, are trying to harness AI’s power to achieve their goals. Americans know that, along with the convenience, they are giving up privacy, and they appear to be aware that interacting with AI also opens the door for companies to exploit the company-customer relationship.
AI is entering its Wild West era. In other words, customers are flocking to a source of information with unprecedented capabilities, while AI companies remain loosely regulated and have yet to provide comprehensive privacy and security assurances. Americans worry that companies will use their input unfairly against them. The same happened during and after the mass adoption of search engines in the 90s and 00s, and during the birth of social media in the 00s, which continues to be widely adopted today. That wariness is understandable, as AI offers an opportunity to everyone, including regular people, governments, hackers, and private organizations.