By Rebecca Ampah
A cybersecurity expert has warned that it is unsafe for government organisations to rely on artificial intelligence (AI) platforms for research and information gathering without strict security controls, citing significant risks of data exposure and misinformation.
Speaking on the growing use of AI tools in public institutions, cybersecurity specialist David Gyedu said government employees who depend on publicly available AI systems could unintentionally create serious cybersecurity vulnerabilities and compromise sensitive state information.
According to him, one of the biggest risks is data leakage by design, particularly when public servants paste internal documents into cloud-based AI tools for summarisation or analysis.
“Staff may copy internal emails, incident reports, citizen data, procurement documents or intelligence briefs into an external AI platform just to get help with research,” Gyedu explained.
“But the moment that happens, a potential data exfiltration channel is created.”
He warned that such actions could violate privacy regulations and expose sensitive government information, including national security data.
Gyedu also pointed to the emerging threat of prompt injection and poisoned online content, where malicious websites manipulate AI systems that browse or summarise web pages.
In such cases, hidden instructions embedded in web pages can trick AI tools into ignoring safeguards or producing misleading information.
“This can result in incorrect conclusions, leakage of information contained in prompts, or unsafe follow-up actions,” he noted, adding that the threat is rapidly becoming a new attack surface in digital research workflows.
Another concern is the issue of AI hallucinations: instances where AI systems produce confident but inaccurate answers. In government contexts, Gyedu said this could lead to poorly informed policy decisions, flawed procurement processes, incorrect threat assessments and misinformation entering official communication channels.
The cybersecurity expert further warned about risks linked to third-party AI vendors and plugins, noting that many institutions may rely on tools whose data handling practices are unclear.
“Unvetted AI plugins, third-party model APIs, browser extensions and free AI tools can introduce supply-chain vulnerabilities,” he said. “If a single vendor or connector is compromised, it can become a foothold for attackers.”
He added that weak security practices, such as password reuse, a lack of multi-factor authentication (MFA) or the use of personal email accounts to access AI platforms, could also allow attackers to hijack accounts and retrieve sensitive conversation histories.
Gyedu acknowledged that Ghana has made progress in strengthening its cybersecurity framework through the Cyber Security Authority and legislation such as the Cybersecurity Act, 2020 (Act 1038).
However, he believes government institutions are only partially prepared for the cybersecurity implications of AI adoption.
“Ghana has the legal and institutional foundation for cybersecurity, and national digital policies increasingly recognise the importance of data classification,” he said. “But operational governance, enforcement and consistent implementation across ministries, departments and agencies are still uneven.”
He noted that many institutions continue to face challenges including inconsistent data classification practices, limited data-loss prevention systems, fragmented identity and authentication controls, and low awareness among staff about the risks of using AI tools as potential data exfiltration channels.
Despite the risks, Gyedu said governments should not ban AI tools outright, but rather regulate their use through clear policies and governance frameworks.
He recommended the adoption of a Government AI Acceptable Use Policy, which would define what types of data cannot be shared with public AI systems, such as classified information, personal data, security configurations and investigation records.
He also called for stronger AI procurement standards, approved vendor lists, mandatory multi-factor authentication, data-loss prevention controls and centralised security monitoring for AI platforms used by public institutions.
To ensure safe AI adoption, Gyedu urged government agencies to develop internal “AI safe use” blueprints identifying approved use cases, implement secure architectures such as private or government-controlled AI environments, and strengthen auditing and monitoring systems.
He also emphasised the need for staff training and AI-specific incident response procedures, including protocols for isolating compromised accounts, reviewing audit logs and assessing potential data exposure in the event of a breach.
“AI can improve efficiency in public service,” he said, “but without strong governance, security controls and staff awareness, it can also become a new pathway for data leaks and cyber threats.”
Source: www.gbcghanaonline.com
