Brands are the cornerstone of commerce, built on a blend of trust, reputation, and legal exclusivity. Generative AI has brought new challenges: models trained on copyrighted content, outputs that use or imitate the likenesses of natural persons and protected marks, and autonomous systems deployed with unclear liability.
For brand owners in Ghana and throughout Africa, these are not abstract discussions; they represent immediate threats and opportunities.
The increasing integration of AI into branding and advertising presents novel challenges, particularly at the intersection of intellectual property, liability, and regulation. This article examines the legal ramifications of AI systems built and trained on datasets that include copyrighted works and registered trademarks, and of their deployment in advertising and similar marketing campaigns.
It discusses the ownership, infringement risks, and accountability frameworks associated with AI-generated content and advertising, and examines evolving jurisprudence that limits copyright protection for purely AI-generated content while acknowledging potential liability for output-based infringement and trademark misuse.
The analysis further considers the allocation of liability among AI developers, platforms, advertisers or brands, and consumers in scenarios involving defamatory AI output, deepfakes, and misleading advertisements.
This article also reviews evolving regulatory approaches that emphasise accountability and transparency for developers and brands, and proposes a legislative framework for Ghanaian policymakers and stakeholders. It concludes with practical compliance strategies: adopting AI audit tools, meeting disclosure requirements, and conducting intellectual property due diligence on the datasets used to train AI, both for brands and for online platforms that integrate generative AI into their businesses.
INTELLECTUAL PROPERTY:
Copyright at the Input Stage: Training on Protected Works:
Generative AI models learn from data, frequently sourced from the internet without explicit authorisation. This raises a vexed legal question: does the unauthorised use of copyrighted works to train an AI system constitute fair use (or its analogous concepts) or infringement?
In 2025, a US court made its first fair-use decision regarding AI training datasets in Bartz v. Anthropic. It ruled that legally purchased or sourced materials could be used for AI training under fair use, but pirated books infringed copyright. The $1.5 billion settlement (the largest for AI copyright) required Anthropic to destroy pirated datasets. Another US court noted Meta’s use of copyrighted material might be transformative, but stressed companies must compensate copyright holders if such use is essential.
Across the Atlantic, in the United Kingdom, Getty Images v Stability AI saw Getty largely fail on its primary copyright claims, though it secured a narrow ruling on trademark infringement after abandoning its central copyright arguments during trial.
This significant UK decision provides interim legal protection for AI developers with respect to training data, while underscoring the requirement that AI companies refrain from reproducing proprietary watermarks in their generated outputs.
In Nigeria, the Advertising Regulatory Council (ARCON) has issued warnings to digital marketers regarding fraudulent AI-generated advertisements inundating social media platforms, with deceptive health claims posing significant public health concerns.
Observation for brands: It is possible that your copyrighted materials are being used to train AI models without authorization. It is advisable to regularly review training datasets and actively protect your intellectual property rights by issuing take-down requests as appropriate.
Copyright at the Output Stage: Who Owns an AI-Generated Brand Asset?
On March 2, 2026, the United States Supreme Court declined to review Thaler v. Perlmutter, thereby affirming that works generated solely by artificial intelligence, without significant human creative contribution, are ineligible for copyright protection. Consequently, any logos, advertising copy, or product descriptions produced exclusively by AI for a brand will not be afforded copyright protection.
Despite this, US courts are increasingly receptive to output-based infringement claims, which argue that an AI’s output directly violates existing copyrights. This distinction matters: while those who generate content with AI may not be able to claim copyright over it, they could still be held responsible if their systems reproduce copyrighted material owned by others.
Practical guidance for Ghanaian brands: Document human creative contribution to any AI-assisted work. Relying solely on AI generation for core brand assets is legally risky.
Trademarks: From Training Data to AI-Generated Outputs
Trademark law turns on commercial use. In the AI context, liability may arise when trademarked materials are used in training data, when generated content includes protected marks, or when an AI product is given a name that conflicts with an existing trademark.
Cameo v. OpenAI illustrates the trademark risks that arise when an AI product uses a name that conflicts with an existing registered mark. Cameo, a platform known for personalised celebrity videos, sued OpenAI after it introduced a ‘cameo’ feature in its Sora video app that allowed users to generate AI videos featuring celebrity likenesses.
The court issued a temporary injunction preventing OpenAI from using the ‘cameo’ name, citing a likelihood of consumer confusion. The case underscores the legal risks of adopting established brand names for AI products.
In the Getty Images trademark case, the UK Court found Stability AI liable for using watermarks similar to Getty’s marks in AI outputs. The court found that Stability’s control over training, prompts, and results met the ‘use in the course of trade’ standard under the UK Trademarks Act and further held that technical autonomy did not exempt the provider from responsibility.
Practical guidance for Ghanaian brands: The decisions above underscore that watermarking digital assets matters beyond mere aesthetics. Watermarks can serve as a crucial element in infringement actions, providing substantial evidential support.
LIABILITY:
Who is liable for AI content that violates IP?
Liability may fall on the AI developer (who creates and maintains the model), the brand/advertiser (who uses the AI), or the platform (which distributes its output). These are discussed in detail below.
AI Developer Liability:
In the Getty Images trademark case, the court established that an AI developer can be held liable for trademark infringement if it exercises sufficient control over its outputs. The court considered five factors: first, control over training materials; second, the ability to flag prohibited prompts; third, the capacity to filter outputs; fourth, the inability of users to control core model behaviour; and finally, consumer perception of the platform as the originator. Where these factors point to control, the AI developer may be held liable.
Brand/Advertiser Liability:
For emerging brands or startups, the most immediate risk lies not in potential lawsuits from large, multi-billion-dollar companies, but rather in failing to ensure that their own AI-generated advertisements adhere to existing legal frameworks.
To comply with existing legal frameworks and reduce the risk of litigation from established brands, it is recommended that brands develop a simplified compliance checklist for AI-generated advertising content. AI-generated content that mimics real endorsers, celebrities, or testimonials could arguably breach Ghana’s consumer protection laws and advertising codes.
Platform Liability and Safe Harbours:
Within the European Union (EU), the Artificial Intelligence Act (AI Act) imposes transparency obligations on both providers and deployers. Specifically, Article 50 requires that AI-generated advertisements, including synthetic media, be disclosed in a manner that is clear and distinguishable. Non-compliance can attract substantial fines of up to €15 million or 3% of an entity’s worldwide annual turnover, whichever is higher.
There is a reported incident of a potential violation of the EU’s AI rules involving an advertisement by Guess in the August 2025 print edition of Vogue magazine. The brand’s summer collection advertisement featured a blonde woman in a striped maxi dress and a floral playsuit.
A discreet disclaimer in the corner indicated that her image was generated using AI. However, the AI disclosure was minimal and subtle, potentially falling short of the EU’s legal standards. Should this advertisement circulate within Europe, Guess could face enforcement action, even if the creative work originated outside the EU.
REGULATION:
The Ghanaian and African response
There is presently no dedicated statute or regulation on AI governance in Ghana. Nonetheless, Ghana is establishing itself as a regional leader in responsible AI governance through policy frameworks that set out the pathway to a holistic legal regime on AI governance.
In September 2025, the Ghana AI Practitioners’ Guide (GAIPG) was introduced as a comprehensive framework for ethical AI development, specifically adapted to Ghana’s unique context. The guide addresses all phases from design to deployment and prioritises the globally accepted principles of fairness, accountability, and transparency in AI regulation.
Stakeholder consultations are currently in progress to formulate and review comprehensive legislation governing AI development and associated technologies, provisionally titled the Emerging Technologies Bill. This Bill aims to establish an Emerging Technologies Agency with a specialised AI Division, thereby ensuring systematic oversight of AI systems throughout Ghana.
To augment the regulation of emerging technologies in Ghana, the Data Protection Bill is also being developed to regulate AI systems, automated decision-making, cross-border data transfers, and deepfakes.
The forthcoming Data Protection Bill is expected to repeal the current Data Protection Act and amend relevant sections of the Electronic Transactions Act, as these statutes do not sufficiently address the challenges posed by AI and other disruptive technologies.
The objectives of the Bill include strengthening enforcement measures and increasing penalties to deter non-compliance and safeguard the interests of individuals and businesses. Furthermore, the Bill specifically responds to concerns regarding data harvesting by multinational corporations and the lack of domestic data infrastructure.
It is unclear how Ghanaian courts will handle cases where AI-generated advertisements breach copyright and trademark laws or harm established brands. In the absence of local precedent, courts may look to comparable foreign decisions for guidance.
The Continental Context:
In Nigeria, ARCON is addressing the challenge of AI-generated fraudulent advertisements infiltrating digital platforms, noting that the digital advertising landscape is highly vulnerable. The Nigerian advertising sector is forming specialized AI committees to establish ethical guidelines. Likewise, Kenya, South Africa, and Rwanda are progressing with their own AI initiatives.
The African Continental Free Trade Area (AfCFTA) will necessitate harmonised strategies regarding AI regulation, data protection, and cross-border digital commerce. Ghana’s proactive approach establishes it as a pivotal participant in these ongoing deliberations.
The EU AI Act:
The EU AI Act, recognised as the world’s first comprehensive artificial intelligence legislation, commenced phased enforcement in 2025. Its transparency requirements mandate explicit disclosure of content produced by AI systems.
The Act’s prohibited practices include the use of emotion-recognition technologies in workplaces and the exploitation of vulnerable individuals through AI-driven advertising. Non-EU organisations whose AI outputs are accessible within the EU fall under its regulatory scope. For Ghanaian brands seeking expansion into European markets, adherence to these regulations is imperative.
PRACTICAL RECOMMENDATIONS FOR BRANDS:
The challenges and risks posed by AI-generated advertisements, as they affect AI developers, brands/advertisers, platform owners, and consumers, can be mitigated when the respective stakeholders in the ecosystem adhere to the points below:
- IP Creation: Ensure that human creative input is documented for AI-assisted works. Core brand assets should not depend solely on AI generation.
- Training Data: Monitor the usage of copyrighted materials in third-party AI model training. Evaluate opt-out strategies or licensing arrangements as appropriate.
- Trademark Use: Perform comprehensive trademark clearance for all AI-generated brand names, slogans, and logos prior to deployment.
- Advertising Compliance: Clearly identify AI-generated advertisements to consumers. Disclosures must be prominent and not obscured within fine print.
- Vendor Contracts: Update contracts with agencies and technology vendors to appropriately assign liability for intellectual property infringement resulting from AI-generated outputs.
- Regulatory Monitoring: Keep abreast of developments related to the Emerging Technologies Bill and the new Data Protection Bill in Parliament. Prepare for potential requirements regarding mandatory registration of AI systems with the Emerging Technologies Agency.
- Consumer Protection: Refrain from using AI-generated content that could mislead consumers concerning authenticity, endorsements, or performance.
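Several of the recommendations above (documented human contribution, prominent disclosures, trademark clearance) lend themselves to a simple automated pre-publication gate. The Python sketch below illustrates one way a brand or agency team might encode such a checklist; the field names (`ai_generated`, `disclosure_text`, `human_contribution_log`, `trademark_cleared`) are illustrative assumptions, not a prescribed standard, and a real workflow would reflect local legal advice.

```python
# Minimal sketch of an automated compliance gate for AI-assisted ad assets.
# All field names are illustrative assumptions, not a prescribed standard.

from dataclasses import dataclass, field

@dataclass
class AdAsset:
    name: str
    ai_generated: bool
    disclosure_text: str = ""                       # consumer-facing AI disclosure
    human_contribution_log: list = field(default_factory=list)  # documented human input
    trademark_cleared: bool = False                 # clearance search completed

def compliance_issues(asset: AdAsset) -> list:
    """Return a list of checklist failures for the asset (empty list = pass)."""
    issues = []
    if asset.ai_generated and not asset.disclosure_text.strip():
        issues.append("missing prominent AI disclosure")
    if asset.ai_generated and not asset.human_contribution_log:
        issues.append("no documented human creative contribution")
    if not asset.trademark_cleared:
        issues.append("trademark clearance not recorded")
    return issues

if __name__ == "__main__":
    ad = AdAsset(name="summer-campaign-banner", ai_generated=True)
    for issue in compliance_issues(ad):
        print(f"{ad.name}: {issue}")
```

A gate like this does not replace legal review; it simply ensures that no AI-assisted asset reaches publication without the paper trail the recommendations above call for.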
CONCLUSION:
BRANDS, AI, AND THE PATH FORWARD
The advent of artificial intelligence does not justify disregarding established legal and ethical standards. Ghanaian brands remain safeguarded by longstanding areas of law, such as copyright, trademark, contract, and consumer protection, which are fully relevant. AI does not excuse infringement; rather, it introduces additional channels through which infringement may occur.
In Ghana, the issue is not the necessity of AI regulation, but rather the methodology for doing so. The GAIPG, Emerging Technologies Bill, and revised Data Protection Bill exemplify a deliberate and ethically informed strategy that seeks to harmonise innovation with responsibility.
Brands that integrate compliance measures from the outset such as documenting human contributions, reviewing AI-generated content for potential intellectual property violations, and providing clear disclosures can mitigate legal exposure while fostering consumer trust within an increasingly AI-led digital landscape.
Ultimately, as emphasised by judicial authorities, technological advancement proceeds rapidly, yet the legal principles of fairness, authorship, and accountability remain steadfast.
DISCLAIMER: The Views, Comments, Opinions, Contributions and Statements made by Readers and Contributors on this platform do not necessarily represent the views or policy of Multimedia Group Limited.
Source: www.myjoyonline.com
