Introduction
Generative Artificial Intelligence (“GenAI”) tools have become an increasingly prominent feature of the modern workplace. Whether used to draft correspondence, summarize lengthy documents, generate code, or support strategic planning, these tools offer significant productivity benefits. At the same time, their growing adoption, often outside the scope of formal corporate oversight, raises serious concerns around data protection, information security, and regulatory compliance.
On March 5, 2026, the Turkish Data Protection Authority (the “Authority”) published a document titled The Use of Generative AI Tools in the Workplace (the “Document”). Rather than a heavy-handed set of prohibitions, the Document is structured as a practical guide for the modern enterprise. Its underlying thesis is that banning AI use in the workplace is a dead end: prohibition will only push usage underground. The real challenge for businesses is therefore not stopping GenAI usage, but casting light on the unsanctioned GenAI usage already present in their operations, a phenomenon the Document terms “Shadow AI”.
The Rise of “Shadow AI”
The concept of “Shadow AI” builds on a long-standing challenge familiar to IT governance: Shadow IT. For years, organizations have contended with employees using unauthorized personal cloud storage services, messaging applications, and other third-party tools to circumvent official corporate systems. The Authority draws a deliberate parallel between the two phenomena, while emphasizing that Shadow AI carries materially greater risks.
Shadow IT typically involves the unauthorized storage or transfer of corporate data on external platforms. Shadow AI, by contrast, involves actively feeding corporate data into third-party generative models that process, and in certain configurations may retain or learn from, the information provided to them.
The distinction is important. When an employee stores a file on a personal cloud drive, the data sits on a third-party server, but it remains static. When that same employee pastes a client contract into a public GenAI tool, the data is actively processed by a system that may retain, learn from, or in the worst cases, even reproduce it in outputs served to other users. The organization loses not just custody of the data, but any meaningful visibility into what happens to it.
The Document identifies several factors driving the proliferation of Shadow AI. GenAI tools are widely accessible, often free or low-cost, require no specialized technical knowledge, and deliver immediate results. In the absence of clear corporate policies, employees tend to adopt these tools based on individual initiative to accelerate routine tasks, improve output quality, or simply meet tight deadlines. The result is an untracked and unregulated flow of potentially sensitive corporate and personal data into third-party platforms, entirely outside the organization’s control.
The Risk Landscape
The Authority identifies several categories of risk arising from uncontrolled GenAI use in the workplace, all with significant practical implications for businesses:
- Accountability and Auditability: Where GenAI tools are used outside corporate monitoring and logging systems, it becomes exceedingly difficult to trace how specific outputs were generated, what data was used as input, and on what basis certain conclusions were reached. This undermines incident response capabilities and makes it significantly harder to demonstrate compliance with applicable regulations. In an enforcement context, this is a serious vulnerability: if the Authority investigates a data processing complaint and the organization cannot demonstrate what happened, the inability to account for the processing activity itself is a compliance failure.
- “Automation Bias” and Hallucinations: GenAI tools can produce outputs that are inaccurate, misleading, or biased. The Document draws attention to the risk of “automation bias”, a well-documented tendency among users to accept machine-generated outputs at face value, without adequate critical review. This risk is compounded by the phenomenon of hallucinations, where GenAI systems generate content that appears coherent and authoritative but is factually incorrect. When such outputs are incorporated into corporate decision-making without proper verification, the consequences can be material.
- Intellectual Property and Trade Secrets: Sharing source code, product designs, business strategies, or other competitively sensitive information with publicly accessible GenAI tools creates a tangible risk of intellectual property exposure. In certain cases, data submitted to these platforms may be used in model training or improvement processes, or may become accessible to unauthorized parties, including direct competitors.
- Information Security and Cyber Risks: The use of unmanaged GenAI tools through unsecured APIs, personal devices, or unvetted integrations expands an organization’s attack surface. This can increase exposure to malware, unauthorized access, data loss, and broader threats to the integrity of corporate systems.
- Personal Data Protection: From a data protection standpoint, Shadow AI presents acute risks. When employees input client details, employee records, health data, or other personally identifiable information into external GenAI tools, they may be effecting an unauthorized transfer of personal data to a third party. The Authority underscores that Law No. 6698 on the Protection of Personal Data applies in a technology-neutral manner: its obligations apply fully to personal data processing conducted through GenAI systems, just as they do to any other software or hardware process. The Document also highlights the possibility that personal data embedded in user prompts may surface in outputs generated for other users, creating an additional vector for unauthorized disclosure.
- Reputational Harm: Finally, the Document flags the risk to institutional credibility. When unverified GenAI outputs reach clients or stakeholders, the reputational damage falls on the organization, not on the tool. In sectors where trust and expertise are core to the client relationship, this risk should not be underestimated.
The Case Against Prohibition
One of the most significant positions taken in the Document is the Authority’s explicit acknowledgment that blanket prohibitions on GenAI use are neither realistic nor effective. The Authority recognizes that in light of the speed, efficiency, and accessibility these tools offer, outright bans are unlikely to prevent actual usage. Rather, prohibitive approaches tend to drive GenAI adoption further underground, compounding precisely the visibility and governance challenges they seek to address.
This position reflects a pragmatic regulatory outlook. The Authority advocates for a structured governance and compliance perspective grounded in guidance and awareness rather than restriction. For organizations, this means the regulatory expectation is not that GenAI will be absent from the workplace, but that its use will be managed.
What Does the Future Look Like?
Drawing on the framework set out in the Document, organizations will need to consider the following measures in order to manage GenAI-related risks while preserving operational benefits:
1. Institutional Policies: Organizations should develop and communicate a formal policy governing the use of GenAI tools in the workplace, addressing which tools are approved for use; the purposes for which they may be employed; the categories of data that may and may not be shared with these tools; the standards applicable to outputs generated by GenAI; and the organization’s approach to risk management in this context.
2. Data Discipline: Employees should be trained to approach GenAI interactions with a disciplined awareness of data sensitivity. The Document recommends that, wherever possible, users should employ anonymized, generalized, and abstract formulations rather than submitting identifiable personal data or commercially sensitive information. For instance, rather than inputting specific names, dates, or financial figures, employees should frame their prompts in generic terms, or at least with placeholders substituting for actual personal data. This practice is particularly important when dealing with special categories of data, including health information, financial records, and data relating to legal proceedings.
3. Human Oversight: The Document emphasizes that GenAI outputs should not be treated as final or authoritative. Organizations should aim to create a culture in which AI-generated content is regarded as a starting point that requires substantive human review before it informs any decision or communication. Establishing review processes that specifically address the risks of automation bias and hallucination is critical to maintaining decision quality and institutional credibility.
4. Technical Controls: Policy measures should be complemented by appropriate technical safeguards. These may include network-level restrictions on access to unapproved GenAI platforms and, more importantly, the provision of enterprise-grade AI solutions with appropriate security configurations, coupled with other complementary technical measures.
5. Awareness and Training: Organizations should conduct regular initiatives covering the capabilities and limitations of GenAI tools, the associated legal and technical risks, and the organization’s applicable policies. The Authority also recommends establishing feedback mechanisms through which employees can report practical challenges and share experiences, enabling the organization to identify emerging risks on an ongoing basis.
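The placeholder practice described under Data Discipline above can be illustrated with a minimal sketch. The patterns, placeholder names, and sample prompt below are illustrative assumptions, not part of the Document; a production-grade redaction layer would need far broader coverage (including name detection, which simple regexes cannot provide reliably).

```python
import re

# Hypothetical example patterns illustrating the "placeholder" practice the
# Document recommends. These regexes are deliberately simple and are NOT a
# complete PII redactor; notably, personal names would require entity
# recognition rather than pattern matching.
PATTERNS = {
    "[DATE]": re.compile(r"\b\d{1,2}[./]\d{1,2}[./]\d{4}\b"),
    "[AMOUNT]": re.compile(r"\b\d[\d.,]*\s?(?:TRY|EUR|USD)\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Substitute generic placeholders for identifiable details
    before a prompt leaves the corporate boundary."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Summarize the claim: the client (client@example.com) paid 5,000 TRY on 12.03.2026."
print(redact(raw))
```

In practice such a filter would sit inside an approved enterprise gateway rather than rely on individual diligence, which is consistent with the Document's preference for managed, enterprise-grade tooling over ad hoc personal use.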
Conclusion
The Document represents a timely and pragmatic contribution to the evolving regulatory discourse on GenAI in the workplace. Its core message is clear: the question is no longer whether employees are using GenAI tools, but whether organizations have the governance structures in place to manage that usage responsibly.
For businesses operating in Turkey, the Document should serve as a starting point to assess their current exposure to Shadow AI, develop or update their GenAI governance frameworks, and ensure that their data protection compliance programs adequately address the risks posed by these rapidly evolving technologies. Organizations that move proactively by establishing clear policies, implementing technical safeguards, and cultivating an informed workforce will be best positioned to harness the benefits of GenAI while meeting their legal and regulatory obligations.
