Unless you override it, your organization’s policy for AI-driven tools is “anything goes.” That’s because your developers want to get their job done as quickly as possible. If that involves having GitHub Copilot write part of the code or copying a code block into ChatGPT for debugging help, so be it.
If you don’t have secrets, maybe that’s fine with you. But even if OpenAI isn’t training ChatGPT on your prompts, the company has not been especially diligent about keeping them safe. You should assume that everything your developers paste into ChatGPT will eventually leak.
That includes your data. AI tools are very good at data cleaning and visualization. Your Data Scientists are surely pasting data into ChatGPT and getting back fully functional Python code to run in a Jupyter Notebook. Unless you tell them not to.
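To make that concrete, here is a hedged sketch of the kind of notebook code that comes back from such a prompt. The column names and rows below are invented for illustration, but in a real exchange they would be your actual records, pasted verbatim into the prompt and sitting in someone else’s logs.

```python
# Typical ChatGPT round-trip: raw data goes into the prompt, code like this comes back.
# The dataset here is hypothetical -- in practice it would be your real data.
import io
import pandas as pd
import matplotlib.pyplot as plt

raw = """customer_id,signup_date,revenue
1001,2023-01-15,249.00
1002,2023-02-03,
1003,2023-02-19,87.50"""  # in a real prompt: actual customer rows, pasted verbatim

df = pd.read_csv(io.StringIO(raw), parse_dates=["signup_date"])
df["revenue"] = df["revenue"].fillna(0.0)  # the kind of basic cleaning these tools write instantly

# Quick visualization of monthly revenue, ready to drop into a Jupyter Notebook
df.groupby(df["signup_date"].dt.to_period("M"))["revenue"].sum().plot(kind="bar")
plt.title("Monthly revenue")
plt.tight_layout()
plt.show()
```

The code is genuinely useful, which is exactly why the workflow is so tempting: the leak isn’t the code coming back, it’s the data that had to go out to get it.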
If I asked one of your developers or Data Scientists about your policy on AI tools, would they know it? And would they follow the rules, or would they take the 10x or 100x productivity boost?