Ethics, etiquette and disclosure when using AI tools in the workplace

AI has landed as an everyday tool in the workplace. And while new technology seems to emerge every week, practices, policies and regulations haven’t yet moved at quite the same pace.

That means it’s over to us to make choices about how we use it, what’s considered acceptable, and what’s not. 

As we all learn new tech, and as AI’s capabilities evolve, our ideas around the ‘rules’ of engagement may change too. But with the right guardrails in place, we can start to collectively ensure our use of AI is safe, considerate and conscious of others.  

Here are five key points of AI etiquette to keep in mind:

Select your AI tools carefully 

AI tools rely on large amounts of data to develop their models, iterate and improve their outputs. Many free tools do not have adequate protections in place to prevent your data from being added to a public dataset.

When evaluating AI tools, check the policies and protections in place and make sure you understand what will happen to the information you provide as prompts or inputs (see our point on privacy below as well).

As with other technology decisions, consider what the organisation already has in place, and how this fits in with the application strategy or technology roadmap. 

Disclose the use of AI meeting assistants 

AI meeting assistants, such as those built into Zoom and Microsoft Teams or provided by Otter.ai and Fireflies.ai, are designed to make virtual meetings more efficient and productive by automating tasks for participants. These assistants can handle several functions, including transcription and note-taking, meeting summaries and automated follow-ups.

While AI meeting assistants offer several advantages, they also present potential privacy problems, which is why disclosing their use is important. And because they record and process audio and video data, a protocol should be set around what can be recorded and who can access it before any potentially sensitive or confidential meeting information is shared.

It’s also a good idea to run through training so you understand your sharing settings – for example, will an automated transcript go to all attendees and invitees of a meeting, just to you as the host, or only to the person running the AI assistant?

If you’re planning to use one as a meeting host or attendee, check first, and make sure you have permission to record and share meeting content. 

Privacy and protection 

Privacy is an essential consideration for using AI tools responsibly. In May 2023, Aotearoa’s Privacy Commissioner released his expectations around the use of generative AI, with the caveat that the recommendations would change as the technology does.

Information entered into open tools (like ChatGPT) is not easily retrieved, and there are limited controls around how it’s used; one practical safeguard is to redact personal details before they ever reach the tool (see the sketch after these questions). The Privacy Commissioner recommends considering key questions with regard to privacy principles, such as:

  • Is the training data behind an AI tool relevant, reliable, and ethical? 

  • What was the purpose for collecting personal information? Is your use related? 

  • How are you testing that AI tools are accurate and fair for your intended purpose? Are you talking with people and communities with an interest in these issues?
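
To make that safeguard concrete, below is a minimal, hypothetical sketch in Python of redacting personal details before a prompt leaves your organisation. The redact helper and its patterns are illustrative assumptions only, not a complete solution – real personal information takes many more forms than email addresses and phone numbers.

```python
import re

# Hypothetical patterns for common personal identifiers. Illustrative only:
# a real redaction step would cover names, addresses, IDs and more.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

prompt = "Summarise this email from jane.doe@example.com, ph +64 21 555 0123."
print(redact(prompt))
# Summarise this email from [EMAIL], ph [PHONE].
```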

Be transparent about generative AI outputs 

Many of us are now experimenting with AI-based tech to create images, write marketing content, script application code, draft articles and social media posts, and generate videos and audio recordings. But do we need to make it obvious that AI has had a hand in our outputs?

Political parties were in the spotlight earlier this year over their use of AI, when it became evident that computer-generated images had been used in ads. There are calls to introduce laws around election campaigning (mandated disclosure is on its way in Europe, with discussions also underway in the US). But what about the rest of us?

While it could be argued that both humans and AI are capable of being misleading, we recommend creating a policy around transparency and disclosure of AI use in your business wherever there’s potential for content to mislead, confuse or misrepresent.

That’s because disclosures shape how your audience interprets and responds to the information you give them. A disclosure could look like:

Disclosure: The following content was generated entirely by an AI-based system based on specific requests to the AI system. 

Or 

Disclosure: The following content was generated by me with the assistance of an AI-based system to augment the effort.  
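
If your content is produced programmatically, the disclosure can be attached at creation time rather than left to memory. Below is a minimal sketch in Python; generate_post is a hypothetical stand-in for whichever AI service you actually call, and the wording simply reuses the examples above.

```python
AI_GENERATED = ("Disclosure: The following content was generated entirely by "
                "an AI-based system based on specific requests to the AI system.")
AI_ASSISTED = ("Disclosure: The following content was generated by me with the "
               "assistance of an AI-based system to augment the effort.")

def generate_post(prompt: str) -> str:
    """Hypothetical stand-in for a call to whichever AI service you use."""
    return f"(AI draft responding to: {prompt})"

def with_disclosure(content: str, assisted: bool = False) -> str:
    """Prepend the appropriate disclosure so it travels with the content."""
    disclosure = AI_ASSISTED if assisted else AI_GENERATED
    return f"{disclosure}\n\n{content}"

draft = generate_post("Write a short post about our new product line")
print(with_disclosure(draft, assisted=True))
```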

Moreover, there is a grey area around whether the outputs of generative AI can actually be considered your own content. If you check the fine print, platforms often have policies around passing off AI-generated content as your own. For example, OpenAI assigns ownership of output to its users, but its terms also state that you may not “represent that output from the Services was human-generated when it is not”.

Beware of bias and misinformation 

If it’s on the internet, then it must be true, right? Nope! There is as much damaging and misleading information out there as there is accurate information. While AI tools can be amazing assistants in their ability to significantly reduce research time, they can ‘hallucinate’ and state incorrect information as fact, as well as amplify a very human problem: bias. AI tools like ChatGPT are trained on large amounts of data: the good, the bad and the ugly.

It’s important to supplement AI outputs with your own nuanced perspective, critical thought and independent research. And it’s worth pointing out that a disclosure (as mentioned above) does not absolve you of responsibility if your content turns out to be problematic.

There are plenty of productivity benefits that come with AI-powered tools. However, their use comes with considerations to make sure you’re acting in alignment with your principles as a business. Ethics is, of course, a subjective discussion, and will come down to what your organisation considers responsible use. But given your use of AI tools can only be expected to grow, now is a good time for your business to create a process around your choice of platforms (for example, for privacy analysis) and a protocol around how your employees use AI and disclose that use. Reach out if you’d like to chat about this further.
