“AI” (Artificial Intelligence) has been the hottest tech buzzword for the past year and a half, and is likely to continue to be a big, evolving topic for some time to come. Integrating AI into your business operations should be done with explicit consideration of both the benefits and the risks.
First of all, let’s make the distinction between two important terms.
- Artificial Intelligence (AI) is a decades-old branch of computer science focused on replicating human decision-making and analysis. AI features are well-established within tools we use every day. For example, advanced spam filtering doesn’t just look for a pre-set list of “bad” words or patterns, but adjusts what it’s looking for in real time as incoming email patterns emerge.
- Generative AI (genAI) burst into view in 2022 with public betas of tools like ChatGPT and Dall-E. This subcategory of AI is trained on vast catalogs of data to output new, original text, images, audio, or video.
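To make the spam-filtering example above concrete, here is a minimal, hypothetical sketch (not any real product’s implementation) of a filter whose word statistics update with every message it sees, so what it flags shifts as new patterns emerge:

```python
from collections import Counter

class AdaptiveSpamFilter:
    """Toy illustration of adaptive filtering: rather than matching a
    fixed word list, the filter re-weights words as it learns from
    each incoming message."""

    def __init__(self):
        self.spam_words = Counter()  # word counts seen in spam
        self.ham_words = Counter()   # word counts seen in legitimate mail

    def learn(self, text: str, is_spam: bool) -> None:
        """Update the running statistics with a newly classified message."""
        words = text.lower().split()
        (self.spam_words if is_spam else self.ham_words).update(words)

    def spam_score(self, text: str) -> float:
        """Return a 0.0-1.0 score based on everything learned so far."""
        words = text.lower().split()
        spam_hits = sum(self.spam_words[w] for w in words)
        ham_hits = sum(self.ham_words[w] for w in words)
        total = spam_hits + ham_hits
        return spam_hits / total if total else 0.5  # 0.5 = no evidence yet

# The same message scores differently depending on what the filter
# has already seen — that's the "adjusts in real time" behavior.
f = AdaptiveSpamFilter()
f.learn("win free prize now", is_spam=True)
f.learn("meeting agenda attached", is_spam=False)
print(f.spam_score("free prize inside"))  # high: overlaps learned spam words
```

Real filters use far more sophisticated statistics, but the core idea is the same: the model’s notion of “spammy” is continuously updated rather than fixed in advance.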
Most of the debate you’ve heard about AI over the past year and a half is really about genAI — new web-based tools and features of your favorite apps that can answer your questions in full paragraphs, summarize long passages into bullet points, and create new images based on text- or image-based prompts. That debate is about, for example, the ethics of using genAI for assignments at school or work, the possibility that human jobs might be replaced by bots, and even the likelihood of genAI taking over the world.
Guidelines for responsible use
GenAI tools hold many promises for organizations of all sizes. For the moment, let’s set aside the fate of humankind, and explore the risks and benefits of using the genAI tools that exist today within your organization.
There are two broad security issues to consider: what you input, and what the genAI tool outputs.
Input
Your input may be a question to prompt an answer from ChatGPT, a set of documents to create a digest for meeting notes, or a description of a specific scene for an illustration to use with a social media post. You should assume that your input is being saved and used by the genAI tool you’ve selected. So, first and foremost: do not use any sensitive, proprietary, or personally identifiable information as the input data for a genAI assignment.
Each tool you use has its own terms and conditions, and if you want a better understanding of how your input data might be used, it’s worth exploring these and any other privacy statements your genAI vendor has prepared. But keep in mind that this is a quickly expanding industry with billions of dollars at stake, and genAI’s success depends on access to as much data as it can get: there is no guarantee that today’s terms and conditions will be the same tomorrow, and once you have shared any data, there may be no way to unshare it.
Output
If you prompt a genAI program to create content and then use that output for a memo, presentation, or blog post, you are responsible for its accuracy. Be sure to double-check the genAI’s facts, and don’t assume the bot knows more than you do. Along those same lines, if you use content from ChatGPT or a similar source, disclose and cite it accordingly, confirming that the generated content has been reviewed by the author for accuracy and acknowledging responsibility for the statements made.
In particular, genAI tools can be very helpful for generating or editing code based on plain English prompts, creating opportunities for non-programmers to build apps, websites, or scripts. But executing that code on your own computer or in a networked environment without first verifying what it does risks data loss or system compromise. Understand and test generated code in a controlled development environment to protect yourself and your organization.
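One lightweight precaution along these lines is to run an unfamiliar snippet in a separate process with a hard timeout and inspect its output before trusting it. This is a minimal sketch, not a security sandbox (for real isolation, use a disposable container or VM); the function name here is our own illustration, not part of any tool:

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted_snippet(code: str, timeout_seconds: int = 5) -> subprocess.CompletedProcess:
    """Run a generated Python snippet in a child process with a timeout.

    This only isolates crashes and hangs from your main program; it does
    NOT protect against malicious file or network access. Treat it as a
    first checkpoint, not a sandbox.
    """
    # Write the snippet to a temporary file so we can run it as a script.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        return subprocess.run(
            [sys.executable, "-I", path],  # -I: Python's isolated mode
            capture_output=True,
            text=True,
            timeout=timeout_seconds,       # kill the process if it hangs
        )
    finally:
        os.unlink(path)

# Inspect stdout, stderr, and the exit code before running the code for real.
result = run_untrusted_snippet("print(sum(range(10)))")
print(result.stdout.strip())  # -> 45
print(result.returncode)      # -> 0
```

Even a simple checkpoint like this forces you to look at what the code actually produces before it touches your real data or systems.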
Deeper dive
The Civic AI Observatory has a great article that breaks this matter down in extensive detail, and we highly recommend giving it a read.
In particular, we appreciated the examples they shared of sample policies that may be instructive as you form your own organization’s posture:
- The UK government’s official guidance for how civil servants should use this technology
- A sample policy from a smaller organization that considers ethical guidelines for use in campaigning
- One-Pager for Staff from London’s Office of Technology
- A sample policy from the Society for Innovation, Technology and Modernisation
If you’re a Macktez client, and you still have questions, reach out to your Technical Account Manager to start a conversation.