
Responsible GPT use

1. Responsible AI

Welcome back! We've covered GPT models and their uses and identified a potential use case for implementing the tool. We'll now discuss responsible AI, including some specific considerations and how to handle inputs and outputs, before going through some real-life examples.

2. Responsible AI

Responsible AI is the practice of developing and using AI systems and tools in a way that benefits society while minimizing the risk of negative consequences.

3. Responsible AI

Responsible AI is predominantly discussed in the context of developing new AI technologies, but proper use is just as important. Ensuring AI is used safely is a shared responsibility, and we can't blame the tools for errors. For example, in 2023, an attorney used a GPT tool for legal research, but the tool generated fake court cases, which led to serious ramifications for the attorney.

4. Considerations

During the development of any AI tool, the developer, whether an individual or an organization, must consider factors such as lawfulness,

5. Considerations

fairness,

6. Considerations

transparency,

7. Considerations

diversity

8. Considerations

and inclusion,

9. Considerations

accountability,

10. Considerations

privacy,

11. Considerations

and security.

12. User considerations

As users of these tools, we also need to consider these factors: ensuring our behavior complies with laws and regulations (which differ depending on location and industry), treating all individuals equally without discrimination, understanding how the tool works so we can explain its results to stakeholders, and respecting individual rights while protecting business data from unauthorized access or malicious use. Our responsibility is to apply these considerations when writing a prompt or evaluating the output of a GPT tool. For example, some phrases in an AI-generated job description may be considered discriminatory by certain cultures or groups; a simple screen like the sketch below can flag such phrases for review.
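
To make this concrete, here is a minimal sketch of how a user might screen generated text before publishing it. The flagged phrases are illustrative assumptions for the example, not an authoritative list; a real screen would use a vetted, locale-specific vocabulary.

```python
# Illustrative, non-exhaustive phrases that some audiences may read as
# exclusionary in a job description; replace with a vetted vocabulary.
FLAGGED_TERMS = ["young and energetic", "digital native", "recent graduate", "rockstar"]

def screen_job_description(text: str) -> list[str]:
    """Return any flagged phrases found in the generated text."""
    lowered = text.lower()
    return [term for term in FLAGGED_TERMS if term in lowered]

draft = "We want a young and energetic rockstar to join our sales team."
for phrase in screen_job_description(draft):
    print(f"Review before publishing: '{phrase}' may exclude some candidates.")
```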

13. Check the policy

When using a tool provided by our employer, the employer will likely supply a code of conduct and a policy on its use, including what data the tool is expected to ingest. It's important to follow these policies, which may cover the use of financial, personal, or proprietary data, to name a few. These policies will differ depending on the business, industry, regulatory body, and tool.
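
As a hedged illustration, one way to make such a policy actionable is to encode it as data and check a prompt against it before submission. The category names here are assumptions for the sketch, not taken from any real policy.

```python
# A sketch encoding a hypothetical usage policy as a set of disallowed
# data categories; real policies vary by business, industry, and tool.
POLICY_DISALLOWED = {"financial", "personal", "proprietary"}

def prompt_allowed(data_categories: set[str]) -> bool:
    """Return False if the prompt touches any category the policy disallows."""
    violations = data_categories & POLICY_DISALLOWED
    if violations:
        print(f"Policy check failed for: {sorted(violations)}")
    return not violations

print(prompt_allowed({"public", "marketing"}))  # True: nothing disallowed
print(prompt_allowed({"financial", "public"}))  # False: flags 'financial'
```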

14. General input considerations

Generally, we should avoid using personally identifiable information such as government ID numbers, addresses, or medical or banking information in GPT tools unless explicitly approved. The same applies to proprietary information like confidential strategies or legal and financial data.
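
One lightweight safeguard, sketched here under the assumption that simple regular expressions are acceptable as a first pass, is to redact obvious identifiers before a prompt leaves our hands. Real PII detection needs much broader coverage and, ideally, an approved redaction library.

```python
import re

# Illustrative patterns only: these catch a few obvious formats (email,
# US SSN, US phone) and miss names, addresses, and most ID numbers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched identifiers with placeholders before sending the prompt."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Summarize: reach Jo at jo@example.com or 555-867-5309."))
```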

15. General input considerations

The same precautions apply to pre-installed tools on personal devices, as using personal information may expose it to the provider or make us vulnerable to attacks.

16. General output considerations

We must also consider the tool's output. Some tools may still exhibit bias, where the output unfairly favors one outcome based on the data the model has seen; context-tracking failures, where the tool loses track of the conversation when the context switches; and hallucination, where it confidently provides an incorrect answer. No model or tool can guarantee 100% accurate and correct results. It's our responsibility to understand and follow policies and to evaluate and verify the output to ensure safe and responsible use. Even generated text is worth proofreading before we use it.
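
As a closing illustration, here is a minimal human-in-the-loop sketch that gates a tool's output behind an explicit reviewer checklist. The checklist items are assumptions for the example; a real list would come from your organization's policy.

```python
def review_output(generated_text: str, checks: dict[str, bool]) -> bool:
    """Gate a GPT tool's output behind an explicit human checklist."""
    # Hypothetical checklist items, chosen for this sketch only.
    required = {"facts_verified", "no_sensitive_data", "proofread"}
    missing = [item for item in required if not checks.get(item)]
    print(f"Reviewing: {generated_text[:40]}...")
    if missing:
        print(f"Blocked, unchecked items: {', '.join(sorted(missing))}")
        return False
    return True

draft = "Quarterly revenue grew 12%, according to the model's summary."
ok = review_output(draft, {"facts_verified": True, "no_sensitive_data": True, "proofread": False})
print("Safe to use." if ok else "Needs another pass.")
```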

17. Let's practice!

Time for some practice.