Governance and security
1. Governance and security
Considering the complexity of LLM applications, governance and security are crucial.

2. LLM lifecycle: Governance and security
It is the final topic we will cover in the operational phase.

3. Governance and security
Neglecting governance and security can have serious consequences. Governance involves the policies, guidelines, and frameworks that govern the development, deployment, and usage of LLM applications. Security involves measures to prevent unauthorized access, data breaches, adversarial attacks, and the misuse or manipulation of a model's outputs or capabilities.

4. Access control
A widespread way to ensure information security and manage access is role-based access control. In this framework, permissions are assigned to roles, and users are assigned to those roles. All APIs must adhere to security standards, accepting requests only from users with the appropriate permissions. It's advisable to adopt a zero trust security model, requiring all users to be authenticated, authorized, and continuously validated. When interacting with the LLM, users may have different roles, which can affect their access to confidential information retrieved through RAG. Thus, it's crucial to ensure the application assumes the correct role, adjusting it for each request if necessary, when accessing external information.

5. Threat: Prompt injection
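As a minimal sketch of the role-based checks from the access-control discussion, the application can validate the requesting user's role against a permission map on every request before retrieving documents for RAG. All names here (the permission map, helper functions, role and collection names) are hypothetical, not from a specific library:

```python
# Hypothetical role -> permitted document collections mapping.
ROLE_PERMISSIONS = {
    "analyst": {"public_docs", "internal_reports"},
    "guest": {"public_docs"},
}


def can_access(role: str, collection: str) -> bool:
    """Check whether a role may read a given document collection."""
    return collection in ROLE_PERMISSIONS.get(role, set())


def retrieve_for_user(role: str, collection: str, query: str) -> list[str]:
    """Fetch RAG context, validating the role on every request (zero trust)."""
    if not can_access(role, collection):
        raise PermissionError(f"role {role!r} may not read {collection!r}")
    # Stub retrieval standing in for a real vector-store lookup.
    return [f"documents from {collection} matching {query!r}"]
```

The key design point is that the check runs per request rather than once per session, so a role change (or a request made on behalf of a different user) is re-validated before any confidential collection is touched.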
Despite access controls, we still need to be aware of common threats. One such threat is prompt injection, where attackers manipulate input fields or prompts within an application to execute unauthorized commands or actions. Tools are available to detect these adversarial attacks. Such attacks can have serious repercussions for an organization; in chat applications, for instance, allowing arbitrary text can lead to reputational damage or legal liability. To mitigate this risk, we should assume that prompt instructions can be overridden and their contents uncovered. Essentially, treat the LLM as an untrusted user. Identifying and blocking known adversarial prompts can also enhance security.

6. Threat: Output manipulation
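Blocking known adversarial prompts, as suggested in the prompt-injection discussion, can start as simply as a pattern screen run on user input before it reaches the model. The patterns below are illustrative examples only; a production system would combine such a deny-list with a trained classifier:

```python
import re

# Example patterns for well-known injection phrasings (not exhaustive).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.IGNORECASE),
]


def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known adversarial pattern."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)
```

A screen like this only catches phrasings we have seen before, which is exactly why the text above advises treating the LLM as an untrusted user even with such filters in place.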
Output manipulation, like prompt injection, alters an LLM's output. The consequences are comparable, but it's important to recognize that the LLM's output can also be leveraged in downstream attacks, for example by manipulating the LLM application into executing malicious actions on behalf of the user. To mitigate this risk, avoid granting the application any authority or permissions beyond what it needs to perform its task. Additionally, we can implement measures to censor and block specific undesired outputs.

7. Threat: Denial-of-service
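One way to censor undesired outputs, as described in the output-manipulation section, is a deny-list filter applied to the model's response before it is returned or acted upon. The blocked terms here are placeholder examples, not a recommended list:

```python
# Example deny-list of content we never want to pass downstream.
BLOCKED_TERMS = {"rm -rf", "drop table"}


def censor_output(text: str) -> str:
    """Replace model output that matches known-bad content with a refusal."""
    lowered = text.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return "[output blocked by content filter]"
    return text
```

Because the output may feed downstream tools, the filter runs before the response is shown to the user or executed, not after.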
In denial-of-service attacks, users flood our LLM application with requests, causing substantial cost, availability, and performance issues, particularly in lengthy chains with multiple components. Mitigations include limiting request rates and capping resource usage per request.

8. Threat: Data integrity and poisoning
Data poisoning injects false, misleading, or malicious data into our training set, which can then propagate further during LLM fine-tuning or training. While poisoning is usually deliberate, it can also occur unintentionally, such as when copyrighted material or personal information ends up in the training set. Some content may even be harmful. Mitigation strategies include sourcing data from trusted sources and verifying their legitimacy. During training, filters and detection methods should be employed to identify and remove poisoned data. Additionally, output censoring can be used to block known harmful content.

9. Protecting ourselves
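A pre-training filter along the lines described in the data-poisoning section might check each record's provenance against a trusted-source list and drop examples matching known-bad content. The source names and snippet are hypothetical placeholders:

```python
# Hypothetical provenance allow-list and known-poisoned content list.
TRUSTED_SOURCES = {"internal_wiki", "curated_corpus"}
KNOWN_BAD_SNIPPETS = {"visit evil.example.com"}


def filter_training_data(records: list[dict]) -> list[dict]:
    """Keep only records from trusted sources without known-bad content."""
    clean = []
    for rec in records:
        if rec.get("source") not in TRUSTED_SOURCES:
            continue  # unverified provenance
        if any(bad in rec.get("text", "") for bad in KNOWN_BAD_SNIPPETS):
            continue  # matches a known poisoning pattern
        clean.append(rec)
    return clean
```

In practice this would sit alongside statistical detection methods during training, since deliberate poisoning rarely announces itself with an exact string match.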
To protect ourselves, we should employ the latest security standards and implement mitigation strategies. It's essential to assume the perspective of a malicious user targeting our system. Resources like OWASP provide up-to-date lists of known security threats for LLM applications, allowing us to stay informed. The threats discussed in this video are not exhaustive and vary depending on the specific application.

10. Let's practice!
Let's practice!