Writing tests and securing code with AI
1. Writing tests and securing code with AI
Welcome back! In this video, we’ll learn how to write prompts that generate strong unit tests and reveal code vulnerabilities.
2. Test-driven prompting
The first idea is Test-Driven Prompting, inspired by Test-Driven Development. Instead of asking for a function right away, we define the expected behaviors and edge cases first. Then, we design prompts to match those requirements. For example, instead of: “Write a Python function to parse email addresses.” We use: “Write a Python function to parse email addresses. It should accept a valid address like user@example.com, reject invalid ones like user@@domain, and raise ValueError for empty input.” This helps the model account for edge cases from the start.
3. Test-driven prompting
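A function satisfying that prompt's requirements might look like the following sketch. The function name and the validation pattern are assumptions for illustration; a real model response could differ.

```python
import re


def parse_email(address: str) -> str:
    """Return the address if it is a valid email, otherwise raise ValueError."""
    if not address:
        # Requirement: raise ValueError for empty input
        raise ValueError("Empty input")
    # Simple assumed pattern: one '@', non-empty local part, domain with a dot
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", address):
        # Requirement: reject invalid addresses like user@@domain
        raise ValueError(f"Invalid email address: {address}")
    return address
```

Because the prompt spelled out the edge cases, each requirement maps directly onto a branch of the function.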
We can also ask the model to generate tests. A basic prompt might be: “Generate unit tests in Python for a function that processes form data and inserts it into a database.” But adding detail improves results.
4. Test-driven prompting
For example: “Generate pytest unit tests for the following function. Include cases for empty input, SQL keywords, and special characters. Assume a mock database connection.” This specificity guides the model toward robust, secure tests.
5. Prompting for security
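A detailed prompt like that one could produce tests along these lines. The function under test, its name, and the table fields are all assumptions invented for this sketch; only the three requested cases come from the prompt.

```python
from unittest.mock import MagicMock

import pytest


# Hypothetical function under test (assumed, not from the video)
def insert_form_data(conn, data: dict) -> None:
    if not data:
        raise ValueError("Empty form data")
    # Parameterized query: values are passed as data, not spliced into SQL
    conn.execute(
        "INSERT INTO submissions (name, message) VALUES (?, ?)",
        (data.get("name", ""), data.get("message", "")),
    )


def test_empty_input_raises():
    with pytest.raises(ValueError):
        insert_form_data(MagicMock(), {})


def test_sql_keywords_are_passed_as_parameters():
    conn = MagicMock()  # mock database connection, as the prompt requested
    insert_form_data(conn, {"name": "DROP TABLE", "message": "hi"})
    sql, params = conn.execute.call_args.args
    assert "DROP TABLE" not in sql  # keywords never end up inside the SQL text
    assert params == ("DROP TABLE", "hi")


def test_special_characters_preserved():
    conn = MagicMock()
    insert_form_data(conn, {"name": "O'Brien", "message": "a;b--"})
    _, params = conn.execute.call_args.args
    assert params == ("O'Brien", "a;b--")
```

Each test corresponds to one case named in the prompt, and the mock connection lets the tests run without a real database.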
We can also prompt the model to find security flaws in our code. For example, we can ask it to analyze code for risks: “Scan this Python function for vulnerabilities and suggest safer alternatives.” Even better, we can guide it to look for common flaws, such as SQL injection, XSS, and input validation issues. The more precisely we specify the types of vulnerabilities or the potential weak points of our code, the more effective the model can be.
6. OWASP Top 10
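To see why SQL injection is worth naming explicitly in such a prompt, here is a minimal sketch contrasting a vulnerable query with the parameterized version a model should suggest. The table, data, and payload are toy assumptions for illustration.

```python
import sqlite3

# Toy in-memory database (schema and data are assumptions for illustration)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: user input is spliced directly into the SQL string,
# so the payload rewrites the query's logic
leaked = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()
print(leaked)  # [('admin',)] -- the injection succeeded

# Safer alternative: a parameterized query treats input as data, never as SQL
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # [] -- no user is literally named "alice' OR '1'='1"
```

A prompt that names SQL injection explicitly makes it far more likely the model flags the first query and proposes the second.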
There are other ways to guide the model on which vulnerabilities to look for. You might have heard of the OWASP Top 10, a widely recognized awareness document that outlines the ten most critical web application security risks. It is updated regularly and covers vulnerabilities such as broken access control and cryptographic failures. Even if we are unsure which vulnerabilities our code contains, naming specific issues from this list in our prompts steers the AI to focus explicitly on those common and critical security risks. With the rise of LLM-based applications, an OWASP Top 10 list for Large Language Model Applications has also been developed.
7. OWASP Top 10
For example, let's consider this login() function in Python. We can directly prompt the model to audit it for the OWASP Top 10 vulnerabilities. This way, the model goes through the entire list of vulnerabilities and checks whether each one applies to our use case and poses a potential issue given our implementation.
8. Let's practice!
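The slide's login() function is not reproduced here, but a hardened version in its spirit might look like the sketch below (names and schema are assumptions). The comments mark the OWASP Top 10 items an audit prompt would check.

```python
import hashlib
import hmac
import os
import sqlite3


# Hypothetical login() for illustration; not the exact slide code
def login(conn, username: str, password: str) -> bool:
    # Parameterized query: mitigates A03 (Injection)
    row = conn.execute(
        "SELECT salt, pw_hash FROM users WHERE username = ?", (username,)
    ).fetchone()
    if row is None:
        return False
    salt, pw_hash = row
    # Salted PBKDF2 instead of a fast, unsalted hash:
    # mitigates A02 (Cryptographic Failures)
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(candidate, pw_hash)


# Minimal in-memory setup to exercise the function
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, salt BLOB, pw_hash BLOB)")
salt = os.urandom(16)
pw_hash = hashlib.pbkdf2_hmac("sha256", b"s3cret", salt, 100_000)
conn.execute("INSERT INTO users VALUES (?, ?, ?)", ("alice", salt, pw_hash))

print(login(conn, "alice", "s3cret"))  # True
print(login(conn, "alice", "wrong"))   # False
```

Prompting the model to audit code like this against the full OWASP Top 10 list forces a systematic check rather than an ad hoc one.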
Keeping in mind that AI is not a replacement for human code review or security audits, let's continue!