testRigor Blog

Weekly QA Testing Knowledge

How to Test Prompt Injections?

As AI-powered applications such as OpenAI GPT-4 and other similar Large Language Models (LLMs) come into play, prompt injection attacks have become one of the key security issues we are dealing with. These attacks trick an AI model by introducing malicious input, which defers its normal instructions or causes it to do something unintended. In …

Test Documentation: Best Practices with Examples

The act of documenting our actions is something we’ve been doing since we were little kids. Remember your chemistry ...

Defect Triage in Software Testing

“Don’t fix bugs later; fix them now” – Steve Maguire. This is an interesting, proactive approach to not just ...

AI Assistants vs AI Agents: How to Test?

AI has changed how software functions, especially with AI Assistants and AI Agents. These systems have already become an integral ...

Full-Stack Tester: Role and Skills

The software industry is currently more aggressive and is no longer interested in seeking resources that concentrate only on one ...

What is AIOps?

Think of a critical application that is experiencing performance issues. Traditionally, you’ll see IT teams would need to ...

Defect Clustering in Software Testing

“Never allow the same bug to bite you twice” – Steve Maguire. This quote tells us the importance of dealing with bugs ...
1 43 44 45 113