QA Engineer — AI-Powered Test Automation
About SingleCase
SingleCase is an AI-native legal practice management platform serving law firms across Europe. We build software that helps lawyers manage cases, clients, time, billing, and documents — all in one place. We are a small, high-ownership team building a serious product for a demanding industry. Quality is foundational to what we do, and we believe AI is fundamentally changing how quality is achieved.
The role
We are looking for a QA engineer who thinks of themselves primarily as an AI workflow builder and testing architect — someone who uses AI to multiply testing capacity, not just as a convenience tool.
This is not a role for someone who wants to manually execute test cases. We have a fast-moving roadmap, a lean team, and high standards. The person in this role will design and orchestrate automated testing systems — using AI agents, LLM-assisted test generation, and smart automation — so that our QA coverage scales independently of headcount.
You will own the testing strategy end-to-end: what gets automated, how AI is used to generate and maintain tests, and how quality gates are enforced throughout the development lifecycle.
What you will do
- Design and operate AI-assisted testing workflows: use LLMs to generate test cases from specs and acceptance criteria, so that test coverage keeps pace with new requirements (see the sketch after this list for a concrete flavour)
- Build and maintain end-to-end automated test suites using Playwright (or equivalent), with AI actively generating, reviewing, and maintaining test scripts
- Act as the testing orchestrator: define what AI handles autonomously and what requires human judgment, and architect the system accordingly
- Integrate AI-powered test generation into our CI/CD pipeline so that new features arrive with tests, not after them
- Use AI agents for exploratory testing — running autonomous sessions that surface edge cases human testers might otherwise miss
- Own the regression testing strategy: continuously evaluate which tests to automate, which to retire, and which to hand over to AI-driven maintenance
- Work with product and developers at the spec stage — shaping features for testability before a line of code is written
- Monitor, triage, and communicate quality signals clearly — not just filing bugs but giving the team a live view of product health
- Research and evaluate new AI testing tools as the space evolves rapidly — we expect this role to stay ahead of the curve
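To make the day-to-day concrete, here is a minimal sketch of the kind of artefact this workflow produces: a Playwright spec drafted by an LLM from an acceptance criterion and then reviewed by the QA engineer. The criterion, routes, labels, and data are invented for illustration only and do not describe SingleCase's actual product.

```ts
// Illustrative sketch: a Playwright spec of the sort an LLM might draft from an
// acceptance criterion, then be reviewed and refined by the QA engineer.
// Every route, label, and value below is hypothetical.
import { test, expect } from '@playwright/test';

// Acceptance criterion (hypothetical): "A lawyer can log a time entry on an
// open case and sees it reflected in the case's billable total."
test('logging a time entry updates the billable total', async ({ page }) => {
  await page.goto('/cases/demo-case'); // hypothetical route
  await page.getByRole('button', { name: 'Log time' }).click();
  await page.getByLabel('Duration (hours)').fill('1.5');
  await page.getByLabel('Description').fill('Drafted contract amendment');
  await page.getByRole('button', { name: 'Save entry' }).click();

  // The generated assertion is the part most worth human review:
  // does it actually check the behaviour the criterion cares about?
  await expect(page.getByTestId('billable-total')).toContainText('1.5');
});
```

The interesting work in this role is not the spec itself but the system around it: the prompts and context that produce drafts like this reliably, the review gates that decide when they merge, and the maintenance loop that keeps them honest as the product changes.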
What we are looking for
- Genuine enthusiasm for AI-assisted and AI-driven testing — this is the core of the role, not an add-on
- Hands-on experience building automated test suites (Playwright, Cypress, Selenium, or similar) — you write the code, not just configure the tool
- Experience using LLMs or AI coding tools in a testing or QA context — prompt engineering for test generation, AI-assisted debugging, or autonomous test agents
- Comfortable with TypeScript or JavaScript; Python is a bonus
- Experience with API testing (REST/GraphQL) and CI/CD integration (GitHub Actions or similar)
- Systems thinker: you design processes and tooling, not just individual tests
- Strong communicator — quality signals are only useful if the team understands them
Nice to have
- Experience with AI testing frameworks or agents (e.g. Autify, Reflect, Momentic, or custom LLM-driven test runners)
- Background in SaaS B2B products, legal tech, or other compliance-sensitive domains
- Exposure to performance or load testing (k6, Locust)
- Familiarity with accessibility testing
- A feel for UX and product design — the ability to spot when something works technically but feels wrong to a user, and to communicate that clearly to the product team
What we offer
- Full ownership of the testing strategy and AI tooling stack — you define how quality works here
- A team that takes AI seriously: budget, time, and genuine support to build AI-first workflows
- Direct influence on product decisions — QA is involved from the start, not bolted on at the end
- Small team, high visibility — your work shapes the product, not just verifies it
- Hybrid work from Prague with flexible hours
- Competitive salary benchmarked against the Czech tech market
Hiring process
1. Intro call (30 min) — we want to hear how you think about AI in testing and what workflows you have built or would build.
2. Technical task — a realistic, time-boxed exercise centred on AI-assisted test design. We keep it under 2 hours and respect your time.
3. Technical interview (60 min) — walk us through your approach, your tool choices, and how you would architect our test automation.
4. Final conversation with the CTO and product lead.