Scenario
Simulating load on an e-commerce API where multiple users access data every few seconds. A total of 100 requests are executed by concurrent virtual users to measure response time and system behaviour.
Configuration
- Endpoint: https://automationexercise.com/api/brandsList
- Thread Group:
  - Number of Threads: 25 users
  - Ramp-Up Period: 100 seconds
  - Loop Count: 4
  - Total Requests: 25 × 4 = 100 requests
- HTTP Request:
  - Method: GET
  - Protocol: https
  - Server Name: automationexercise.com
  - Path: /api/brandsList
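To make the Thread Group settings concrete, here is a rough Python sketch (not JMeter itself) of what they mean: 25 virtual users started evenly across the ramp-up period, each looping 4 times. The `send_request` function is a stand-in for the real GET call, and `SCALE` shrinks the waits so the sketch runs in a moment.

```python
import time
from concurrent.futures import ThreadPoolExecutor

THREADS = 25     # Number of Threads (virtual users)
RAMP_UP = 100    # Ramp-Up Period in seconds
LOOPS = 4        # Loop Count per user
SCALE = 0.0001   # shrink all waits so the sketch finishes quickly

def send_request(user_id, loop_no):
    # Stand-in for GET https://automationexercise.com/api/brandsList
    time.sleep(0.001)
    return 200

def virtual_user(user_id):
    # JMeter staggers thread start times evenly across the ramp-up period
    time.sleep(user_id * (RAMP_UP / THREADS) * SCALE)
    return [send_request(user_id, i) for i in range(LOOPS)]

with ThreadPoolExecutor(max_workers=THREADS) as pool:
    results = [code for user in pool.map(virtual_user, range(THREADS))
               for code in user]

print(len(results))  # 25 threads x 4 loops = 100 requests
```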
API Performance Testing Workflow:

Test Execution with JMeter:
Test Run #1 – Success Scenario
What happens:
- All requests are sent successfully
- The API responds with valid data

What to analyze:
- Average Response Time → overall performance
- Min / Max → consistency
- Standard Deviation → stability under load
- Throughput → requests per second
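These four metrics can be computed directly from per-request timings. A minimal sketch, using hypothetical sample values in milliseconds (the numbers and the test window are assumptions, not real results):

```python
import statistics

# Hypothetical per-request response times in milliseconds
samples_ms = [120, 135, 110, 150, 128, 140, 115, 132]
elapsed_s = 4.0  # assumed wall-clock duration of the test window

average = statistics.mean(samples_ms)
minimum, maximum = min(samples_ms), max(samples_ms)
std_dev = statistics.pstdev(samples_ms)   # JMeter reports population std dev
throughput = len(samples_ms) / elapsed_s  # requests per second

print(f"Avg={average:.1f} ms  Min={minimum}  Max={maximum}  "
      f"StdDev={std_dev:.1f}  Throughput={throughput:.1f}/s")
```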
Interpretation:
- Acceptable average → system performs well
- Low deviation → stable responses
- Low throughput → light load simulation
Test Run #2 – Failure (Invalid Endpoint)
Change the endpoint to:
/api/invalidEndpoint

What this tests:
- How the system handles incorrect requests
- Error response behaviour
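When analysing such a run, each sample's status code tells you which failure mode you hit. A small illustrative helper (not part of JMeter; the bucket names are my own) that classifies codes the way a load-test report might:

```python
def classify(status_code):
    """Bucket an HTTP status code for a load-test report."""
    if 200 <= status_code < 300:
        return "success"
    if 400 <= status_code < 500:
        return "client error"   # e.g. 404 from a wrong endpoint path
    if 500 <= status_code < 600:
        return "server error"   # e.g. 503 under overload
    return "other"

print(classify(404))  # a request to /api/invalidEndpoint typically lands here
```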
Test Run #3 – Assertion Failure
Add a Response Assertion:
- Expect: 200
- Force a failure by expecting the wrong value (e.g. 404)

What this tests:
- Validation logic
- Detection of incorrect responses
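The logic behind a Response Assertion on the response code is simple to sketch in Python (this mimics the idea, not JMeter's internal API): compare the actual code against the expected one and mark the sample as passed or failed.

```python
def response_assertion(actual_status, expected_status):
    """Mimics a Response Assertion on the response code."""
    return {
        "expected": expected_status,
        "actual": actual_status,
        "passed": actual_status == expected_status,
    }

# The API actually returns 200; asserting 404 forces a failure on purpose
result = response_assertion(actual_status=200, expected_status=404)
print(result["passed"])  # False -> JMeter would flag the sample as failed
```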
Test Run #4 – Overload Scenario
Change the number of threads to 500, the ramp-up period to 0.01 seconds, and the timer delay to 5000 ms.

What this tests:
- System behaviour under extreme load
- Stability when handling concurrent users
- How the API responds when overloaded (e.g., 503 errors)
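The overload effect can be sketched in plain Python: 500 threads all start at once (ramp-up effectively zero), while a semaphore stands in for the server's limited worker pool. The `CAPACITY` value is an assumption for illustration; real servers fail in messier ways, but the 200-vs-503 split is the pattern to look for.

```python
import threading
import time

THREADS = 500    # overload: 500 users with near-zero ramp-up
CAPACITY = 100   # assumed server-side concurrency limit (illustrative)
slots = threading.Semaphore(CAPACITY)
codes = []
codes_lock = threading.Lock()

def virtual_user():
    # The semaphore models the server's worker pool: no free slot -> 503
    if slots.acquire(blocking=False):
        time.sleep(0.2)  # hold the slot, like a request in flight
        slots.release()
        code = 200
    else:
        code = 503       # Service Unavailable: server overloaded
    with codes_lock:
        codes.append(code)

users = [threading.Thread(target=virtual_user) for _ in range(THREADS)]
for t in users:
    t.start()            # everyone arrives at once: ramp-up ~ 0
for t in users:
    t.join()

print("200s:", codes.count(200), "503s:", codes.count(503))
```

Because all 500 users arrive before the first 100 requests finish, the requests beyond capacity are rejected, which is exactly the behaviour an overload run is designed to expose.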
What This Demonstrates:
- Performance testing under load
- System behavior with multiple users
- Error handling (invalid requests)
- Validation of expected responses
- QA mindset beyond functional testing
“Performance testing is not just about speed; it’s about understanding how the system behaves when real users hit it.”