STE Microproject


E-commerce Website: Amazon GUI and Product Testing

Shopping Cart Test Cases

- Test adding a product to the shopping cart.
- Verify that the cart icon updates with the item count.
- Test removing a product from the cart.
- Verify that the cart total is correctly calculated.
- Test updating the quantity of items in the cart.
- Check for proper validation of quantity input.
- Verify that discounts and coupons are applied correctly.
- Test the "Continue Shopping" button in the cart.
- Verify that the "Proceed to Checkout" button works.
- Test cart persistence across user sessions.
- Check for a confirmation message after adding/removing items.
- Test cart functionality with both registered and guest users.
- Verify that the cart is emptied after the order is placed.
- Test cart interactions on mobile devices.
- Check for security measures to prevent cart manipulation.
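Several of these cases (cart total, coupons, quantity validation) can be expressed as executable checks against a minimal cart model. The sketch below is illustrative only and is not Amazon's actual logic; the `Cart` class and its methods are hypothetical stand-ins for the behavior under test.

```python
# Minimal, hypothetical cart model used to phrase the test cases above
# as executable assertions. Not production code.

class Cart:
    def __init__(self):
        self.items = {}  # product_id -> (unit_price, quantity)

    def add(self, product_id, unit_price, quantity=1):
        # Test case: proper validation of quantity input.
        if quantity < 1:
            raise ValueError("quantity must be a positive integer")
        _, existing = self.items.get(product_id, (unit_price, 0))
        self.items[product_id] = (unit_price, existing + quantity)

    def remove(self, product_id):
        self.items.pop(product_id, None)

    def total(self, coupon_percent=0):
        subtotal = sum(p * q for p, q in self.items.values())
        return round(subtotal * (1 - coupon_percent / 100), 2)

# Test case: cart total is correctly calculated.
cart = Cart()
cart.add("B001", 19.99, 2)
cart.add("B002", 5.00)
assert cart.total() == 44.98

# Test case: discounts and coupons are applied correctly (10% off).
assert cart.total(coupon_percent=10) == 40.48

# Test case: removing a product updates the total.
cart.remove("B002")
assert cart.total() == 39.98

# Test case: quantity validation rejects non-positive input.
try:
    cart.add("B003", 3.00, 0)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

Writing the expected totals as assertions keeps the pass/fail criterion unambiguous when the same cases are later automated.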

Rationale

The rapid growth of e-commerce has made software testing crucial for ensuring user satisfaction,
security, and seamless transactions. As platforms like Amazon handle vast amounts of data and
complex user interactions, rigorous testing is essential to identify defects early and maintain a
competitive edge. This project focuses on the graphical user interface (GUI) and core product
features to ensure a smooth user experience, which is vital for customer retention and brand
loyalty.

Literature Review

The existing body of knowledge on software testing emphasizes the importance of both manual
and automated testing in web applications. Studies indicate that manual testing is effective for
exploratory and usability testing, while automated testing excels in regression and repetitive
tasks. Key literature includes works by Mesbah and van Deursen (2009) on automated testing of
web applications, and Miller et al. (2018), who discuss the integration of performance testing in
CI/CD pipelines. Knowledge of testing frameworks (like Selenium and JUnit), performance
testing tools (like JMeter), and test-driven development (TDD) practices is essential to
effectively conduct this project. Familiarity with HTML, CSS, and JavaScript is also necessary
to navigate and manipulate the e-commerce site’s front-end. Furthermore, understanding user
experience principles and security testing methodologies is crucial, as they directly influence
user satisfaction and the protection of sensitive data. This comprehensive knowledge base will
guide the development of a robust testing strategy tailored for e-commerce environments.

Proposed Methodology

1. Requirement Analysis

- Stakeholder Interviews: Conduct interviews with stakeholders to gather requirements and understand critical features that need testing.
- Feature Identification: List key functionalities of the e-commerce platform, including user registration, product search, product detail display, cart management, payment processing, and order tracking.
- Risk Assessment: Analyze potential risks associated with each feature, focusing on usability, security, and performance.

2. Test Case Design

- Test Case Development: Write detailed test cases that cover functional and non-functional requirements. Each test case should include:
  - Test case ID
  - Test case name
  - Steps
  - Input
  - Expected results
  - Actual results
  - Status
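The fields listed above can be captured in a small record type so cases are stored and reported uniformly. This is a sketch, not a prescribed format; the field names mirror the list, and the example case is illustrative.

```python
# Hypothetical test-case record mirroring the fields listed above.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    name: str
    steps: list
    input_data: dict
    expected: str
    actual: str = ""
    status: str = "Not Run"  # Not Run / Pass / Fail

    def record(self, actual):
        """Record the observed result and derive the status."""
        self.actual = actual
        self.status = "Pass" if actual == self.expected else "Fail"

tc = TestCase(
    case_id="TC-CART-001",
    name="Add product to cart updates item count",
    steps=["Open product page", "Click 'Add to Cart'", "Read cart badge"],
    input_data={"product_id": "B001"},
    expected="cart badge shows 1",
)
tc.record("cart badge shows 1")
assert tc.status == "Pass"
```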

3. Environment Setup

- Tool Selection: Choose appropriate testing tools based on the project requirements. For instance:
  - Automated Testing: Selenium WebDriver for UI testing, JUnit/TestNG for test management.
  - Performance Testing: JMeter for load testing.
  - API Testing: Postman or similar tools for testing backend services.
- Test Environment Configuration: Set up a staging environment that mimics the production environment to ensure accurate testing results.
- Test Data Preparation: Create sample data for users, products, and transactions to use during testing. This may involve populating the database with mock data.
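Test data preparation can be sketched as a seeded generator so every run populates the staging database with the same mock users and products. The field names here are assumptions for illustration; real schemas would differ.

```python
# Deterministic mock-data sketch: random.seed keeps runs reproducible
# so automated tests always see the same staging data. Field names are
# illustrative assumptions, not a real schema.
import random

def make_test_data(n_users=3, n_products=5, seed=42):
    random.seed(seed)
    users = [
        {"user_id": f"U{i:03d}", "email": f"tester{i}@example.com"}
        for i in range(1, n_users + 1)
    ]
    products = [
        {
            "sku": f"P{i:03d}",
            "price": round(random.uniform(1, 100), 2),
            "stock": random.randint(0, 50),
        }
        for i in range(1, n_products + 1)
    ]
    return users, products

users, products = make_test_data()
assert len(users) == 3 and len(products) == 5
assert all(p["price"] > 0 for p in products)
```

Seeding matters: unseeded random data makes failures hard to reproduce across test runs.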

4. Manual Testing

- Exploratory Testing: Perform exploratory testing to uncover usability issues and unexpected behaviors. Focus on areas like navigation, responsiveness, and overall user experience.
- Functional Testing: Execute test cases manually, validating each feature against the specified requirements. Document any defects or issues encountered during this phase.

5. Automated Testing

- Script Development: Develop automated test scripts using Selenium for high-priority features such as:
  - User registration and login
  - Product search functionality
  - Adding/removing products from the cart
  - Checkout process
- Test Execution: Run automated tests and analyze the results. Ensure that the tests cover both positive and negative scenarios.
- Continuous Integration: Integrate automated tests into a CI/CD pipeline (if applicable) to enable regular execution of tests with every code change.

6. Performance Testing

- Load Testing: Use JMeter to simulate multiple users interacting with the website simultaneously. This will help assess how the system performs under load and identify potential bottlenecks.
- Stress Testing: Push the system beyond its limits to determine how it behaves under extreme conditions. Observe recovery time and stability.
- Analysis of Results: Collect metrics such as response times, throughput, and error rates. Analyze these metrics to identify areas for improvement.
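The analysis step can be sketched as a small function over raw `(latency_ms, ok)` samples, of the kind a JMeter results export provides, computing percentiles, throughput, and error rate. The sample data below is fabricated for illustration.

```python
# Sketch of the results-analysis step: derive response-time
# percentiles, throughput, and error rate from raw samples.
def analyze(samples, duration_s):
    latencies = sorted(ms for ms, ok in samples)

    def pct(p):
        # Simple nearest-rank percentile over the sorted latencies.
        return latencies[min(len(latencies) - 1, int(p / 100 * len(latencies)))]

    errors = sum(1 for _, ok in samples if not ok)
    return {
        "p50_ms": pct(50),
        "p95_ms": pct(95),
        "throughput_rps": len(samples) / duration_s,
        "error_rate": errors / len(samples),
    }

# Fabricated samples: (latency in ms, request succeeded?)
samples = [(120, True), (90, True), (300, False), (110, True), (95, True)]
report = analyze(samples, duration_s=10)
assert report["throughput_rps"] == 0.5
assert report["error_rate"] == 0.2
```

Percentiles (p95 rather than the mean) are the usual yardstick here, since a few slow outliers dominate perceived user experience.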

7. Reporting and Documentation

- Defect Tracking: Log defects in a tracking system (e.g., JIRA) with detailed descriptions and steps to reproduce. Classify defects based on severity and priority.
- Test Summary Report: Create a comprehensive report summarizing:
  - Test case execution results
  - Defects found and their status
  - Performance metrics
  - Recommendations for fixes and improvements
- Stakeholder Presentation: Present findings to stakeholders, emphasizing critical issues and potential impact on user experience.
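Defect classification and the summary roll-up can be sketched as follows; the defect entries are invented examples, and the severity labels are one common convention rather than a JIRA requirement.

```python
# Sketch of the reporting step: defects carry severity and priority so
# the summary can be grouped the way a tracker like JIRA would.
# Entries below are invented examples.
from collections import Counter

defects = [
    {"id": "BUG-1", "severity": "Critical", "priority": "P1",
     "summary": "Cart total ignores coupon on checkout"},
    {"id": "BUG-2", "severity": "Minor", "priority": "P3",
     "summary": "Cart badge misaligned on mobile"},
]

def summarize(defects):
    """Count defects per severity for the test summary report."""
    return dict(Counter(d["severity"] for d in defects))

summary = summarize(defects)
assert summary == {"Critical": 1, "Minor": 1}
```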

8. Review and Feedback

- Post-Testing Review: Conduct a retrospective meeting with the testing team to discuss what went well, what didn't, and how to improve the testing process in future projects.
- Incorporate Feedback: Use insights from the review to refine testing strategies and methodologies for future projects.

9. Future Work

- Continuous Testing: Suggest implementing a continuous testing strategy to ensure ongoing quality as new features are developed.
- Regular Updates: Recommend periodic reviews of test cases and methodologies to adapt to changes in the e-commerce landscape or user expectations.

Course Outcomes

1. Enhanced Understanding of Testing Methodologies
2. Practical Experience with Testing Tools
3. Skill Development in Test Case Design
4. Improved Problem-Solving Skills

Literature Review After Outcomes and Execution

The project highlighted manual and automated testing's importance in web applications.
Automated testing improves efficiency (Mesbah & van Deursen, 2009), while manual testing
focuses on usability. Integrating performance testing in CI/CD (Miller et al., 2018) ensures early
issue detection, emphasizing user experience and security in e-commerce.

Actual Procedure Followed

1. Requirement Analysis: Team gathered requirements (team member name).
2. Test Case Design: Developed test cases (team member name).
3. Environment Setup: Configured testing tools (team member name).
4. Manual Testing: Executed test cases (team member name).
5. Automated Testing: Created scripts (team member name).
6. Data Analysis: Compiled results and documented defects (all team members).

Resources/Materials Used

1. Selenium WebDriver
   - Specification: Automation testing tool for web applications.
   - Quantity: 1 installation (open-source).
2. JMeter
   - Specification: Performance testing tool for load testing.
   - Quantity: 1 installation (open-source).
3. JUnit/TestNG
   - Specification: Testing frameworks for unit and integration tests.
   - Quantity: 1 library (open-source).
4. Postman
   - Specification: API testing tool for testing RESTful services.
   - Quantity: 1 installation (free version).
5. Google Chrome
   - Specification: Web browser for manual testing.
   - Quantity: 1 installation (free).
