API Testing with C# and RestSharp

API testing is an essential part of the software development lifecycle, focusing on the communication and data exchange between different software systems. It verifies that APIs are functioning correctly and meeting performance, reliability, and security standards. Using C# with the RestSharp library simplifies the process of interacting with RESTful APIs by providing an easy-to-use interface for making HTTP requests.
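A minimal sketch of a GET request with RestSharp (v107+ API) looks like the following; the base URL and resource path are hypothetical placeholders, not a real service:

```csharp
using RestSharp;
using System;
using System.Threading.Tasks;

class ApiSmokeTest
{
    static async Task Main()
    {
        // Hypothetical endpoint -- substitute the API under test.
        var client = new RestClient("https://api.example.com");
        var request = new RestRequest("users/1");

        // ExecuteGetAsync does not throw on HTTP errors by default;
        // inspect the response object instead.
        RestResponse response = await client.ExecuteGetAsync(request);

        Console.WriteLine($"Status: {(int)response.StatusCode}");
        Console.WriteLine($"Body empty: {string.IsNullOrEmpty(response.Content)}");
    }
}
```

From here, assertions on the status code and on the deserialized response body are what turn a request into a test.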

AI-Driven Test Case Generation

Introduction
In today’s fast-paced software development environment, manual test case generation struggles to keep up with the ever-increasing complexity of systems, especially in Agile and DevOps-driven projects. AI-driven test case generation has emerged as a powerful solution to streamline and automate this process, leveraging artificial intelligence and machine learning (ML) to improve test accuracy, efficiency, and coverage.

This lecture will explore AI-driven test case generation, how it works, its advantages and challenges, and its application in modern testing environments.


What is AI-Driven Test Case Generation?

AI-driven test case generation automates the creation and optimization of test cases using AI techniques such as machine learning (ML) and natural language processing (NLP). By analyzing historical data, code structure, requirements, and user behavior, AI tools can produce test cases that cover critical functionalities, saving time and effort for testing teams.

Instead of manually writing test cases based on predefined requirements, AI-driven approaches can dynamically generate tests that adapt to the code, highlighting the most important areas to test, and identifying risks that human testers might overlook.


How Does AI-Driven Test Case Generation Work?

  1. Data Analysis
    AI-based tools use data from multiple sources, such as:
    • Historical test data: Past test cases, bug reports, and test execution logs.
    • User interactions: Analyzing how users interact with the system to detect potential problem areas.
    • Source code: Static code analysis to detect patterns and complexities.
    This data helps train the AI models to generate relevant test cases by learning patterns of common defects, usage scenarios, and code areas that need focus.
  2. Natural Language Processing (NLP)
    NLP plays a significant role in understanding natural language specifications, like user stories or business requirements. By analyzing these documents, AI can automatically convert requirements into test cases that align with the intended behavior of the software.
  3. Model-Based Testing
    AI tools can also create models that represent the system’s behavior or user flow. Based on these models, they can generate comprehensive test cases covering all possible scenarios, edge cases, and user paths.
  4. Risk-Based Test Case Generation
    AI can prioritize test cases based on risk analysis, such as:
    • Code complexity.
    • Areas prone to defects.
    • Recently modified code.
    • Critical functionalities or components.
    This approach ensures that high-risk areas are tested more thoroughly, improving the likelihood of catching defects early.
  5. Self-Updating Test Cases
    One of the biggest advantages of AI-driven tools is the ability to maintain and update test cases automatically. As the software evolves, the AI tools can detect changes in the code and automatically adapt test cases, making it easier to keep up with rapid development cycles.
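The risk-based prioritization described in step 4 can be sketched with a toy scoring function. The weights and module names below are invented for illustration; a real AI-driven tool would learn such weights from defect history rather than hard-code them:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class RiskBasedPrioritizer
{
    // Illustrative, hand-picked weights -- an ML model would learn these
    // from historical test and defect data.
    public static double Score(int complexity, int recentChanges, int pastDefects, bool isCritical)
    {
        return complexity * 1.0
             + recentChanges * 2.0
             + pastDefects * 3.0
             + (isCritical ? 10.0 : 0.0);
    }

    public static void Main()
    {
        var modules = new List<(string Name, double Risk)>
        {
            ("Checkout", Score(12, 5, 4, true)),  // complex, churning, defect-prone, critical
            ("HelpPage", Score(2, 0, 0, false)),  // simple and stable
            ("Search",   Score(8, 2, 1, false)),
        };

        // Test the riskiest modules first.
        foreach (var m in modules.OrderByDescending(m => m.Risk))
            Console.WriteLine($"{m.Name}: {m.Risk}");
    }
}
```

The ordering, not the absolute numbers, is what matters: high-risk areas get tested first and most thoroughly.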

Advantages of AI-Driven Test Case Generation

  1. Speed and Efficiency
    AI tools can generate test cases much faster than manual efforts, making the process more efficient. This speed is particularly valuable in Agile and DevOps environments where rapid iteration is common.
  2. Better Coverage
    AI ensures broader and more comprehensive test coverage by analyzing patterns that humans might miss. This leads to more thorough testing, particularly in complex systems with multiple variables.
  3. Cost-Effectiveness
    Automated test generation reduces the need for extensive human intervention, significantly lowering costs associated with manual test writing and maintenance.
  4. Scalability
    AI can easily scale to accommodate large and complex projects, generating thousands of test cases quickly without needing additional resources.
  5. Adaptability
    As code changes, AI-driven tools can adapt the test cases accordingly, maintaining relevance even in dynamic development environments. This is particularly beneficial in continuous integration and continuous delivery (CI/CD) pipelines.

Challenges of AI-Driven Test Case Generation

  1. Data Dependency
    AI tools require large volumes of high-quality data to be effective. Poor or insufficient data may result in suboptimal test cases.
  2. Complex Setup
    The initial setup of AI-driven systems can be complex, requiring knowledge of AI/ML algorithms, testing frameworks, and training data. It may take time and effort before the system becomes fully functional.
  3. Tool Expertise
    Not all testing teams are familiar with AI-based tools, and additional training may be required to effectively implement and maintain these systems.
  4. Trust in AI
    Some teams may be reluctant to trust AI-generated test cases over manual ones. Ensuring that AI-driven tests align with business requirements and actual software behavior can require oversight.

Use Cases

  • Regression Testing: AI tools can quickly generate test cases for regression testing, ensuring that recent changes haven’t introduced new bugs.
  • User Experience Testing: By analyzing user behavior data, AI can create test cases to mimic real-world user scenarios, improving UX testing.
  • Security Testing: AI can identify potential vulnerabilities in the code and generate relevant test cases, helping teams catch security issues early.

Conclusion

AI-driven test case generation is transforming how software testing is performed. By leveraging AI’s ability to analyze data, adapt to changes, and optimize testing efforts, teams can increase test efficiency, improve coverage, and reduce the time and cost of testing. However, while AI offers many advantages, it requires proper setup, high-quality data, and a clear strategy to maximize its benefits.

Incorporating AI in test generation is becoming essential in today’s fast-evolving software landscape, especially in Agile and DevOps workflows.

Using Paste Special in Visual Studio to Generate C# Classes from JSON

Visual Studio offers a feature called “Paste Special” that allows you to easily generate C# classes from JSON objects. This is particularly useful when working with web APIs or any JSON data, as it automates the creation of data models that match the JSON structure.

  1. Copy the JSON Object:
    • Ensure you have your JSON object copied to the clipboard. For example:
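The example JSON from the original post is not shown; any object will do. A simple illustrative one (hypothetical data) could be:

```json
{
  "id": 1,
  "name": "Jane Doe",
  "email": "jane@example.com",
  "isActive": true
}
```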
  2. Open Visual Studio:
    • Launch Visual Studio and open the project where you want to add the new classes.
  3. Add a New Class File:
    • In Solution Explorer, right-click the folder where you want to add the new class.
    • Select Add > New Item….
    • Choose Class, give it a meaningful name, then click Add.
  4. Use Paste Special:
    • Open the newly created class file (e.g., MyClass.cs).
    • Delete any default code in the class file.
    • Go to Edit > Paste Special > Paste JSON as Classes.
  5. Review the Generated Code:
    • Visual Studio automatically generates C# classes that correspond to the JSON structure. For the example JSON, it would generate something like this:
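The generated code from the original post is not reproduced here. As an illustration, pasting a simple hypothetical object such as { "id": 1, "name": "Jane Doe", "email": "jane@example.com", "isActive": true } produces roughly the following (Visual Studio names the top-level class Rootobject):

```csharp
// Illustrative output of Edit > Paste Special > Paste JSON as Classes
// for a hypothetical { id, name, email, isActive } JSON object.
public class Rootobject
{
    public int id { get; set; }
    public string name { get; set; }
    public string email { get; set; }
    public bool isActive { get; set; }
}
```

Note that the generated properties keep the JSON casing; you may want to rename them to PascalCase afterwards and map them back with serializer attributes.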

What Does an SQA Do in Agile? Key Contributions Explained

In the dynamic world of software development, Agile methodology has become the gold standard for delivering iterative, customer-centric products. While Agile emphasizes collaboration, flexibility, and continuous delivery, one critical element often underestimated is the role of Software Quality Assurance (SQA).

Far from being a traditional gatekeeper at the end of the development cycle, SQA in Agile is a continuous, collaborative force that ensures product quality from Day 1. Here’s how SQA contributes meaningfully across the Agile lifecycle:

1. Early Involvement in the Development Cycle

In Agile, quality is everyone’s responsibility—but SQA takes the lead from the start. Testers participate in sprint planning, backlog grooming, and story estimation, ensuring that acceptance criteria are testable and clear.

Benefits:

  • Uncover ambiguities before development starts
  • Promote shared understanding between devs, testers, and product owners
  • Improve test case alignment with business value

2. Continuous Testing

Agile favors rapid iteration. That’s where continuous testing comes in. SQA builds and maintains test automation frameworks that run across CI/CD pipelines, enabling fast feedback loops.

Key practices:

  • Automated unit, integration, and regression testing
  • Frequent smoke and sanity testing after each build
  • Shift-left testing to detect defects early
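As a sketch of how these practices plug into a pipeline, a minimal (entirely hypothetical) GitHub Actions job that runs the automated suite on every push might look like:

```yaml
# Hypothetical CI job -- names, versions, and the test command are illustrative.
name: ci-tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - name: Run automated tests
        run: dotnet test --logger trx
```

The fast feedback loop comes from the trigger: every push runs the full automated suite before a human ever looks at the change.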

3. Collaboration with Cross-Functional Teams

In Agile, SQA doesn’t work in silos. Instead, testers collaborate closely with developers, product owners, and UX designers in a shared sprint team. They raise concerns proactively, influence technical decisions, and advocate for testability.

Contributions include:

  • Defining “Done” criteria
  • Participating in daily stand-ups and retrospectives
  • Encouraging pair testing and TDD (Test-Driven Development)

4. Exploratory and Ad-hoc Testing

Beyond scripted tests, SQA performs exploratory testing to uncover edge cases and usability flaws that automated scripts might miss. Agile welcomes changing requirements, and exploratory testing is agile enough to keep up.

Impact:

  • Enhances test coverage in high-risk areas
  • Increases product usability and customer satisfaction
  • Catches “unknown unknowns” before production

5. Continuous Feedback and Improvement

Each Agile sprint ends with a retrospective. SQA contributes by analyzing defect trends, test effectiveness, and root causes. This feedback loop helps the team refine processes, tools, and test strategies over time.

Common SQA metrics in Agile retros:

  • Defect escape rate
  • Test coverage vs. risk
  • Automation ROI
  • Time-to-detect/time-to-fix bugs

6. Risk Mitigation and Prevention

SQA identifies risks early—not just technical bugs, but also requirements volatility, environmental instability, and integration complexity. They ensure mitigation strategies are in place before these risks snowball into blockers.

Tools and techniques:

  • Risk-based testing
  • Impact analysis
  • Root cause analysis using retrospectives

7. Championing Customer-Centric Quality

In Agile, success is measured by working software that delivers value. SQA bridges the gap between business and technology by:

  • Validating user stories against customer expectations
  • Ensuring user journeys are tested across platforms
  • Advocating for accessibility, localization, and performance standards

Final Thoughts

In Agile, Software Quality Assurance is not a phase—it’s a mindset.

SQA professionals play a strategic role in maintaining speed without sacrificing quality. By embedding testing within sprints, collaborating continuously, and leveraging automation and feedback, SQA helps Agile teams build better, faster, and smarter software.

In the end, it’s not just about catching bugs—it’s about delivering confidence with every sprint.

Comprehensive Guide to API Testing: Types, Tools & Techniques

In today’s interconnected software ecosystems, APIs (Application Programming Interfaces) are the backbone of communication between services. Ensuring their performance, security, and functionality through API testing is critical. Whether you’re a developer, QA engineer, or tech enthusiast, understanding the spectrum of API testing techniques helps deliver robust and reliable software.

This guide explores the 15 essential types of API testing, complete with descriptions, workflows, and their unique purpose.

🖥️ 1. UI Testing

Purpose: To verify if the user interface that interacts with the API works correctly.

  • Tests the visible part of the application.
  • Ensures seamless user interaction with underlying APIs.
  • Usually paired with tools like Selenium or Cypress.

⚙️ 2. Functional Testing

Purpose: To ensure each API function performs as expected.

  • Follows functional specifications.
  • Compares input data with expected output.
  • Validates business logic accuracy.

Best For: Verifying correctness of API responses.


📈 3. Load Testing

Purpose: To check API behavior under normal or peak load conditions.

  • Simulates user load with tools like JMeter.
  • Measures response time and system throughput.
  • Detects bottlenecks before production.
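As an example, a typical non-GUI JMeter run (assuming a test plan saved as test_plan.jmx) looks like this:

```shell
# Run JMeter headless: -n non-GUI mode, -t test plan, -l results log,
# -e/-o generate an HTML dashboard into the report/ folder.
jmeter -n -t test_plan.jmx -l results.jtl -e -o report/
```

Non-GUI mode is the recommended way to generate load, since the JMeter GUI itself consumes resources that would skew the measurements.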

🔥 4. Stress Testing

Purpose: To evaluate API stability under extreme or high load conditions.

  • Pushes the system beyond its limits.
  • Identifies how gracefully the system fails or recovers.
  • Essential for scalability and crash handling.

🚬 5. Smoke Testing

Purpose: To check if the basic functionality of the API is working.

  • A “quick check” with minimal test cases.
  • Ensures no major failures before deeper testing begins.
  • Answers: “Does it break immediately?”

🔗 6. Integration Testing

Purpose: To test data flow between multiple modules or services using APIs.

  • Ensures all integrated parts of the system work together.
  • Follows a test plan and compares expected vs actual results.

✅ 7. Validation Testing

Purpose: To ensure the final product meets business and functional requirements.

  • Validates input/output according to predefined standards.
  • Evaluates if the API delivers the right value to users.

🤯 8. Fuzz Testing

Purpose: To test the API’s resilience to unexpected or invalid data.

  • Sends random, malformed, or unexpected input.
  • Detects security flaws, crashes, or unhandled exceptions.
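The core idea can be shown with a toy harness. The validator below is a hypothetical stand-in for an API's parameter handling; the fuzzing part only asserts that no input, however malformed, makes it crash:

```csharp
using System;

public class FuzzSketch
{
    // Toy validator standing in for an API parameter handler (hypothetical).
    public static bool TryParsePort(string input, out int port)
    {
        return int.TryParse(input, out port) && port >= 1 && port <= 65535;
    }

    public static void Main()
    {
        // Seeded so the "random" inputs are reproducible run to run.
        var rng = new Random(42);
        for (int i = 0; i < 100; i++)
        {
            int len = rng.Next(0, 8);
            var chars = new char[len];
            for (int j = 0; j < len; j++) chars[j] = (char)rng.Next(32, 127);

            // A fuzz harness checks that the code never throws,
            // whatever the input looks like.
            TryParsePort(new string(chars), out _);
        }
        Console.WriteLine("no crashes on 100 random inputs");
    }
}
```

Real fuzzers (e.g. coverage-guided ones) mutate inputs far more cleverly, but the contract is the same: unexpected input must produce a handled rejection, never a crash.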

🔐 9. Security Testing

Purpose: To identify vulnerabilities and protect data integrity.

  • Checks authorization, authentication, encryption.
  • Verifies if security test specifications are met.
  • Tools: OWASP ZAP, Postman Security, Burp Suite.

🔁 10. Regression Testing

Purpose: To confirm recent changes didn’t break existing features.

  • Compares results between new and old app versions.
  • Ensures updates or bug fixes haven’t caused regressions.

🛠️ 11. Error Detection / Runtime Testing

Purpose: To identify runtime issues, errors, or performance glitches.

  • Monitors real-time dashboards for error rates, crashes, and logs.
  • Ensures stability during execution with valid input.

🌍 12. Interoperability Testing

Purpose: To verify if APIs work well with diverse systems or environments.

  • Checks compatibility across different OS, platforms, and third-party apps.
  • Critical for cross-platform applications and third-party integrations.

🔁 13. Smoke Testing (Revisited in Context)

Although already mentioned above, this type plays a vital part in every build/deployment pipeline, especially during CI/CD where a quick validation saves time.


🔬 14. UI + API Correlation

Even though UI testing is typically separate, modern testing often involves correlating UI interactions with API responses to ensure true end-to-end validation.


🧪 15. Combining Tests for Full Coverage

API testing should not be siloed. Real success lies in orchestrating various test types together—e.g., running regression tests after load tests, or security checks during integration cycles.


🧰 Final Thoughts: Tools & Tips

Popular Tools:

  • 🧪 Postman (Manual Testing)
  • ⚡ REST Assured (Automation)
  • 📊 JMeter (Load/Stress)
  • 🔐 OWASP ZAP / Burp Suite (Security)
  • 📈 Newman (Postman CLI)

Conclusion:

API testing is more than just calling endpoints. It’s a multifaceted approach involving performance, security, reliability, and usability. Mastering these 15 types of API tests empowers you to build future-ready applications that scale confidently and perform under pressure.

How to Input a Value into an International Phone Number Text Box in Selenium

Below is the text box and the corresponding HTML:

If you use sendKeys() directly, it sometimes does not work:

driver.findElement(By.name("mainphone")).sendKeys("(02)2222-2222");
driver.findElement(By.id("mobilephone")).sendKeys("05-5555-5555");

If sendKeys() does not work on its own, there are two common workarounds: click inside the text field first, or set the value through JavaScript. The first workaround is to call click() before sendKeys(), i.e.:

driver.findElement(By.name("mainphone")).click();
driver.findElement(By.name("mainphone")).sendKeys("(02)2222-2222");   
driver.findElement(By.id("mobilephone")).click();
driver.findElement(By.id("mobilephone")).sendKeys("05-5555-5555"); 

Open chrome mobile emulator with selenium c#

Hi guys, in this post I am going to run a mobile emulator test with Selenium and C# in Visual Studio.

Import

using System;
using System.Threading;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;
using OpenQA.Selenium.Chrome;

Driver Utils Class

First of all, declare the WebDriver field, which will be used to open the specified browser.

namespace SeleniumAutomation
{
    [TestFixture]   // NUnit attribute; the original [TestClass] is MSTest and would not compile here
    public class Setup
    {
        IWebDriver webDriver;
    }
}

Open Chrome

[Test]
public void Open_Browser()
{
    webDriver = new ChromeDriver();
}

Open Chrome Mobile Emulator

[Test]
public void Mobile_Emulator_Browser()
{
    ChromeOptions chromeCapabilities = new ChromeOptions();
    chromeCapabilities.EnableMobileEmulation("Pixel 5");
    webDriver = new ChromeDriver(chromeCapabilities);
}

I hope this helps you run Chrome in the mobile emulator.

Technical & Soft Skills Every QA Tester Should Master

🧠 Top Skills That Are Important for a Test Professional in 2025

Software testing has transformed rapidly with the rise of automation, agile, AI, and complex digital ecosystems. A test professional today is no longer limited to clicking through test cases—they are expected to be problem solvers, communicators, coders, and sometimes even data analysts.

The infographic below shows a breakdown of the most critical testing skills, based on industry-wide feedback.


📌 Why These Skills Matter

With constant software releases, test professionals need to ensure quality without slowing down development. That means:

  • Faster feedback loops
  • Deeper collaboration with developers
  • Testing beyond just functionality (security, usability, performance)
  • Understanding user behavior and business context

These skills help testers not just find bugs—but prevent them.

📊 Top Skills Breakdown

Here’s a detailed look at what each skill means and why it matters:

1. 🗣️ Communication Skills (75% Very Important)

📍 Why it’s critical:
Testers must clearly document bugs, write test cases, and explain defects to developers and stakeholders. Good communication prevents misunderstandings and keeps the team aligned.

2. ⚙️ Automation & Scripting (65%)

📍 Why it’s critical:
Manual testing can’t scale. Knowledge of automation frameworks like Selenium, Cypress, or Playwright is essential. Scripting in languages like Python, Java, or JavaScript can save hours of repetitive work.

3. 📚 General Testing Methodologies (62%)

📍 Why it’s critical:
Understanding black-box, white-box, regression, smoke, and exploratory testing helps testers select the right test at the right time.

4. 🌐 Web Technologies (60%)

📍 Why it’s critical:
Modern testers must understand HTML, CSS, and JavaScript to validate front-end issues or debug problems in browser-based apps.

5. 🔌 API Testing (55%)

📍 Why it’s critical:
More apps rely on microservices and APIs. Testers need tools like Postman, REST Assured, or Karate to test APIs for performance, reliability, and correctness.

6. 🛡️ Security Testing (48%)

📍 Why it’s critical:
Testers play a role in identifying vulnerabilities—such as XSS, injection attacks, or broken authentication—before attackers do.

7. 🔁 Agile Methodologies (45%)

📍 Why it’s critical:
Testers in Agile teams need to test early and often, work in sprints, and use practices like TDD (Test-Driven Development) and CI/CD pipelines.

8. 📈 Performance & Load Testing (42%)

📍 Why it’s critical:
Users expect fast apps. Testers use tools like JMeter, Gatling, or k6 to simulate thousands of users and ensure the app can handle peak loads.

9. 📱 Mobile Technologies (39%)

📍 Why it’s critical:
Testing on Android and iOS requires mobile-specific knowledge (e.g., gestures, screen sizes, emulators, Appium automation).

10. 🧠 Customer-Facing & Business Skills

📍 Why it’s critical:
Testers with empathy and business understanding catch real-world bugs—issues a user would actually care about.

🧩 Other Valuable (Emerging) Skills:

  • Data Analysis: understanding logs, charts, and trends for performance and issue tracking
  • Cloud Testing: testing apps deployed in AWS, Azure, GCP
  • Microservices Testing: ensuring independent services communicate reliably
  • Big Data Testing: validating high-volume, high-velocity data pipelines
  • IoT Testing: testing smart devices and networks
  • AI/ML Testing: verifying predictions, fairness, and performance of AI models
  • Operations Management: coordinating test environments, deployments, and reporting

Final Tips for Testers to Stay Relevant:

  1. 🧑‍💻 Practice automation daily – even if just small scripts
  2. 📚 Read about new tools and trends in QA weekly
  3. 🧪 Participate in real-world testing challenges (e.g., Bug Bashes, uTest)
  4. 🤝 Collaborate with developers and product owners
  5. 📊 Learn to report insights, not just bugs

How to Use ExtentReports in .NET Selenium Automation Projects

Reports play a fundamental role in testing. Real-time reports let testers follow a test suite execution as it happens, and after the run they show the ratio of passed to failed tests; the report is also the main record of the test execution results.

Everyone wants to see a detailed description of the test results, don't you? Here is the solution: let us see how these reports can be produced in Selenium C# with the NUnit framework.

To get detailed test execution results as HTML reports, we rely on a third-party library called Extent Reports. These reports provide a clear narration of the test execution results, summarized in a pie chart.

How to Reference Extent Reports in MS Visual Studio

Extent Reports can be directly referenced via NuGet Gallery:

Step 1) Project> Manage NuGet Packages

Step 2) In the next window

  1. Search for ExtentReports
  2. Select the search result
  3. Click Install

Step 3) Install the Selenium support package from NuGet, if it is not already referenced

Step 4) Click 'I Accept'

Step 5) Create a new C# class with the below code for Extent Reports.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;
using OpenQA.Selenium.Chrome;

using NUnit.Framework;
using NUnit.Framework.Interfaces;
using NUnit;

using AventStack.ExtentReports.Reporter;
using AventStack.ExtentReports;
using System.IO;

namespace RnD
{
    [TestFixture]
    public class TestDemo1
    {
        public IWebDriver driver;

        public static ExtentTest test;
        public static ExtentReports extent;

        [SetUp]
        public void Initialize()
        {
            driver = new ChromeDriver();
        }


        [OneTimeSetUp]
        public void ExtentStart()
        {
            
            extent = new ExtentReports();
            var htmlreporter = new ExtentHtmlReporter(@"D:\ReportResults\Report" + DateTime.Now.ToString("_MMddyyyy_hhmmtt") + ".html");
            extent.AttachReporter(htmlreporter);

        }



        [Test]
        public void BrowserTest()
        {
            test = extent.CreateTest("T001").Info("Login Test");

            driver.Manage().Window.Maximize();
            driver.Navigate().GoToUrl("http://testing-ground.scraping.pro/login");
            test.Log(Status.Info, "Go to URL");

            //provide username
            driver.FindElement(By.Id("usr")).SendKeys("admin");
            //provide password
            driver.FindElement(By.Id("pwd")).SendKeys("12345");

            try
            {
                WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(1));
                wait.Until(ExpectedConditions.ElementIsVisible(By.XPath("//h3[contains(.,'WELCOME :)')]")));
                // Test result
                test.Log(Status.Pass, "Test Pass");
            }
            catch (Exception)
            {
                test.Log(Status.Fail, "Test Fail");
                throw;
            }
        }

        [TearDown]
        public void closeBrowser()
        {
            driver.Close();
        }

        [OneTimeTearDown]
        public void ExtentClose()
        {
            extent.Flush();
        }
    }
}

Post running test method, the test execution report looks as shown below:

How to Set Up Selenium WebDriver with C#

Set Up Visual Studio with Selenium WebDriver:

Create a new project in Visual Studio:

Step 1) In the File Menu, Click New > Project

Step 2) In the next screen

  1. Select the option ‘Visual C#’
  2. Click on Console App (.Net Framework)
  3. Enter name as “RnD”
  4. Click OK

Step 3) The below screen will be displayed once the project is successfully created.

Set up Visual Studio with Selenium WebDriver:

Step 1) Navigate to Project-> Manage NuGet Packages

Step 2) In the next screen

  1. Search for Selenium on the resultant screen
  2. Select the first search result
  3. Click on ‘Install’

Step 3) The below message will be displayed once the package is successfully installed

Steps to install NUnit Framework:

Step 1) Navigate to Project-> Manage NuGet Packages

Step 2) In the next window

  1. Search for NUnit
  2. Select the search result
  3. Click Install

Step 3) The below message will appear once the installation is complete.

Steps to download NUnit Test Adapter

Please note that the steps below install the adapter for 32-bit machines. For a 64-bit machine, download the 'NUnit3 Test Adapter' instead, following the same process.

Step 1) Navigate to Project-> Manage NuGet Packages

Step 2) In the next window

  1. Search NUnitTestAdapter
  2. Click Search Result
  3. Click Install

Step 3) Once install is done you will see the following message

Steps to download Chrome Driver

Step 1) Navigate to Project-> Manage NuGet Packages

Step 2) In the next window

  1. Search for ChromeDriver
  2. Select the search result
  3. Click Install

Step 3) The system may ask for permission. Click on 'Yes to All'

Step 4) The below message will appear once the installation is complete.

Selenium and NUnit framework:

Selenium with NUnit framework allows differentiating between various test classes. NUnit also allows using annotations such as SetUp, Test, and TearDown to perform actions before and after running the test.

NUnit framework can be integrated with Selenium by creating a NUnit test class and running the test class using NUnit framework.

The below are the steps needed to create and run a test class using NUnit framework.

Steps to create a NUnit Test class in Selenium:

Step 1) In the Solution Explorer, right-click on the project > Add > Class

Step 2) Class creation window will appear

  1. Provide a name to the class
  2. Click on Add button

Step 3) The below screen will appear.

Step 4) Add the following code to the created class. Please note that you need to specify the location of the 'chromedriver.exe' file during Chrome driver initialization.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;
using OpenQA.Selenium.Chrome;


using NUnit.Framework;
using NUnit.Framework.Interfaces;
using NUnit;

namespace RnD
{
    [TestFixture]
    public class TestDemo1
    {
        public IWebDriver driver;

        [SetUp]
        public void Initialize()
        {
            driver = new ChromeDriver();
        }

        [Test]
        public void BrowserTest()
        {
            driver.Manage().Window.Maximize();
            driver.Navigate().GoToUrl("https://www.google.com/");
        }

        [TearDown]
        public void closeBrowser()
        {
            driver.Close();
        }
    }
}

Step 5) Click on 'Build' -> 'Build Solution', or press 'Ctrl + Shift + B'

Step 6) Once the build is successful, we need to open the Test Explorer window. Click on Test -> Windows -> Test Explorer

Step 7) The Test Explorer window opens with the list of available tests. Right-click on Test Explorer and select Run Selected Tests

Step 8) Selenium should open the browser with the specified URL and then close it. The test case status will change to 'Pass' in the Test Explorer window.