Performance Engineer
Ref No.: 18-11299
Location: Jersey City, NJ 07311
Title: Performance Engineer
Duration: 6+ months (project)

Job Description:
8-10+ years of experience is required.

Step 1 - Data Gathering: Meet with developers, architects, and business analysts to review and gather information such as architectural design, user functionalities, and batch dependencies.
  • Determine the flow of transactions to capture as part of performance testing.
Step 2 - Design: Create a comprehensive performance test plan and/or test strategy document.
  • Assist the project team in the creation and review of service level agreements (SLAs) for various functionalities.
  • Set up production monitoring criteria.
  • Determine acceptance criteria for completion of the Capacity Planning and Performance Testing phases.
  • Determine monitoring requirements and set up monitoring using in-house tools such as OPNET, Prognosis, TeamQuest, etc.
Step 3 - Coding: Create performance scripts in VuGen using the HTTP, Web Services, Citrix, or other protocols (as applicable) to emulate the application.
  • All scripts are to be appropriately correlated and parameterized, with checkpoints, think time, etc. added.
  • Build custom code in the C programming language to make scripts robust and dynamic.
  • Review scripts with Performance Team and/or business team.
  • Determine and validate system functions and user patterns.
  • Build usage models based on these inputs.
  • Set up performance-test users and performance-test data.
  • Validate and configure connectivity and functionality.
  • Run baseline or benchmark tests under light load to validate the correctness of the automated test scripts, identify obvious performance issues early in the testing cycle, and provide a basis of comparison for future tests.
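The scripting tasks in Step 3 can be sketched as a minimal VuGen action. This is an illustrative skeleton only: it runs solely inside LoadRunner/VuGen, and the endpoint, transaction name, correlation boundaries, and `{Username}`/`{Password}` parameters are hypothetical; `web_reg_save_param`, `web_reg_find`, `web_submit_data`, `lr_start_transaction`, and `lr_think_time` are standard LoadRunner API calls.

```c
// Illustrative VuGen Action() sketch -- runs only inside LoadRunner/VuGen.
// The URL, parameter names, and correlation boundaries are hypothetical.
Action()
{
    // Correlation: capture a dynamic session token from the next response.
    web_reg_save_param("SessionToken", "LB=token=\"", "RB=\"", LAST);

    // Checkpoint: fail the step if the welcome text is missing.
    web_reg_find("Text=Welcome", LAST);

    lr_start_transaction("Login");

    // Parameterization: {Username}/{Password} come from a VuGen data file.
    web_submit_data("login",
        "Action=http://example.internal/login",
        "Method=POST",
        ITEMDATA,
        "Name=user", "Value={Username}", ENDITEM,
        "Name=pass", "Value={Password}", ENDITEM,
        LAST);

    lr_end_transaction("Login", LR_AUTO);

    // Think time: emulate a real user pausing between steps.
    lr_think_time(5);

    return 0;
}
```

Registration calls (`web_reg_save_param`, `web_reg_find`) must precede the request they apply to, which is why they appear before `web_submit_data`.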
Step 4 - Testing & Analysis: Run performance tests targeted at the applications either from an external cloud location or from inside the BTMU internal network.
  • Run tests directly against individual components to determine their performance and capacity requirements by eliminating network latency and external components.
  • Run scheduled tests such as user experience tests, endurance tests, stress tests, etc.
  • Performance-test execution involves running every test script and collecting results for all KPIs and metrics in the test plan. Analyze the results after each test to determine whether acceptance criteria are met and whether tuning is required.
  • This analysis may or may not include formal reporting.
  • If necessary, perform ad-hoc testing focused on a particular component for troubleshooting or tuning purposes.
  • Shift focus to tuning once performance criteria have been met but the team wants to reduce resource consumption in order to increase platform headroom, decrease the amount of hardware needed, and/or further improve system performance.
  • Use in-house tools to drill down to the root cause and recommend changes to the system and/or application.
  • Re-run tests after every tuning change to determine the impact on performance.
  • Publish an informal report after each interim performance test.
  • Publish a formal final report after all performance criteria have been met.