Senior Enterprise/Big Data Infrastructure Engineer
Ref No.: 18-00269
Location: Dallas, Texas
Start Date: 03/26/2018
Summary/Focus
● Data Infrastructure design and management
Description
● Builds, manages, maintains, and monitors the infrastructure that hosts what the Enterprise/Big Data
Architect has designed. Frequently involved in the design and deployment of big data solutions
because of hands-on experience with specific environments and infrastructure components.
Responsibilities
● Design, implement, and manage the infrastructure that hosts the data
● Benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them (a
benchmarking sketch follows this list)
● Articulate and analyze pros and cons of various technologies and platforms
● Document use cases, solutions and recommendations
● Provide implementation recommendations
● Assist with system integrations
● Infrastructure implementation, configuration and management
● Perform all phases of software engineering, including requirements analysis, application design, code
development, and testing
● Design reusable components, frameworks and libraries
● Work closely with architecture groups to drive solutions
● Participate in an Agile/Scrum methodology to deliver high-quality software releases through
two-week sprints
● Design and develop innovative solutions to meet the needs of the business
● Review code and provide feedback relative to best practices and improving performance
● Troubleshoot production support issues post-deployment and provide solutions as required
● Mentor and guide other software engineers within the team
● Participate in on-call and escalation rotations for after-hours support
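To make the benchmarking responsibility above concrete, here is a minimal sketch, assuming plain Python with no external dependencies; the workload function is a hypothetical stand-in for a real query or batch step, not something named in this posting.

```python
import math
import statistics
import time

def workload():
    # Hypothetical stand-in for a database query or batch job step.
    return sum(i * i for i in range(100_000))

def benchmark(fn, runs=20):
    """Time fn over several runs; report latency percentiles in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples) * 1000,
        # Nearest-rank 95th percentile.
        "p95_ms": samples[math.ceil(len(samples) * 0.95) - 1] * 1000,
        "max_ms": samples[-1] * 1000,
    }

if __name__ == "__main__":
    print(benchmark(workload))
```

Comparing these percentiles before and after a configuration change is one simple way to confirm that a proposed bottleneck fix actually helps.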
Technical Skills
● Big Data, Enterprise Data, Distributed Data
● Cloud, local and hosted infrastructure
● SQL/NoSQL experience with databases such as Microsoft SQL Server, DB2, Teradata, Oracle, Aurora,
PostgreSQL, MySQL, Cassandra, MongoDB, etc.
● A strong understanding of the software development life cycle and of methodologies such as Agile and Waterfall
● Experience with object-oriented/functional scripting languages: Python, Java, C#, Scala, etc.
● Strong understanding of big data technologies such as Hadoop, MapReduce, Spark, YARN, Hive, Pig,
Presto, Storm, etc. (a minimal PySpark sketch follows this list)
● Strong experience with ETL/ELT tools (Informatica, Talend, Pentaho, ODI)
● Strong knowledge of APIs (specifically REST APIs), SDKs, and CLI tools
● AWS knowledge as applied to big data applications
● Data security/privacy, including PCI compliance
● Troubleshooting and resolving performance issues
● Thorough understanding of technological infrastructure and how it relates to projects
● Experience working in a DevOps model using Agile
● Experience with CI/CD with Jenkins pipelines, OpenShift, Gradle, GitHub and Docker
● Test Automation
● DevOps knowledge is a nice-to-have
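As one concrete illustration of the Spark and ETL/ELT items above: a minimal sketch, assuming a PySpark environment. The S3 paths, column names, and app name are hypothetical, not taken from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical app name; on a cluster this job would typically be
# launched with spark-submit and scheduled onto YARN.
spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

# Read raw events (hypothetical path and schema), roll them up per day
# and event type, and write the result back out as Parquet.
events = spark.read.parquet("s3://example-bucket/raw/events/")
rollup = (
    events.groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)
rollup.write.mode("overwrite").parquet("s3://example-bucket/curated/event_rollup/")

spark.stop()
```

The same read-transform-write shape is what most of the ETL/ELT tools listed above generate or orchestrate under the hood.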
Other Skills
● Both creative and analytic approaches in a problem-solving environment
● Excellent written and verbal communication skills
● Ability to communicate with both technical and non-technical collaborators
● Excellent teamwork and collaboration skills
Experience and Education
● 6+ years of experience in large development initiatives involving big data
● 6+ years of experience in developing high volume database applications
● 4+ years of experience with complex shell scripting using Linux
● 2+ years of experience in developing distributed computing applications using solutions such
as Hadoop, HBase, Hive, Java/MapReduce, Spark, Scala, Storm, Kafka, Flume, Sqoop, and Pig
● 4+ years of agile experience
● 1+ years of cloud experience (AWS, Azure, containers, etc.)
● Bachelor's or Master's degree in Computer Science or Software Engineering
● 4+ years of experience with ETL/ELT tools
Notes
● This role has significant skills overlap with the data engineer role below.
● These two roles could be combined, or split differently along various skill sets