Senior Enterprise/Big Data Engineer
Ref No.: 18-00270
Location: Dallas, Texas
Start Date: 03/26/2018
Summary/Focus
● Data software development, interfacing and configuration
Description
● Builds the software and configuration that implement what the Enterprise/Big Data Architect has
designed. This role develops, maintains, tests and evaluates big data solutions within the organization,
and is frequently involved in the design of those solutions because of its hands-on experience with
specific tools, frameworks and languages.
Responsibilities
● Data software development
● Data visualization and reporting software development
● Data transport development and configuration
● Data purge and archive
● Data systems integration and API design and development
● Developing, testing and reviewing code and providing feedback
● Establishing & Implementing Big Data best practices
● Reviewing requirements and making sure they are functional and usable
● Perform all phases of software engineering including requirements analysis, application design, code
development and testing
● Design reusable components, frameworks and libraries
● Work very closely with architecture groups and drive solutions
● Participate in an Agile/Scrum methodology to deliver high-quality software releases every two weeks
through sprints
● Design and develop innovative solutions to meet the needs of the business
● Review code and provide feedback relative to best practices and improving performance
● Troubleshoot production support issues post-deployment and come up with solutions as required
● Mentor and guide other software engineers within the team
● Data transformation development (see the PySpark sketch after this list)
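To give a concrete flavor of the data transformation work listed above, here is a minimal PySpark sketch that rolls raw order events up into daily revenue per customer. The paths, schema, and job name are hypothetical placeholders rather than part of any actual codebase for this role, and it assumes a running Spark environment with access to the source files.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical job: paths, schema, and names are placeholders.
spark = SparkSession.builder.appName("daily-revenue-rollup").getOrCreate()

# Read raw order events (assumed columns: order_id, customer_id,
# amount, status, event_ts) from a Parquet landing zone.
orders = spark.read.parquet("s3://example-landing/orders/")

# Transform: keep completed orders, derive the event date, and
# roll revenue up per customer per day.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("customer_id", "event_date")
    .agg(F.sum("amount").alias("daily_revenue"))
)

# Write the result partitioned by date for downstream reporting
# and visualization tools.
(daily_revenue.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-warehouse/daily_revenue/"))

spark.stop()
```

A transformation like this would typically be scheduled and feed the reporting and visualization development also listed under Responsibilities.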
Technical Skills
● Big Data, Enterprise Data, Distributed Data
● SQL/NoSQL experience with databases such as Microsoft SQL Server, DB2, Teradata, Oracle, Aurora,
PostgreSQL, MySQL, Cassandra, MongoDB, etc.
● A strong understanding of the software development life cycle and of methodologies such as Agile and
Waterfall
● Experience with object-oriented and functional scripting languages: Python, Java, C#, Scala, etc.
● Strong understanding of big data technologies such as Hadoop, MapReduce, Spark, YARN, Hive, Pig,
Presto, Storm, etc.
● Strong experience with ETL/ELT tools (Informatica, Talend, Pentaho, ODI)
● Strong knowledge of APIs, specifically REST APIs, as well as SDKs and CLI tools (see the ingestion sketch after this list)
● AWS knowledge as applied to big data applications
● Data security/privacy, including PCI
● Troubleshooting and resolving performance issues
● Thorough understanding of technological infrastructure and how it relates to projects
● Experience working in a DevOps model using Agile
● Experience with CI/CD with Jenkins pipelines, OpenShift, Gradle, GitHub and Docker
● Test Automation
● DevOps knowledge is a nice-to-have
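As an illustration of the REST API and AWS items above, the following sketch pages through a hypothetical REST endpoint and stages each page of records in S3. The endpoint, bucket, and key layout are placeholder assumptions, and it presumes the requests and boto3 libraries with valid AWS credentials in the environment.

```python
import json

import boto3
import requests

# Hypothetical endpoint, bucket, and key layout; replace with real values.
API_URL = "https://api.example.com/v1/orders"
BUCKET = "example-landing"

s3 = boto3.client("s3")


def ingest(page_size: int = 500) -> None:
    """Page through the REST endpoint and stage each page in S3."""
    page = 1
    while True:
        resp = requests.get(
            API_URL,
            params={"page": page, "page_size": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        records = resp.json().get("results", [])
        if not records:
            break  # no more pages

        # Stage the raw page as JSON Lines, keyed by page number.
        body = "\n".join(json.dumps(r) for r in records)
        s3.put_object(
            Bucket=BUCKET,
            Key=f"orders/raw/page={page:05d}.jsonl",
            Body=body.encode("utf-8"),
        )
        page += 1


if __name__ == "__main__":
    ingest()
```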
Other Skills
● A blend of creative and analytical approaches in a problem-solving environment
● Excellent written and verbal communication skills
● Ability to communicate with both technical and non-technical collaborators
● Excellent teamwork and collaboration skills
Experience and Education
● 6+ years of experience in object-oriented programming
● 5+ years of experience in large development initiatives involving big data
● 5+ years of experience in developing high volume database applications
● 3+ years of experience with complex shell scripting using Linux
● 2+ years of experience in developing distributed computing applications using solutions such
as Hadoop, HBase, Hive, Java/MapReduce, Spark, Scala, Storm, Kafka, Flume, Sqoop and Pig
● 2+ years of agile experience
● 2 years of cloud experience (AWS, Azure, containers, etc.)
● Bachelor's or Master's degree in Computer Science or Software Engineering
Notes
● This role has significant skills overlap with the data infrastructure engineer role above.
● There is the possibility of combining these two roles, or splitting them differently along various skill sets.