Stratpoint’s Big Data Team is all about empowering the ‘doers’ in companies to make smarter decisions with their data. As an ETL Software Engineer, your primary role will be to design and build data pipelines, focusing on solutions built with ELK, Kafka, Spark, Talend, and Tableau. In this role, you will also be exposed to AWS, Azure, and GCP, as well as Cloudera, so the ideal candidate has a strong foundation in big data technology and a passion for learning new technologies.
Main duties and responsibilities:
● Design and develop applications utilizing the Spark and Hadoop Frameworks
● Read, extract, transform, stage, and load data to multiple targets, including databases, data lakes, and data warehouses, using any of the following: Talend, Informatica, DataStage, SSIS, etc.
● Migrate existing data processing from standalone or legacy scripts to Spark framework processing
● Work with and manage gigabytes to terabytes of data, with a solid understanding of the challenges of transforming and enriching datasets at that scale
Qualifications:
● Bachelor’s degree in Computer Science / IT / Computing / Business or equivalent
● At least 3 years of experience in Analytical SQL programming
● At least 3 years of experience in Java/Scala programming
● At least 1 year of ETL experience (any tool), including at least one Change Data Capture (CDC) implementation
● At least 1 year of experience with Linux
● At least 1 year of experience with Hadoop and the Big Data ecosystem
● Experience in any cloud environment (GCP/AWS/Azure)