AWS Data Engineer

Qualifications:

– Bachelor’s Degree or Advanced Diploma in Information Technology and/or Computer Science

– Experience: 4–10 years in a similar environment

Job Description:

We are seeking an experienced AWS Data Engineer to design, build, and operationalize large-scale enterprise data solutions and applications using AWS data and analytics services in combination with third-party tools. The ideal candidate will have advanced knowledge of SQL and expertise in big data technologies on AWS, including Athena, Lambda, CloudFormation, EMR, Spark/Storm, and Kafka/Elasticsearch.

Responsibilities:

1. Design, build, and operationalize large-scale enterprise data solutions and applications using AWS data and analytics services in combination with third-party tools: Glue, Step Functions, Kafka CC, PySpark, DynamoDB, Delta Lake (Delta.io), Redshift, Lambda, and Python.

2. Analyze, re-architect, and re-platform on-premises data warehouses to data platforms on the AWS cloud using AWS or third-party services and Kafka CC.

3. Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, PySpark, Scala, and Kafka CC.

4. Design and implement data engineering, ingestion, and curation functions on the AWS cloud using AWS-native services or custom programming.

5. Perform detailed assessments of current-state data platforms and create an appropriate transition path to the AWS cloud.

6. Design, implement, and support an analytical data infrastructure providing ad-hoc access to large datasets and computing power.

7. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL, AWS big data technologies, and Kafka CC.

8. Create and support real-time data pipelines built on AWS technologies, including Glue, Lambda, Step Functions, PySpark, Athena, and Kafka CC.

9. Continually research the latest big data and visualization technologies to provide new capabilities and increase efficiency.

10. Work closely with team members to drive real-time model implementations for monitoring and alerting of risk systems.

11. Collaborate with other tech teams to implement advanced analytics algorithms that exploit Liberty’s rich datasets for statistical analysis, prediction, clustering, and machine learning.

12. Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for Liberty customers.

Skills and Requirements:

– Proficiency in SQL (Oracle, Redshift, PostgreSQL)

– Experience with Java, PySpark, and Scala

– Familiarity with AWS data and analytics services

– Strong problem-solving and analytical skills

– Excellent communication and collaboration abilities
