Position: Sr. AWS Data Engineer (**C2H**)
Location: Remote (Scottsdale, AZ)
Duration: 4 months (C2H)
Experience: 14 years
Must-have skills: AWS Redshift, S3, Glue, SQL, Python programming & frameworks
Description:
We are looking for a talented and experienced Sr. Data Engineer with Healthcare IT experience to join our Data Operations team, where you will play a pivotal role in shaping the future of the data platform while thriving in a fast-paced environment that encourages personal growth and development.
Responsibilities:
- Work in the Data Operations squad to design, develop, and enhance the data and analytics solutions and pipelines that drive data-driven decision-making processes.
- Collaborate with scrum masters, product owners, and engineering teams to iteratively create data and analytics solutions that meet both business and technical requirements.
- Work closely with the development team to build innovative, API-first, cloud-native solutions on the AWS platform.
- Design, build, and optimize data storage solutions using AWS DynamoDB, S3, Lambda, Glue, Athena, Redshift, and other relevant technologies.
- Continuously enhance the full delivery pipeline through automation and expanded yet increasingly efficient test coverage, ultimately optimizing time-to-market and overall quality.
- Optimize and fine-tune existing data pipelines for performance, scalability, and reliability.
- Implement data quality checks and monitoring processes to ensure data accuracy and consistency.
- Stay current with industry trends and best practices in data engineering and recommend improvements to existing processes.
- Mentor junior engineers on the team, collaborate on technical documentation, participate in code reviews, and adhere to engineering excellence/best practices.
- Apply a strong understanding of serverless architecture to design and implement serverless data processing solutions.
- Apply a solid understanding of data modeling, ETL processes, and data warehousing concepts.
- Communicate effectively with both technical and non-technical stakeholders.
Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 9 years of experience in data engineering roles with a strong focus on Python application development and data processing.
- Proficiency in Python programming and experience with relevant frameworks (e.g., Pandas, NumPy).
- 3 years of experience automating DevOps builds using GitLab/Bitbucket/Jenkins/Maven.
- 5 years of experience with the AWS cloud and AWS services such as S3 buckets, Lambda, API Gateway, DynamoDB, SQS queues, and Redshift.
- 5 years of experience with batch job scheduling and identifying data/job dependencies.
- Understanding of Spark, Hive, Kafka, Kinesis, Spark Streaming, and Airflow.
- Experience in Software Engineering and Development.
- Understanding of database schema design.
- Proficient in at least one coding language (Python, Java, Scala).
- AWS certifications are a plus.