Required Experience: AWS, ETL, Data Modelling, Data Integration, Data Manipulation, GitHub, Azure DevOps, Databricks. Good to have: pytest.
We are seeking a highly skilled AWS Data Engineer to join our dynamic team. As an AWS Data Engineer, you will be responsible for designing, building, and maintaining data pipelines and ETL processes on the AWS platform. In addition to your technical expertise, we value individuals who are enthusiastic about contributing to community-building initiatives such as Centers of Excellence (CoE) and Communities of Practice (CoP).
Responsibilities:
- AWS Data Solutions Development: Design, develop, and implement robust data solutions on AWS, leveraging services such as Amazon Redshift, Amazon S3, AWS Glue, and AWS Lambda. Ensure the scalability, reliability, and performance of data pipelines and ETL processes.
- Data Modeling and Optimization: Develop and maintain data models, schemas, and data warehouses to support analytical and reporting needs. Optimize data structures and queries for efficient data retrieval and processing.
- Data Integration and Transformation: Implement data integration solutions to consolidate data from various sources, ensuring consistency and accuracy. Perform data manipulation and transformation as required to meet business requirements.
- Version Control and Collaboration: Use version control systems such as GitHub and collaboration tools such as Azure DevOps to manage the codebase and facilitate team collaboration. Contribute to code reviews and ensure adherence to coding standards and best practices.
- Cloud Data Platforms: Experience with cloud-based data platforms such as Databricks is highly desirable. Familiarity with Azure DevOps for CI/CD pipelines and automation tasks is a plus.
- Documentation and Knowledge Sharing: Document data engineering processes, workflows, and best practices. Contribute to knowledge-sharing initiatives such as CoE and CoP by sharing insights, presenting findings, and participating in discussions related to data engineering and AWS.
- Testing and Quality Assurance: Experience with testing frameworks such as pytest for implementing automated tests to ensure the reliability and quality of data solutions is good to have.
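The testing responsibility above can be illustrated with a minimal pytest sketch. The transformation function, field names, and test data here are hypothetical examples for illustration only, not part of any actual codebase for this role.

```python
import pytest


def normalize_prices(rows):
    """Hypothetical ETL transform: strip a leading currency symbol
    and cast price strings to floats. Raises ValueError on bad input."""
    return [{**row, "price": float(str(row["price"]).lstrip("$"))} for row in rows]


def test_normalize_prices_casts_to_float():
    # Prices arrive as strings, sometimes with a "$" prefix.
    rows = [{"sku": "A1", "price": "$19.99"}, {"sku": "B2", "price": "5"}]
    result = normalize_prices(rows)
    assert [r["price"] for r in result] == [19.99, 5.0]


def test_normalize_prices_rejects_bad_input():
    # Non-numeric prices should fail loudly rather than pass through silently.
    with pytest.raises(ValueError):
        normalize_prices([{"sku": "C3", "price": "free"}])
```

Tests like these are typically run with `pytest` in a CI/CD pipeline (e.g. via Azure DevOps) so that data-quality regressions are caught before deployment.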
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- years of experience as a Data Engineer with a strong focus on AWS technologies.
- Proficiency in ETL (Extract, Transform, Load) processes, data modeling, and data integration techniques.
- Hands-on experience with AWS services such as Amazon Redshift, Amazon S3, AWS Glue, AWS Lambda, etc.
- Experience with data manipulation using SQL and other relevant languages or tools.
- Familiarity with version control systems like GitHub and collaboration tools like Azure DevOps.
- Experience with cloud-based data platforms such as Databricks is a plus.
- Experience with testing frameworks such as pytest for implementing automated tests is good to have.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
Join us in our mission to harness the power of data on the AWS platform while also contributing to community-building initiatives that promote knowledge sharing and collaboration across the organization. If you are passionate about data engineering and AWS, and enjoy working in a collaborative and innovative environment, we encourage you to apply!
data modelling,etl,github,aws,data manipulation,pytest,data integration,azure devops,data modeling,databricks