Job Title: Sr. Databricks Developer / Architect
Location: Remote
Type of hire: Full-time
Job Description:
- Work closely with the Architect and Lead to design solutions that meet functional and non-functional requirements.
- Participate in reviews of architecture and solution design artifacts.
- Evangelize reuse through the implementation of shared assets.
- Proactively implement engineering methodologies, standards, and leading practices.
- Provide insight and direction on roles and responsibilities required for solution operations.
- Identify, communicate, and mitigate risks, assumptions, issues, and decisions throughout the full lifecycle.
- Consider the art of the possible, compare solution options based on feasibility and impact, and propose actionable plans.
- Demonstrate strong analytical and technical problem-solving skills.
- Ability to analyze and operate at various levels of abstraction.
- Ability to balance what is strategically right with what is practically realistic.
Minimum Qualifications:
- Excellent technical skills enabling the creation of future-proof, complex, global solutions.
- Bachelor's degree (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience.
- Maintains close awareness of new and emerging technologies and their potential application for service offerings and products.
- Work with architects and leads on solutions that meet functional and non-functional requirements.
- Demonstrated knowledge of relevant industry trends and standards.
- Demonstrate strong analytical and technical problem-solving skills.
- Must have excellent coding skills in either Python or Scala, preferably Python.
- Must have at least 5 years of experience in the Data Engineering domain, with 7 years of total experience.
- Must have implemented at least 2 projects end-to-end in Databricks.
- Must have at least 2 years of experience with Databricks, including the following components:
  - Delta Lake
  - Databricks Connect (dbConnect)
  - Databricks REST API 2.0
  - Databricks Workflows orchestration
- Must be well versed with Databricks Lakehouse concept and its implementation in enterprise environments.
- Must have a strong understanding of data warehousing and the various governance and security standards around Databricks.
- Must have knowledge of cluster optimization and its integration with various cloud services.
- Must have a good understanding of how to build complex data pipelines.
- Must have good knowledge of data structures and algorithms.
- Must be strong in SQL and Spark SQL.
- Must have strong performance optimization skills to improve efficiency and reduce cost.
- Must have worked on both batch and streaming data pipelines.
- Must have extensive knowledge of the Spark and Hive data processing frameworks.
- Must have worked on a major cloud (Azure, AWS, or GCP) and its most common services, such as ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, and cloud databases.
- Must be strong in writing unit and integration tests.
- Must have strong communication skills and have worked on teams of 5 or more.
- Must have a great attitude toward learning new skills and upskilling existing ones.
- Good to have REST API knowledge.
- Good to have an understanding of cost distribution.
- Good to have knowledge of Databricks platform design.
- Good to have Unity Catalog and basic governance knowledge.
- Good to have an understanding of Databricks SQL endpoints.
- Good to have CI/CD experience building pipelines for Databricks jobs.
- Good to have experience on migration projects building a unified data platform.
- Good to have knowledge of dbt.
- Good to have knowledge of Docker and Kubernetes.