GCP Data Engineer
Robotics Technology LLC, San Diego
Job Description: As a Senior Data Engineer, you will design and develop big data applications using the latest open-source technologies. Experience working in an offshore, managed-outcome model is desired. Develop logical and physical data models for big data platforms.
Automate workflows using Apache Airflow. Create data pipelines using Apache Hive, Apache Spark, and Apache Kafka. Provide ongoing maintenance and enhancements to existing systems, and participate in rotational on-call support. Learn our business domain and technology infrastructure quickly, and share your knowledge freely and actively with others on the team.
Mentor junior engineers on the team. Lead daily standups and design reviews. Groom and prioritize the backlog using JIRA. Act as the point of contact for your assigned business domain.
Requirements: 4+ years of recent GCP experience. Experience building data pipelines in GCP. Experience with GCP Dataproc, GCS, and BigQuery. 10+ years of hands-on experience developing data warehouse solutions and data products.
6+ years of hands-on experience developing a distributed data processing platform with Hadoop, Hive or Spark, and Airflow or another workflow orchestration solution. 5+ years of hands-on experience modeling and designing schemas for data lakes or RDBMS platforms.
Experience with programming languages: Python, Java, Scala, etc. Experience with scripting languages: Perl, Shell, etc. Experience working with, processing, and managing large data sets (multi-TB/PB scale). Exposure to test-driven development and automated testing frameworks.
Background in Scrum/Agile development methodologies. Capable of delivering on multiple competing priorities with little supervision. Excellent verbal and written communication skills. Bachelor's degree in Computer Science or equivalent experience.
The most successful candidates will also have experience with the following: Gitflow; Atlassian products (Bitbucket, JIRA, Confluence, etc.); and continuous integration tools such as Bamboo, Jenkins, or TFS.
We are an equal opportunity employer. All aspects of employment, including the decision to hire, promote, discipline, or discharge, will be based on merit, competence, performance, and business needs.
We do not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, citizenship/immigration status, veteran status, or any other status protected under federal, state, or local law.