Open Data Science job portal

Data Engineer, Corporate

Join SADA as a Data Engineer, Corporate!

Your Mission 

As a Data Engineer, Corporate at SADA, you will work with big data and emerging Google Cloud technologies to drive corporate services. You will design, develop, and maintain an Enterprise Data Warehouse solution that fits SADA's corporate needs, interacting with all of SADA's business units and with Google Cloud subject matter experts.

From business requirements and solution architecture to data modeling, ETL, metadata, and business continuity, you will work collaboratively with architects and other engineers to recommend, prototype, build, and debug data infrastructure on Google Cloud Platform (GCP). You will work on real-world data problems facing SADA's customers today. Engagements vary from purely consultative to heavily hands-on and cover a diverse array of domain areas, such as data migrations, data archival and disaster recovery, and big data analytics solutions requiring batch or streaming data pipelines, data lakes, and data warehouses.

You will be expected to run point on whole projects, end to end, and to mentor less experienced Data Engineers. You will be recognized as an expert within the team and will build a reputation with Google and with SADA's customers. You will repeatedly deliver project architectures and critical components that other engineers defer to you on for your expertise. You will also participate in early-stage opportunity qualification calls and guide client-facing technical discussions for established projects.

Pathway to Success 

#BeOneStepAhead: At SADA, they are in the business of change. They are focused on leading-edge technology that is ever-evolving. They embrace change enthusiastically and encourage agility. This means that not only do their engineers know that change is inevitable, but they embrace this change to continuously expand their skills, preparing for future customer needs. 

Your success starts by positively impacting the direction of a fast-growing practice with vision and passion. You will be measured quarterly by the breadth, magnitude, and quality of your contributions, your ability to estimate accurately, customer feedback at the close of projects, how well you collaborate with your peers, and the consultative polish you bring to customer interactions.  

As you continue to execute successfully, they will build a customized development plan together that takes you through the engineering or management growth tracks. 

Expectations

Internal Facing – You will interact with internal customers and stakeholders on a regular basis, sometimes daily, other times weekly or bi-weekly. You will be expected to capture requirements and deliver solutions suitable for corporate divisions.

Training – Ongoing with the first-week orientation at HQ followed by a 90-day onboarding schedule. Details of the timeline can be shared. Due to the COVID-19 pandemic, all onboarding will be temporarily conducted remotely.

Job Requirements

Required Qualifications: 

  • Mastery in the following domain area:
    • Data warehouse modernization: building complete data warehouse solutions on BigQuery, including technical architectures, star/snowflake schema designs, query optimization, ETL/ELT pipelines, and reporting/analytic tools. Must have hands-on experience working with batch or streaming data processing software (such as Beam, Airflow, Hadoop, Spark, Hive, etc.)
  • Proficiency in the following domain areas:
    • Big Data: managing Hadoop clusters (all included services), troubleshooting cluster operation issues, migrating Hadoop workloads, architecting solutions on Hadoop, experience with NoSQL data stores like Cassandra and HBase, building batch/streaming ETL pipelines with frameworks such as Spark, Spark Streaming, and Apache Beam, and working with messaging systems like Pub/Sub, Kafka, and RabbitMQ.
    • Data migration: migrating data stores to reliable and scalable cloud-based stores, including strategies for minimizing downtime. May involve conversion between relational and NoSQL data stores.
    • Backup, restore & disaster recovery: building production-grade data backup, restore, and disaster recovery solutions, up to petabytes in scale.
  • 4+ years of experience with data modeling, SQL, ETL, data warehousing, and data lakes
  • 4+ years of experience writing production-grade data solutions (relational and NoSQL) in an enterprise-class RDBMS
  • 2+ years of experience with enterprise-class Business Intelligence tools such as Looker, Power BI, Tableau, etc.
  • Experience writing software in one or more languages such as Python, Java, R, or Go
  • Experience with systems monitoring/alerting, capacity planning, and performance tuning

Useful Qualifications:

  • Experience working with Google Cloud data products (Cloud SQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc.)
  • Experience with IoT architectures and building real-time data streaming pipelines
  • Experience operationalizing machine learning models on large datasets
  • Demonstrated leadership and self-direction — the willingness to teach others and learn new techniques
  • Demonstrated skills in selecting the right statistical tools given a data analysis problem
  • Ability to balance and prioritize multiple conflicting requirements with high attention to detail
  • Excellent verbal/written communication & data presentation skills, including the ability to succinctly summarize key findings and effectively communicate with both business and technical teams

About SADA

Values:  They built their core values on themes that internally compel them to deliver their best to their partners, their customers, and to each other. Ensuring a diverse and inclusive workplace where they learn from each other is core to SADA’s values. They welcome people of different backgrounds, experiences, abilities, and perspectives. They are an equal opportunity employer.

  1. Make them rave
  2. Be data-driven
  3. Think one step ahead
  4. Drive purposeful impact
  5. Do the right thing

Work with the best: SADA has been the largest North American partner for the Google Cloud portfolio of products since 2016 and has been named the 2018, 2019, and 2020 Google Cloud Global Reseller Partner of the Year. SADA has also been awarded Best Place to Work year after year by the Business Intelligence Group, Inc. Magazine, as well as the LA Business Journal!

Benefits: Unlimited PTO, paid parental leave, competitive and attractive compensation, performance-based bonuses, paid holidays, rich medical, dental, and vision plans, life insurance, short- and long-term disability insurance, 401K/RRSP with match, and a professional development reimbursement program, as well as Google Certified training programs.

Business Performance: SADA has been named to the Inc. 5000 Fastest-Growing Private Companies list for 15 years in a row, garnering Honoree status. CRN has also named SADA to its Top 500 Global Solutions Providers list for the past 5 years. The overall culture continues to evolve with engineering at its core: 3,200+ projects completed, 4,000+ customers served, and 10K+ workloads and 30M+ users migrated to the cloud.

SADA is committed to the safety of its employees and recommends that new hires receive a COVID vaccination before beginning work.
