We are looking for talented, hardworking, and results-oriented individuals to join our team to build the data foundations and tools that will shape the future of commerce and Apple Pay. You will design and implement scalable, extensible, and highly available data pipelines over large-volume data sets, enabling actionable insights & strategy for payment products. Our culture is about getting things done iteratively and rapidly, with open feedback and debate along the way; we believe analytics is a team sport, but we strive for independent decision-making and taking calculated risks. Our team collaborates deeply with partners across product, design, engineering, and business teams: our mission is to drive innovation by giving our business and data science partners outstanding systems for making decisions that improve the customer experience of our services. This will include working with large and complex data sources, helping derive actionable insights, delivering dynamic and intuitive decision tools, and bringing our data to life via compelling visualizations.
Working with the head of Wallet Payments & Commerce Data Engineering & BI, this person will collaborate with data analysts, instrumentation experts, and engineering teams to identify the requirements that drive the creation of data pipelines. You will work closely with the application server engineering team to understand the architecture and internal APIs involved in upcoming and ongoing Apple Pay projects. We are seeking an outstanding person to play a lead role in helping analysts & business users make decisions using data and visualizations. You will partner with key stakeholders across the engineering, analytics & business teams as you design and build query-friendly data structures.
The ideal candidate is a self-motivated teammate skilled in a broad set of Big Data processing techniques, able to adapt and learn quickly, deliver results with limited direction, and choose the best data processing solution for the problem at hand.
Key Qualifications
- 10+ years of experience with ETL, BI & Data Analytics
- 5+ years of professional experience with Big Data systems, data pipelines and data processing
- Practical hands-on experience with technologies like Apache Hadoop, Apache Pig, Apache Hive, Apache Sqoop & Apache Spark
- Ability to understand API Specs, identify relevant API calls, extract data and implement data pipelines & SQL friendly data structures
- Understanding of distributed file formats such as Apache Avro and Apache Parquet, and of common data transformation methods
- Expertise in Python, Unix shell scripting, and dependency-driven job schedulers
- Expertise in core Java, Oracle, Teradata, and ANSI SQL
- Familiarity with Apache Oozie and PySpark
- Knowledge of Scala and Splunk is a plus
- Familiarity with data visualization tools such as Tableau
- Familiarity with rule-based tools and APIs for multi-stage data correlation on large data sets
- Excellent time management skills to manage work to tight deadlines and handle the pressure of executive requests and product launches
- Experience with mentoring and leading data engineers
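To make the "SQL-friendly data structures" qualification concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table schema, field names, and sample events are hypothetical, not taken from any actual Apple Pay system:

```python
import sqlite3

def build_event_table(events):
    """Load raw event dicts into a SQL-queryable table and return
    per-event-type counts and totals. Schema is illustrative only."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE events (event_id TEXT PRIMARY KEY, "
        "event_type TEXT NOT NULL, amount_cents INTEGER)"
    )
    # Named-parameter substitution keeps the load step safe and readable.
    conn.executemany(
        "INSERT INTO events VALUES (:event_id, :event_type, :amount_cents)",
        events,
    )
    rows = conn.execute(
        "SELECT event_type, COUNT(*), SUM(amount_cents) "
        "FROM events GROUP BY event_type ORDER BY event_type"
    ).fetchall()
    conn.close()
    return rows

sample = [
    {"event_id": "e1", "event_type": "purchase", "amount_cents": 1250},
    {"event_id": "e2", "event_type": "refund", "amount_cents": -1250},
    {"event_id": "e3", "event_type": "purchase", "amount_cents": 400},
]
print(build_event_table(sample))  # → [('purchase', 2, 1650), ('refund', 1, -1250)]
```

The same pattern of turning semi-structured records into a typed, aggregable table is what tools like Hive and Spark SQL do at scale.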
Description
Translate requirements from the business team into data and engineering specifications
Build scalable data sets based on engineering specifications from the available raw data and derive business metrics/insights
Work with engineering and business stakeholders to define and implement the data engagement relationships required with partners
Identify server APIs that need to be instrumented for data reporting, and align server events for execution in established data pipelines
Analyze complex data sets, identify and formulate correlational rules between heterogeneous sources for effective analytics
Process, clean and validate the integrity of data used for analysis
Develop Python and Shell Scripts for data ingestion from external data sources for business insights
Work hand in hand with the DevOps team to develop monitoring and alerting scripts for the various data pipelines and jobs
Mentor a team of talented engineers
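As a sketch of the "process, clean and validate the integrity of data" responsibility above, the following standard-library Python example filters a newline-delimited JSON feed against a required schema; the field names and sample feed are hypothetical:

```python
import json

# Hypothetical required schema for an ingested transaction record.
REQUIRED_FIELDS = {"transaction_id", "merchant", "amount_cents"}

def validate_records(raw_lines):
    """Parse newline-delimited JSON records, keeping only those that
    carry every required field; return (valid_records, rejected_count)."""
    valid, rejected = [], 0
    for line in raw_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            rejected += 1  # malformed line: count it and move on
            continue
        if REQUIRED_FIELDS <= record.keys():
            valid.append(record)
        else:
            rejected += 1  # structurally valid JSON, but incomplete
    return valid, rejected

feed = [
    '{"transaction_id": "t1", "merchant": "m1", "amount_cents": 999}',
    '{"transaction_id": "t2", "merchant": "m2"}',
    'not json at all',
]
good, bad = validate_records(feed)
print(len(good), bad)  # → 1 2
```

In a production pipeline the rejected count would typically feed the monitoring and alerting scripts mentioned above, so silent data loss surfaces as an alert rather than a skewed metric.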
Education & Experience
Bachelor's degree, preferably in Computer Science, Information Technology, or EE, or equivalent industry experience