Job description
At N3TWORK we collect and use hundreds of millions of events each day to understand the performance of our games, and we need you to build the tools our data science team uses to interpret that data. This data informs every aspect of our business, from user acquisition to game design to identifying new opportunities. You will work with Pandas, R, Hadoop, Spark, and custom tools to transform and process data at scale across the company.
Responsibilities
- Closely collaborate with data analysts to build and own scalable tools and pipelines that process large volumes of data to answer key business questions.
- Drive best practices for data use across the user acquisition and game teams.
- Support improvements to the data pipeline so that it continues to scale with growth.
- Write and maintain high-quality, readable code.
Requirements
- Exceptional talent and the ability to apply that talent within a world-class team.
- 2+ years of experience with Hadoop and Spark.
- 2+ years of experience with SQL.
- Experience with data analysis using Pandas or R.
- Deep knowledge of data structures, algorithms, and design patterns, and how to apply them to the problem at hand.
- Experience operating production services in AWS or another cloud environment.
- Ability to identify and correct performance bottlenecks on a live system.
- Solid understanding of Unix/Linux.
- Understanding of and interest in successful free-to-play game design and development.
- Curiosity that drives you to continually learn new things.
Bonus
- Experience with Google BigQuery.
- Experience with Jenkins.
Seniority Level
Mid-Senior level
Industry
- Computer Games
Employment Type
Full-time
Job Functions
- Engineering
More Information
- Salary Offer: $0–$3,000
- Experience Level: Junior
- Total Years of Experience: 0–5