They’re on a mission to build the best platform in the world for engineers to understand and scale their systems, applications, and teams. Operating at high scale, processing trillions of data points per day, they provide always-on alerting, metrics visualization, logs, and application tracing for tens of thousands of companies. Their engineering culture values pragmatism, honesty, and simplicity in solving hard problems the right way.
You Will
- Build distributed, high-volume data pipelines that power the core Datadog product
- Do it with Spark, Luigi, Kafka, and other open-source technologies
- Work all over the stack, moving fluidly between programming languages: Scala, Java, Python, Go, and more
- Join a tightly knit team solving hard problems the right way
- Own meaningful parts of their service, have an impact, grow with the company
Requirements
- You have a BS/MS/PhD in a scientific field or equivalent experience
- You have built and operated data pipelines for real customers in production systems
- You are fluent in several programming languages (JVM & otherwise)
- You enjoy wrangling huge amounts of data and exploring new data sets
- You value code simplicity and performance
- You want to work in a fast-moving, high-growth startup environment that respects its engineers and customers
Bonus Points
- You are deeply familiar with Spark and/or Hadoop
- In addition to data pipelines, you’re also proficient with Kubernetes and cloud technologies
- You’ve built applications that run on AWS
- You’ve built your own data pipelines from scratch, know what can go wrong, and have ideas about how to fix it
More Information
- Salary Offer: 0 ~ $3000
- Experience Level: Junior
- Total Years of Experience: 0-5