Agero is powering the next generation of software-enabled driver safety services and technology, pushing the limits of big data to transform the entire driving experience. The majority of leading vehicle manufacturers and insurance providers use Agero’s roadside assistance, accident management, dispatch, consumer affairs and telematics innovations to strengthen their businesses and create stronger, lasting connections with their customers. Together, we’re making driving smarter and safer for everyone.
The Data Science and Analytics group at Agero is a central resource for innovative data products, scientific analysis, and actionable insights. We are a collaborative, consultative team that works cross-functionally to:
- support partners throughout the organization in making informed, data driven decisions,
- unlock the value within our data to create innovative new product offerings, drive efficiency, and improve customer experience, and
- provide greater access to information and insights through dashboards, data self-service tools, and training.
We believe that data is a key asset and that, thanks to Agero’s scale and history, it is a true competitive advantage.
About the Role:
Agero’s Data Science and Analytics team is developing a modern, cloud-based (AWS) data platform to support our analytical products and machine learning pipelines. Our team is expanding, and we are looking to add a self-motivated Data Engineer to help implement a curated, “single source of truth” data set that will feed a variety of downstream use cases including APIs, reports, dashboards, self-service tools, and modeling efforts. The platform will provide the Data Science and Analytics team, along with the broader organization, more secure, centralized, and reliable access to the company’s data. At the same time, it will reduce the effort required to ingest new sources of data or to build new ETL processes to support new modeling or reporting use cases.
Additionally, this individual will operate as a member of the team building out production machine learning pipelines that allow our data science team to test and deploy new models at scale. This framework will tightly integrate with the data platform described above. As part of this effort, the Data Engineer will be tasked with developing data processing pipelines to support various modeling and forecasting efforts, defining a process for our Data Scientists and Engineers to quickly and reliably deploy new models into production, and designing data models to expose model outputs to a variety of downstream use cases. Both efforts will leverage similar cloud technologies, and the data platform will be the “source” and “sink” of data to and from the ML pipeline(s).
KEY RESPONSIBILITIES INCLUDE:
- Participate in the development of requirements for a modern, cloud-based data platform
- Participate in the design of data models and data management strategies to support various analytical and modeling applications
- Participate in the development of robust and flexible ETL and data processing pipelines to support a variety of use cases
- Participate in the design and deployment of self-service tools, dashboards, and APIs leveraging the “single source of truth” data
ADDITIONAL RESPONSIBILITIES INCLUDE:
- Participate in the migration of existing products to the newly developed platform
- Help maintain existing data sources and add new ones
- Develop new curated data sets to support new analytical efforts
- Leveraging ML models, forecast key metrics at a regular cadence
- Develop APIs that expose data to other internal applications
- Create dashboards and visualizations
Skills, Experiences and Education:
- 4+ years of coding experience with Python and SQL
- Bachelor’s degree in Computer Science, Computer Engineering, or related field
- 2+ years of industry experience in a similar role
- Cloud computing experience (ideally AWS technologies including S3, Redshift, Lambda, Glue, and DynamoDB)
- Data management and processing, including experience with relational and non-relational data stores (NoSQL, S3, Hadoop, etc.)
- Good communication skills, both written (technical documents, Python notebooks) and spoken (meetings, presentations)
- Willing and able to learn new skills as business needs evolve
- Independent, self-organizing, and able to prioritize multiple complex assignments
- Experience using Git and working on shared code repositories
It would be great if you also had:
- Experience training and deploying machine learning models in a business context
- Experience with Tableau and data visualization
- Advanced degree in a related technical field
- Backend web (API) development