The Data Services group is critical to Bloomberg's terminal and enterprise applications. They are responsible for the core data and analytics services that provide a single point of entry for applications to retrieve any kind of data available in Bloomberg. With more than 300 billion requests per day, it is imperative that they provide not only high-quality data but also strong metadata that captures context and relationships in the data.
Over the next year, they will be building out next-gen infrastructure for universal metadata to enable higher-order cross-asset analytics. To support this initiative, they are expanding the team across London and Frankfurt. These new teams will be responsible for the query, distribution, and discovery aspects of metadata. It is an exciting project with huge potential, and new team members will have the opportunity to contribute to and lead different aspects of the project.
You’ll need to have:
- Experience in modern C++, Java, and/or Python.
- A keen interest in working with low-latency and high-throughput services in a distributed environment.
- Strong problem-solving and communication skills, and the ability to thrive in a highly collaborative and dynamic work environment.
They’d love to see:
- Experience with caching in general or with multi-tiered caches.
- Experience with text search systems such as Apache Solr or Lucene.
What’s in it for you:
As a team, they always encourage and promote new ideas from team members and explore different open-source technologies. They hold hackathon sprints twice a year to try out new ideas, some of which have already made their way into the product. They also invest in continuous learning for their team members. If you're someone who is motivated and interested in solving challenging problems with a big impact across the company, this is a great team for you. You'll have the opportunity to take ownership, learn new technologies, and work with teams across the company to guide them in adopting new technology solutions.
About their team:
They own and manage low-latency, highly scalable infrastructure to store, query, distribute, and discover field metadata. To enable low-latency metadata lookups, they use multi-tiered caches in a highly distributed environment, along with the Apache Solr search engine to support complex searches. Metadata is distributed internally from publishers to these service platforms through a Kafka pipeline. The team also publishes linked metadata, i.e. the Bloomberg Vocabulary and Ontology, to external customers in machine-readable Semantic Web formats such as RDF. They also own external client-facing data discovery UIs, which are heavily used and rank among the top 50 terminal functions. Get to know some of their team members and what they enjoy about the team here: https://www.techatbloomberg.com/blog/meet-the-team-data-service-engineering/
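To give candidates a feel for the multi-tiered caching pattern the team works with, here is a minimal, generic sketch of a two-tier lookup: a small in-process LRU tier in front of a larger shared tier. This is an illustration only, not Bloomberg's implementation; the class, field names, and the dict standing in for a remote store are all hypothetical.

```python
from collections import OrderedDict

class TwoTierCache:
    """Illustrative two-tier lookup: a small, fast in-process LRU tier
    (tier 1) backed by a larger shared tier (tier 2). All names here are
    hypothetical and do not reflect any specific product's API."""

    def __init__(self, local_capacity, tier2):
        self.local = OrderedDict()           # tier 1: in-process LRU
        self.local_capacity = local_capacity
        self.tier2 = tier2                   # tier 2: stand-in for a shared store

    def get(self, key):
        # Tier-1 hit: refresh recency and return immediately.
        if key in self.local:
            self.local.move_to_end(key)
            return self.local[key]
        # Tier-1 miss: fall back to the shared tier, then promote the entry.
        value = self.tier2.get(key)
        if value is not None:
            self._promote(key, value)
        return value

    def _promote(self, key, value):
        self.local[key] = value
        self.local.move_to_end(key)
        if len(self.local) > self.local_capacity:
            self.local.popitem(last=False)   # evict least recently used

# Example: a plain dict stands in for the shared tier; the field names
# are made up for illustration.
shared = {"PX_LAST": "Last Price", "PX_BID": "Bid Price"}
cache = TwoTierCache(local_capacity=1, tier2=shared)
print(cache.get("PX_LAST"))  # tier-1 miss, served from the shared tier
print(cache.get("PX_LAST"))  # now served from the in-process tier
```

In practice the shared tier would be a networked cache and the local tier keeps hot metadata lookups off the wire; the promotion-on-miss step is what makes repeated lookups cheap.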
- Salary Offer: $0 ~ $3,000
- Experience Level: Junior
- Total Years' Experience: 0-5