It’s an exciting time to join Cervest. Our inaugural product, EarthScan, will launch in 2021. EarthScan is an automated climate risk discovery tool that measures the impact of climate change on physical assets such as buildings.
Backed by leading VCs in Europe and the US, Cervest is a climate tech company building the world’s first open access AI-powered Climate Intelligence platform.
Our platform will allow organisations to understand the climate risk across all their assets for any timescale and any scenario, anywhere on Earth. Data engineering plays a crucial role in making this vision a reality. The team supports the acquisition, ingestion, processing and hosting of any data, and designs and delivers infrastructure to support efficiency and innovation at scale.
As a company, we are a pro-diversity, highly inclusive organisation, committed to bringing together people of all backgrounds and enabling them to succeed. We know that a richly diverse team will help us achieve our mission sooner.
We are looking for a Data Engineer with experience of working with large-scale, distributed computing to join the team and help us develop our data platform. The role offers a unique opportunity to join an exciting, early-stage, highly mission-driven team where you’ll have the ability to make a significant impact on our company and our users.
- Working closely with our scientists, product designers and other engineers to build the core components of our data platform that satisfy a set of cross-functional requirements
- Harmonising data from disparate sources
- Productionising statistical and ML models, including earth science and commercial risk assessment and aggregation models
- Developing ways of monitoring data reliability and quality, and tracking data provenance
- Working with senior leaders throughout Cervest to make sure that what we are building is best in class for what we are trying to achieve today, as well as 12 months from now
- Supporting the delivery of tactical requirements, both internal and external, as and when they arise
- Hands-on experience of designing and developing data engineering pipelines
- Experience using and configuring distributed computing frameworks like Spark (PySpark/Scala), ideally with geospatial raster data formats
- Experience working with datasets that are hundreds of terabytes in size
- Significant professional Python experience
- Knowledge of different data formats such as Parquet, Avro and Protobuf
- Knowledge of configuring clusters for distributed workloads
- Solid experience with, and comfort in, cloud deployment environments (AWS preferred)
- Experience deploying applications with Docker
Opportunities to learn, grow and thrive with support from talented and empathetic teammates
We are a remote-first company. We are looking for candidates who would be able to come to our office in London (once travel is sensible) a few times a year using more sustainable transport methods (we’ll help with that), so generally within one time zone of the UK.