Data Engineer (Remote)

Work Type: Contract
The Opportunity

One of the biggest untapped markets in fintech is personal financial advice. It’s also one of the toughest domains to crack. If you’re looking to build a successful career and make an impact on the lives of many South Africans, this opportunity is for you.

What are we all about?

LifeCheq is a fintech company specializing in personal finance, operating throughout South Africa. Our distinct approach to financial advice and our market-leading advice platform have redefined the category. We’re gearing up to scale our operations significantly in an exciting new chapter.

Our technology touches consumers, financial advisors and large enterprise institutions. The vision for our advice platform is to become one of the most transformative technology platforms in South Africa’s financial services landscape. In this role, you will be a pivotal player at an important point in our journey.

We mainly use Clojure for our backend, supplemented by ClojureScript and TypeScript for our frontends. Our frontend technologies include React, Storybook.js, Webpack, and Apollo. Python is our choice for machine learning, and our data environment comprises Databricks on AWS along with some dedicated AWS infrastructure.

Our business is funded by reputable institutional investors including Futuregrowth, African Rainbow Capital and Naspers Foundry.

Your responsibilities

  • Develop and refine the data model for our data lakehouse on Databricks and AWS;
  • Maintain and improve our analytics data gateway REST API;
  • Work with the backend team to integrate new analytics data sources, ensuring efficient data ingestion and transformation through scalable ELT processes;
  • Define data quality metrics, automate data quality monitoring, and establish processes to address data quality issues in a timely manner;
  • Collaborate with machine learning engineers to understand data requirements and streamline data processing for machine learning tasks;
  • Partner with the reporting team to mature the data model of our reporting views and optimise query performance;
  • Develop and use monitoring tools to track the performance and reliability of data pipelines and infrastructure;
  • Foster best practices in data automation, code, and infrastructure management;
  • Contribute to our architecture and technology choices going forward.

What skills you’ll bring

  • A minimum of 2 years’ experience as a data engineer or in a similar role;
  • Strong skills in Python and SQL, with experience in writing production-level code;
  • Good knowledge of cloud engineering;
  • Experience in implementing best practices for automation and infrastructure as code.

What will set you apart from others

  • Professional experience with:
    • AWS;
    • Databricks;
    • Spark;
    • Terraform;
    • dbt;
  • Proven track record in designing, implementing, and maintaining data pipelines and data lakehouses;
  • Background in working with startups or small teams, managing end-to-end data engineering cycles;
  • Familiarity with supporting machine learning systems in production.

What it’s like to work here

You will collaborate with agile cross-functional squads, which include backend engineers, machine learning engineers, financial experts, and designers. Each squad has a clear focus on customer-centric goals and the autonomy to achieve them.

We take your professional growth seriously! We will work with you to map out your success plan and provide an opportunity to fast-track your career while making a genuine difference.

This is a remote contract position. You must be fluent in English and able to work core hours (10:00 to 16:00 GMT+2).

Submit Your Application
