Senior Data Engineer

Grab
DKI Jakarta, Indonesia 🇮🇩
Grab is Southeast Asia’s leading superapp, offering a suite of services spanning deliveries, mobility, financial services, enterprise solutions, and more. Grabbers come from all over the world, and we are united by a common mission: to drive Southeast Asia forward by creating economic empowerment for everyone. At Grab, every Grabber is guided by The Grab Way, which sets out our mission and the operating principles for how we achieve it together. We call these principles the 4Hs:

• Heart: We work together as OneGrab to serve communities in Southeast Asia
• Hunger: We work to understand ground truths and drive improvements, big and small
• Honour: We keep our word and steward our resources wisely to build and sustain trust
• Humility: We are a constant work-in-progress, and we never stop learning to get better

About this position

As a Data Engineer on the Lending Data Engineering team, you will work closely with various stakeholders to understand business and data requirements, and you will be responsible for building and managing data assets using scalable big data technologies.

Responsibilities

• Develop and maintain scalable, reliable ETL pipelines and processes to ingest data from a large number and variety of data sources
• Develop a deep understanding of real-time data production and availability to inform real-time metric definitions, using tools like Amazon MSK or Kinesis Data Streams
• Implement and monitor data quality checks, and establish best practices for data governance, quality assurance, data cleansing, and ETL-related activities using AWS Glue DataBrew or similar tools
• Develop familiarity with the existing in-house data platform tools and use them efficiently to set up data pipelines
• Maintain and optimize the performance of our data analytics infrastructure to ensure accurate, reliable, and timely delivery of key insights for decision-making
• Design and deliver the next-generation data lifecycle management suite of tools and frameworks, including ingestion and consumption on top of the data lake, to support real-time, API-based, and serverless use cases, along with batch where relevant
• Build solutions leveraging AWS services such as Glue, Redshift, Athena, Lambda, S3, Step Functions, EMR, and Kinesis to enable efficient data processing and analytics

Requirements

• 5+ years of relevant experience in developing scalable, secure, distributed, fault-tolerant, resilient, and mission-critical data solutions