Insighture is a leading technology consultancy that drives digital transformation for businesses worldwide. With a team of over 85 expert consultants, the company delivers tailored, high-impact strategies and solutions, enabling scalable product engineering. As an AWS partner, Insighture excels in integrated cloud services. It has collaborated with more than 50 clients globally, guiding them through cloud adoption, DevOps transformation, enterprise modernisation, and more.
The team’s expertise spans Cloud-Native Development, Solutions Architecture, UI/UX, Quality Engineering, Data Engineering, AI/ML, and DevSecOps. These capabilities empower businesses to achieve impactful and innovative outcomes.
In 2024, Insighture achieved ISO certification and was recognised as a Great Place to Work, earning three prestigious awards: Best Workplace in Sri Lanka, Best Workplace for Technology, and Best Workplace for Young People. Insighture's technology and expertise are embedded in the work of internationally recognised care providers, global freight operations, child protection systems, and health tech platforms across Australia, the UK, and Singapore.
We are seeking a motivated and detail-oriented Senior Data Engineer for a 6-month contract to join our growing team.
Responsibilities:
- Design, build, and optimize robust ELT/ETL data pipelines and data sets.
- Architect scalable solutions to migrate and integrate data from large-scale enterprise data warehouses (e.g., Teradata) to GCP (BigQuery).
- Lead performance tuning initiatives for complex SQL queries and Python scripts.
- Optimize data processing and storage costs/performance on BigQuery and other large-scale data warehouses.
- Define and enforce engineering standards for data warehousing, SQL development, and CI/CD processes.
- Mentor junior engineers on best practices for cloud-native development.
- Architect and manage CI/CD pipelines using GitHub Enterprise and Cloud Build to automate deployment of data code (SQL, DBT models, Python).
- Develop and maintain complex workflows using Google Cloud Composer (Airflow) and/or DBT (Data Build Tool) to ensure reliable and timely data delivery.
- Leverage a variety of GCP services (Cloud Functions, Cloud Composer, Pub/Sub, Kubernetes) to build event-driven and serverless data architectures.
- Partner with Data Analysts and Business Intelligence teams to support data consumption in tools like Tableau (on-prem/SaaS) and ensure data models meet business requirements.
- Utilize shell scripting and Python/Java to support data operations and infrastructure tasks.
Qualifications:
- 5+ years of experience in Data Engineering, with a proven track record of delivering large-scale data solutions.
- 3+ years of hands-on experience as a GCP Data Engineer, specifically with BigQuery.
- 3+ years of experience working with Large-Scale Data Warehouses (e.g., Teradata, Netezza, Snowflake), including experience with migrations or interoperability with cloud platforms.
- Advanced SQL knowledge with demonstrable experience in performance tuning and query optimization.
- Strong proficiency in Python for data processing and performance optimization.
- Working knowledge of Java and shell scripting.
- Experience with Data Build Tool (DBT) for data transformation.
- Experience with workflow orchestration using Apache Airflow / Google Cloud Composer.
- Strong experience with CI/CD pipelines (Cloud Build) and version control systems (GitHub Enterprise).
- Solid understanding of the GCP data ecosystem including Cloud Composer, Cloud Functions, Pub/Sub, and Kubernetes.
- Familiarity with data visualization tools such as Tableau and how data engineers support them (performance, data modeling).
Preferred:
- Experience migrating large workloads from Teradata to BigQuery.
- GCP Professional Data Engineer Certification.
- Experience with streaming data and real-time processing.