📄 About This Role
You’ll be working in a collaborative and practical environment that values clean, reliable solutions over unnecessary complexity. You will help design and maintain the pipelines and infrastructure that support analytics and reporting across the business.
What You’ll Be Doing
- Pipeline Development: Building and maintaining scalable ETL/ELT processes from multiple data sources.
- Data Architecture: Designing data models and warehouse structures to support reporting and analytics.
- Cloud Infrastructure: Working within [AWS / Azure], utilizing tools such as Databricks or Snowflake.
- Orchestration: Managing pipeline workflows using Airflow, Azure Data Factory, or dbt.
- Best Practices: Applying CI/CD and Infrastructure-as-Code (IaC) to data deployments.
- Collaboration: Partnering with analysts and stakeholders to ensure data reliability and governance.
- Mentorship: Supporting junior engineers and contributing to team standards.
✅ Requirements
- Experience: 3+ years of experience building production-grade data pipelines.
- Technical Skills: Strong proficiency in SQL and Python.
- Cloud Platforms: Hands-on experience with Azure, AWS, or GCP.
- Data Tooling: Familiarity with Databricks, Snowflake, or Azure Synapse.
- Engineering Principles: Solid understanding of dimensional modelling, Git, and CI/CD workflows.
- Communication: Ability to explain technical concepts to non-technical stakeholders.
⭐ Nice to Have
- Experience with Terraform or IaC tools.
- Exposure to streaming (Kafka, Spark Streaming).
- Familiarity with Docker or Kubernetes.
- Experience in regulated industries (Finance, Healthcare, Public Sector).
Interested but not quite ready?
If this role isn't quite right, get in touch anyway. Email us at team@factyze.com and tell us about yourself.