Join our team!

We are looking for a Data Analyst to design and scale modern data pipelines—building dbt models, orchestrating Airflow/Cloud Composer, and optimizing Snowflake on GCP. You’ll integrate diverse sources, deliver reliable data marts and dimensional models, enforce data quality and SLAs, and drive performance, cost-efficiency, security, and documentation across the stack.


What is your mission? 

You will provide the best service to our partner brands by performing these tasks:

  • Design, build, and optimize ETL/ELT pipelines for reliability, performance, and cost-efficiency
  • Develop and maintain dbt models and documentation
  • Create and manage Airflow / Cloud Composer DAGs with robust scheduling and monitoring
  • Architect and implement scalable Snowflake solutions
  • Develop dimensional models and data marts
  • Implement FDW/federated connections for cross-database queries
  • Integrate data from APIs, databases, and files across multiple source systems
  • Deploy and manage data infrastructure on GCP
  • Implement and monitor data quality checks, SLAs, lineage, and observability
  • Maintain documentation, data dictionaries, and process guides
  • Conduct code reviews and collaborate cross-functionally


Who are we looking for?

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or related field (or equivalent practical experience)
  • 1–2+ years of professional experience in data engineering or analytics engineering
  • Advanced SQL including complex queries, optimization, indexing, CTEs, and window functions
  • Experience with PostgreSQL, MySQL, SQL Server
  • Hands-on experience with dbt for transformation and modeling
  • Strong experience with Snowflake including streams, tasks, optimization, and time travel
  • Familiarity with GCP services such as BigQuery, Cloud Storage, Cloud Functions, Cloud Composer, Dataflow, and Pub/Sub
  • Experience with Apache Airflow DAG development, workflow design, error handling, and monitoring
  • ETL/ELT design and implementation including incremental loads and CDC
  • Ability to handle structured and semi-structured data formats such as JSON, XML, and Parquet
  • Proficiency in Python (Pandas, NumPy, SQLAlchemy) and bash/shell scripting
  • Experience integrating REST APIs and using Git for version control
  • Familiarity with CI/CD for data pipelines
  • Knowledge of FDW/federated queries such as postgres_fdw
  • Understanding of data lineage, impact analysis, and data quality frameworks
  • Exposure to real-time streaming tools like Kafka or Pub/Sub
  • Experience with modern ingestion tools such as Fivetran, Stitch, or Airbyte
  • Knowledge of reverse ETL tools like Census or Hightouch
  • Familiarity with infrastructure-as-code tools such as Terraform and distributed computing frameworks like Spark or Beam
  • Awareness of data governance and privacy standards such as GDPR and CCPA
  • Certifications (nice-to-have): Snowflake SnowPro, Google Cloud Professional Data Engineer, dbt Analytics Engineering
  • Strong communication, documentation discipline, collaboration, problem-solving, and attention to detail

Company Perks

Free learning and development courses for your personal and career growth

Comprehensive HMO benefits and insurance since day 1

Dynamic company events

Above-industry salary package and incentives

Opportunities for promotion

Free meals and snacks

Our Values

Worldwide, we strongly uphold our values to be of service to our people, our clients, and our community.

WE PUT PEOPLE FIRST

We consider our people as the foundation of our success.

WE STRIVE FOR EXCELLENCE

Our commitment to quality ensures that we always do our best.

WE EMBRACE INNOVATION

We stay agile and fast, always looking for new ways to meet our clients’ needs.

WE DELIVER DELIGHT

We pride ourselves on helping our clients reach their full potential.

WE CREATE REAL IMPACT

We do things right and we get the job done.