Data Warehouse Engineer
Congregate Technologies
Compensation: 40 LPA (lakhs per annum)
Location: Hyderabad
Posted: January 02, 2026
Posted By: System Administrator
Job Description
What a Data Warehouse Engineer should accomplish
Design, standardize, and optimize data pipelines connecting all solution area source systems to the enterprise data warehouse. Implement automated, scalable Snowflake development workflows, and ensure the warehouse architecture fully supports advanced data science and analytics initiatives.
30 Days
• Document and assess all existing data pipelines; identify gaps and new pipeline requirements.
• Define a standardized framework for auditing, logging, and alerting across pipelines (a minimal sketch follows this list).
• Design a set of standard architectural patterns for the data warehouse.
• Review and evaluate the current data science landscape, tools, and strategy.
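A standardized auditing, logging, and alerting framework can start as a thin wrapper around each pipeline's entry point. The Python sketch below is a minimal illustration of that idea; the alert webhook, the run-log convention, and the pipeline name are hypothetical, not systems described in this posting.

```python
import functools
import json
import logging
import time
import urllib.request

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline_audit")

ALERT_WEBHOOK = "https://example.com/alerts"  # hypothetical on-call endpoint


def send_alert(payload: dict) -> None:
    """POST a JSON alert to the (hypothetical) on-call webhook."""
    req = urllib.request.Request(
        ALERT_WEBHOOK,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


def audited(pipeline_name: str):
    """Decorator: log start/finish, duration, and row count; alert on failure."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.time()
            log.info("start pipeline=%s", pipeline_name)
            try:
                rows = fn(*args, **kwargs)  # convention: loaders return row count
                log.info("ok pipeline=%s rows=%s secs=%.1f",
                         pipeline_name, rows, time.time() - start)
                return rows
            except Exception as exc:
                log.exception("fail pipeline=%s", pipeline_name)
                send_alert({"pipeline": pipeline_name, "error": str(exc)})
                raise
        return inner
    return wrap


@audited("orders_to_warehouse")  # illustrative pipeline name
def load_orders() -> int:
    # extract/transform/load logic would live here
    return 0
```

Recording the same events to an audit table (for example, a pipeline_runs table with pipeline_name, status, started_at, ended_at, and rows_loaded columns) gives later SLA checks and dashboards something concrete to query.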
90 Days
• Build or enhance pipelines to address identified needs.
• Collaborate with solution areas to define required data warehouse architecture.
• Apply the standardized auditing, logging, and alerting framework to all pipelines.
• Define and agree on SLAs for all data pipelines (an illustrative freshness check follows this list).
• Modify warehouse structures and pipelines to align with architectural patterns.
• Create formal documentation and timelines for the organization’s data science strategy.
• Verify that warehouse architectural patterns meet data science requirements.
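Pipeline SLAs only bite once an automated check stands behind them. Below is a minimal freshness-check sketch, assuming the hypothetical pipeline_runs audit table described after the 30-day list; every table, column, and pipeline name is illustrative.

```python
from datetime import datetime, timedelta, timezone

# Agreed SLAs: pipeline name -> maximum allowed lag (illustrative values).
SLAS = {
    "orders_to_warehouse": timedelta(hours=1),
    "crm_contacts": timedelta(hours=24),
}


def check_freshness(cursor) -> list[str]:
    """Return pipelines whose last successful run breaches their SLA.

    `cursor` is any DB-API cursor pointed at the warehouse holding the
    hypothetical pipeline_runs audit table.
    """
    cursor.execute(
        "SELECT pipeline_name, MAX(ended_at) FROM pipeline_runs "
        "WHERE status = 'ok' GROUP BY pipeline_name"
    )
    last_ok = dict(cursor.fetchall())
    now = datetime.now(timezone.utc)
    breaches = []
    for name, max_lag in SLAS.items():
        ended_at = last_ok.get(name)
        if ended_at is None or now - ended_at > max_lag:
            breaches.append(name)
    return breaches
```

Running this on a schedule and routing any breaches through the alerting framework closes the loop between the agreed SLAs and day-to-day operations.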
1 Year+
• Monitor and uphold SLAs for all pipelines.
• Develop dashboards and reports to track pipeline usage and performance (an example rollup query follows this list).
• Continuously optimize data transfer processes and warehouse resource allocation.
• Conduct regular audits to ensure adherence to architectural patterns.
• Continuously ensure warehouse architecture supports evolving data science needs.
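Usage and performance dashboards can be fed by a simple rollup over the same hypothetical pipeline_runs audit table. The query below uses Snowflake-flavored SQL (DATE_TRUNC, DATEDIFF); the table and column names remain illustrative.

```python
# Daily pipeline-health rollup; execute with any warehouse cursor, e.g.
# cursor.execute(DASHBOARD_ROLLUP).
DASHBOARD_ROLLUP = """
SELECT
    pipeline_name,
    DATE_TRUNC('day', ended_at)                    AS run_day,
    COUNT(*)                                       AS runs,
    SUM(CASE WHEN status = 'ok' THEN 0 ELSE 1 END) AS failures,
    AVG(DATEDIFF('second', started_at, ended_at))  AS avg_secs,
    SUM(rows_loaded)                               AS rows_loaded
FROM pipeline_runs
GROUP BY pipeline_name, run_day
ORDER BY run_day DESC, pipeline_name
"""
```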
What does a Data Warehouse Engineer do?
• Design, administer, develop, and deploy real-time, automated, and scalable data pipelines from multiple source systems into the enterprise data lake and/or data warehouse.
• Develop and implement auditing, logging, and data quality strategies to ensure reliability, accuracy, and consistency of large-scale data workflows (a basic validation sketch follows this list).
• Diagnose and resolve issues in high-volume data processing systems, ensuring minimal downtime and data integrity.
• Collaborate closely with technology teams, business stakeholders, and solution area partners to define data requirements and deliver secure, timely access to critical data.
• Optimize pipeline and query performance using profiling tools, SQL, Python, and other performance tuning methods.
• Translate operational and analytical requirements into effective data warehouse designs and solutions.
• Maintain in-depth knowledge of source systems and downstream consumers to champion data quality and usability across the organization.
• Participate in architecture reviews, contribute to best practice guidelines, and help establish data governance standards.
• Continuously research and adopt emerging technologies and approaches to improve warehouse performance and scalability.
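For the data quality strategies mentioned in this list, a hand-rolled post-load check is a reasonable starting point before adopting a dedicated framework such as Great Expectations or dbt tests. A minimal sketch; the cursor is any DB-API cursor, and the caller supplies (trusted) table and column names.

```python
def validate_load(cursor, table: str, min_rows: int, not_null: list[str]) -> list[str]:
    """Run basic post-load checks; return a list of human-readable failures.

    Identifiers are interpolated directly, so they must come from trusted
    configuration, never from user input (illustrative sketch only).
    """
    failures = []
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    rows = cursor.fetchone()[0]
    if rows < min_rows:
        failures.append(f"{table}: expected >= {min_rows} rows, got {rows}")
    for col in not_null:
        cursor.execute(f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL")
        nulls = cursor.fetchone()[0]
        if nulls:
            failures.append(f"{table}.{col}: {nulls} NULL values")
    return failures


# Example (hypothetical names):
# failures = validate_load(cur, "dim_orders", 1000, ["order_id", "customer_id"])
```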
What will I need to thrive in this role?
• Bachelor’s degree in Data Science, Analytics, Information Management, Computer Science, or Information Technology, or equivalent professional experience.
• 3+ years of experience working with SQL in production environments, with strong query writing and optimization skills (a representative window-function query follows this list).
• 3+ years of experience implementing modern data warehouse architectures in enterprise environments.
• 2+ years of hands-on experience with leading cloud data warehouses such as Snowflake, Redshift, or BigQuery, including familiarity with their standard architectural patterns.
• Proficiency in software engineering and scripting (e.g., Python, Bash, JavaScript) to automate and orchestrate workflows.
• Ability to communicate complex technical concepts to both technical and non-technical audiences.
• Strong understanding of data modeling, ETL/ELT design, and data lifecycle management.
• Proven ability to work with large-scale datasets from diverse data sources.
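As one concrete illustration of the query-writing skills listed above, the "latest record per key" pattern below deduplicates a mutable source table with a window function; raw.orders and its columns are hypothetical.

```python
# Keep only the most recent row per customer -- a staple of ELT staging layers.
LATEST_ORDER_PER_CUSTOMER = """
SELECT *
FROM (
    SELECT
        o.*,
        ROW_NUMBER() OVER (
            PARTITION BY customer_id
            ORDER BY updated_at DESC
        ) AS rn
    FROM raw.orders AS o
) AS t
WHERE t.rn = 1
"""
```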
What will give me an advantage in this role?
• Advanced SQL expertise, including stored procedures, analytic/window functions, and complex query optimization.
• Advanced Snowflake skills, including streams, tasks, stored procedures, and pipeline orchestration (a minimal streams-and-tasks sketch follows this list).
• Experience with cloud-native data services such as AWS SNS, SQS, SES, S3, Lambda, and Glue (or their equivalents in Azure/GCP).
• Familiarity with modern data platforms and orchestration tools such as Apache Airflow, Spark, Fivetran, Kafka, Cassandra, or Elasticsearch.
• Expertise in data quality frameworks and data governance best practices.
• Experience working with time-series databases for financial or operational analytics.
• Background in Big Data, non-relational databases, machine learning, and data mining techniques.
• Knowledge of security, compliance, and regulatory requirements related to enterprise data storage and usage.
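On the Snowflake streams-and-tasks item above: streams capture row-level changes on a table, and tasks run SQL on a schedule, so together they give incremental, scheduled processing without an external scheduler. A minimal sketch issued through the Python connector; account credentials, warehouse, and object names are placeholders, not this employer's actual setup.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder connection details -- substitute real values.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)

DDL = [
    # Stream: records inserts/updates/deletes on the raw table.
    "CREATE OR REPLACE STREAM orders_stream ON TABLE raw_orders",
    # Task: every 5 minutes, if the stream has data, fold it into the model.
    """
    CREATE OR REPLACE TASK merge_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      INSERT INTO dim_orders
      SELECT order_id, customer_id, amount, updated_at
      FROM orders_stream
    """,
    # Tasks are created suspended; RESUME starts the schedule.
    "ALTER TASK merge_orders RESUME",
]

cur = conn.cursor()
for stmt in DDL:
    cur.execute(stmt)
```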