IPC Technology Solutions

About the Role:

We’re looking for an experienced Data Engineer to join our Data & AI team. This is an exciting opportunity to be part of a high-impact team building world-class data solutions. If you’re passionate about technology and solving complex problems, and you have a strong background in designing, building, and maintaining scalable data pipelines and infrastructure, this role is for you.

You’ll work closely with data scientists, analysts, and other stakeholders to ensure the efficient flow and storage of data, enabling data-driven decision-making across the organization.

What You’ll Do:

  • Design and implement scalable, high-performance data pipelines to collect, process, and store large volumes of data. 
  • Act as the subject-matter expert in data lake, data warehousing, and integration strategies. 
  • Optimize and troubleshoot existing data pipelines for performance and reliability. 
  • Ensure data quality and integrity by implementing validation and cleansing processes. 
  • Deploy and manage data infrastructure using cloud platforms (e.g., AWS, Azure, GCP). 
  • Monitor and maintain data systems to ensure high availability and performance. 
  • Stay updated with the latest advancements in data engineering technologies and best practices. 

Who We’re Looking For:

  • A data-driven professional with hands-on experience building data lakes, warehouses, and pipelines. 
  • Someone with a strong background in ETL processes and cloud platforms (e.g., AWS, Azure Data Lake, Databricks). 
  • A collaborative team player who can work with data scientists and analysts to understand their needs and deliver solutions. 

Essential Skills: 

  • Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 
  • Experience: 5+ years in data engineering, with expertise in data lakes, SQL, and ETL tools. 
  • Technical Skills:  
    • Strong coding skills in Python, Java, or Scala. 
    • Proficiency in SQL and relational databases (e.g., MySQL, PostgreSQL). 
    • Expertise in big data technologies (e.g., Hadoop, Spark, Kafka). 
    • Hands-on experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., S3, Redshift, BigQuery). 
    • Familiarity with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery). 
    • Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica). 
    • Experience with Databricks for data processing and analytics. 
    • Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). 
    • Experience with version control systems (e.g., Git) and CI/CD pipelines. 

Nice-to-Haves:  

  • Experience in the financial technology space. 
  • Knowledge of real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink). 
  • Understanding of data governance and security best practices. 
  • Familiarity with machine learning pipelines and MLOps practices. 

Job Category: Engineering
Job Type: Remote
Job Location: Egypt
Years of Experience: 5+ years
Expected Salary: To be discussed after the interview

Apply for this position

Allowed Type(s): .pdf, .doc, .docx